Sonify videos or images to provide “synthetic vision” using audio
This experimental feature would provide “synthetic vision” using audio by processing an image or video into an audio equivalent: brightness is represented by loudness, vertical position by pitch, and so on. Although research is being done in this area, this feature currently requires too much training to be widely usable by people who are blind.
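As a rough illustration of the mapping described above, the sketch below renders a single column of a grayscale image as a mix of sine tones: each pixel row gets its own pitch (higher rows sound higher) and the pixel's brightness controls that tone's loudness. The mapping, frequency range, and function name are illustrative assumptions, not a description of any existing tool; a real sonifier would scan the image column by column over time.

```python
import math

def sonify_column(column, duration=0.1, sample_rate=8000,
                  f_low=200.0, f_high=2000.0):
    """Render one grayscale image column (values 0-255) as audio samples.

    Hypothetical mapping for illustration: each pixel row is a sine
    tone; row height sets pitch, brightness sets loudness.
    """
    n_rows = len(column)
    n_samples = int(duration * sample_rate)
    # Row 0 is the top of the image, so assign it the highest pitch.
    freqs = [f_high - (f_high - f_low) * r / max(n_rows - 1, 1)
             for r in range(n_rows)]
    samples = []
    for t in range(n_samples):
        # Sum the tones, weighting each by its pixel's brightness.
        s = sum((column[r] / 255.0)
                * math.sin(2 * math.pi * freqs[r] * t / sample_rate)
                for r in range(n_rows))
        samples.append(s / n_rows)  # keep the mix roughly in [-1, 1]
    return samples

# A bright pixel at the top of a 4-row column produces a high-pitched
# tone; an all-dark column produces silence.
bright_top = sonify_column([255, 0, 0, 0])
silence = sonify_column([0, 0, 0, 0])
```

A full implementation would sweep across all columns left to right, so that horizontal position is encoded as time, which is the approach taken by Meijer's auditory image representation cited below.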
Discussion by Disabilities
For users who are able to learn and understand the audio equivalent of images, this can increase awareness of surroundings. However, the learning curve is currently too great for any such tools to be widely accepted.
Related Research and Papers
- A Framework For Designing Image Sonification Methods – Stanford – Woon Seung Yeo and Jonathan Berger (2005)
- An Approach for Image Sonification – Suresh Matta, Dinesh K Kumar, Xinghuo Yu, Mark Burry (2004)
- An Experimental System for Auditory Image Representations – Peter B. L. Meijer (1992)