Caroline Jay, Andrew Brown, Simon Harper
School of Computer Science, University of Manchester, Kilburn Building, Manchester M13 9PL, UK.
Disabil Rehabil Assist Technol. 2011;6(2):97-107. doi: 10.3109/17483107.2010.496523. Epub 2010 Jun 25.
On simple Web pages, the text-to-speech translation provided by a screen reader works relatively well. This is not the case for more sophisticated 'Web 2.0' pages, in which many interactive visual features, such as tickers, tabs, auto-suggest lists, calendars and slideshows, currently remain inaccessible. Determining how to present these features in audio is challenging in general, but may be particularly so for certain groups, such as people with congenital or early-onset blindness, who are not necessarily familiar with the visual interaction metaphors involved. This article describes an evaluation of an audio Web browser designed using a novel approach, whereby visual content is translated to audio using algorithms derived from observing how sighted users interact with it. Both quantitative and qualitative measures showed that all participants, irrespective of the onset of their visual impairment, preferred the visual interaction-based audio mappings. Participants liked the fact that the mappings made the dynamic content truly accessible, rather than merely available to those who could find it, as is presently the case. The results indicate that this 'visual-centred' mapping approach may prove to be a suitable technique for translating complex visual content to audio, even for users with early-onset visual disabilities.
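To make the idea of a visual interaction-based audio mapping concrete, the following minimal sketch (TypeScript, using the standard MutationObserver and Web Speech browser APIs) announces new items in a ticker region as they appear visually. It is an illustration only, not the authors' algorithm: the element id "news-ticker" and the announcement wording are assumptions, and the paper's actual mappings, derived from observing sighted users, are not specified in this abstract.

// Minimal illustrative sketch (not the paper's algorithm): surface new
// ticker items in audio as they appear visually, using standard browser APIs.
// The element id "news-ticker" is a hypothetical placeholder.
const ticker = document.getElementById("news-ticker");

if (ticker !== null) {
  const observer = new MutationObserver((mutations: MutationRecord[]) => {
    for (const mutation of mutations) {
      // Each added node corresponds to a ticker update a sighted user
      // would notice; speak it so the audio user receives the same event.
      mutation.addedNodes.forEach((node) => {
        const text = node.textContent?.trim();
        if (text) {
          window.speechSynthesis.speak(
            new SpeechSynthesisUtterance("Update: " + text)
          );
        }
      });
    }
  });
  // Watch for child insertions anywhere inside the ticker region.
  observer.observe(ticker, { childList: true, subtree: true });
}

In this sketch the audio event is driven by the visual change itself, so the update is pushed to the listener rather than left for them to discover, which is the distinction the abstract draws between content that is truly accessible and content that is merely available.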