Providing access to Graphical User Interfaces (GUIs) for blind people is the most difficult challenge facing accessibility designers today. Traditional screen readers, residing in the target system, retrieve visual information from the operating system and transform it into alternative sound, speech, or tactile representations. These strategies are becoming ineffective, however, as their capabilities are outstripped by the growing complexity of both operating systems and the ways in which visual data can be presented on a screen.

Archimedes researchers have developed a new external screen access technique that breaks with tradition by performing all data retrieval and processing outside of the target system. A VisualTAP captures screen information from the target system, and a personal GUI accessor translates it into a form that is accessible to the user (see Figure 2).

The Archimedes solution places emphasis on active participation by blind users. Whereas traditional screen readers analyze the visual data and present users with highly structured results, the GUI accessor presents multiple overlapping representations of the visual data and leaves much of the extraction and organization of meaningful information to the innate filtering and associative capabilities of the user's brain. Figure 3 shows this process.
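The capture-and-accessor split described above can be illustrated with a minimal sketch. Every class, function, and representation name below is hypothetical, invented for illustration; the real VisualTAP and GUI accessor are hardware and software systems, not this toy pipeline. The sketch only shows the idea of one external capture stage feeding an accessor that emits several overlapping representations of the same screen data.

```python
# Hypothetical sketch of the external screen-access pipeline.
# Names (ScreenElement, capture_screen, accessor_representations)
# are illustrative assumptions, not Archimedes APIs.

from dataclasses import dataclass

@dataclass
class ScreenElement:
    """One visual item recovered from the target system's display."""
    text: str
    x: int
    y: int
    kind: str  # e.g. "menu", "button", "label"

def capture_screen():
    """Stand-in for the VisualTAP: returns raw visual data captured
    outside the target system (here, a fixed sample)."""
    return [
        ScreenElement("File", 0, 0, "menu"),
        ScreenElement("OK", 120, 300, "button"),
    ]

def accessor_representations(elements):
    """Stand-in for the GUI accessor: emits several overlapping
    representations of the same data, leaving interpretation
    and filtering to the user."""
    speech = " ".join(f"{e.kind} {e.text}" for e in elements)
    braille = [e.text for e in elements]              # line-by-line braille feed
    spatial = {(e.x, e.y): e.text for e in elements}  # positional/tactile map
    return {"speech": speech, "braille": braille, "spatial": spatial}

reps = accessor_representations(capture_screen())
print(reps["speech"])  # menu File button OK
```

The point of returning all three representations at once, rather than a single structured result, is that the user, not the software, chooses which channel to attend to.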
The hardware and software required for recovering and processing the visual image includes:
The growing use of multimodal interfaces is creating new problems for deaf computer users. In the past, speech and sound were used to augment text printed on the screen of a computer. Now, they are being used in place of the text, particularly in low-bandwidth interfaces to the Internet such as PDAs and telephones. Archimedes researchers are investigating visual alternatives to spoken messages. A deaf accessor translates spoken or printed text into American Sign Language (ASL).
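At its simplest, a deaf accessor of the kind described above must map incoming text to a sequence of sign animations. The toy lookup below is purely illustrative: the gloss table and animation file names are invented, and real ASL translation requires grammatical restructuring, not word-for-word substitution.

```python
# Hypothetical text-to-sign mapping for a deaf accessor.
# The gloss table and animation identifiers are invented for
# illustration; real ASL output requires full grammatical translation.

GLOSS_TO_ANIMATION = {
    "HELLO": "anim_hello",
    "YOU": "anim_you",
    "NAME": "anim_name",
}

def glosses_to_animations(glosses):
    """Map a sequence of ASL glosses to animation clips; a real
    system would fingerspell signs it has no animation for."""
    return [GLOSS_TO_ANIMATION.get(g, f"fingerspell:{g}") for g in glosses]

print(glosses_to_animations(["HELLO", "YOU", "NAME"]))
```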
Initial experiments used VRML to create and animate 3D models of hands, faces, and torsos to produce sign language. This approach was abandoned because of disappointing results and high overheads in both the authoring and presentation phases. Animations were slow and jerky, and inaccuracies in the rendering of secondary features such as shadows proved distracting to users. The 3D images also degraded rapidly when viewed at anything other than the original size and resolution.
Current research is focused on 2D animations using Macromedia Flash. A professional animation artist is creating the basic images required to create representations of a broad range of ASL (see Figure 4). The approach we have adopted eliminates unnecessary anatomical details and uses traditional animation techniques to portray movements and relative positions. Flash enables us to create real-time ASL images that can be accurately scaled for presentation on cellular phones, PDAs, computer screens, or projection screens in auditoriums.
We are frequently asked why it is necessary to bother with animated sign language instead of simply printing text messages. The main reason is that English (or another printed language) is a second language for most deaf people, and reading it quickly enough to keep up with normal conversation is difficult.
An interesting and potentially important side effect of the ASL accessor development is that the same techniques can be used to improve communication between any people who do not share a common language.
-- N.S.