Innovations In User Interfaces

Dr. Dobb's Special Report December 2000

We may all benefit from new UIs for the disabled

By Neil Scott


Neil is leader and chief engineer of the Archimedes Project at Stanford University's Center for the Study of Language and Information (CSLI). He can be reached at ngscott@arch.stanford.edu.

My friend and former colleague J.B. has just graduated with an MBA and joined a new startup that does business consulting over the Web. When he came to work for me as an intern with the Archimedes Project seven years ago, he was a senior in the Psychology Department at Stanford University with more than a passing interest in the technologies we were developing to make computers accessible to people with disabilities. At the age of 17, he dived into a swimming pool and struck his head on the bottom. While many of us have done the same thing and nursed a bump on the head for a few days, J.B. struck the bottom at an angle that caused his spinal column to snap at the base of his neck. In an instant, he had become a quadriplegic and his life headed in a very different direction.

When he wheeled into my office seven years later to apply for an intern position, he had graduated from high school, gained admission to Stanford, and was well on the way to completing his degree. You may wonder how a person, totally paralyzed from the neck down, could keep up with the amount of writing necessary to pursue a degree at Stanford. The answer, of course, is that J.B. used a computer. At the time he was injured, however, there were few practical options for hands-free access to a computer. The most widely used technique, gripping a "mouth-stick" with the teeth and moving the head to press the keys on a keyboard or push a mouse around, was not practical for J.B. due to his limited head movement. Speech recognition had not yet become a viable alternative. His solution was to use an ultrasonic head tracker that enabled him to point to anything on the screen of his Macintosh using relatively small head movements. He clicked the mouse button by puffing on a tube, about the size of a drinking straw, connected to a small pressure-sensing switch. With this system, he entered text, one letter at a time, by pointing to the desired key on an image of a keyboard superimposed on the screen and giving a little puff. This arrangement let him perform all standard keyboard and mouse operations, but it was quite slow and tiring to use.

When he joined our project, we used the Archimedes Total Access System (see Figure 1) to replace J.B.'s puff switch and on-screen keyboard with a speech-recognition system. This combination proved to be extremely effective and enabled him to use his Macintosh as quickly as, if not more quickly than, most nondisabled users. J.B. stayed with us for five years, two as an intern and three as a full-time member. During this time, we rarely dwelled on the fact that he was paralyzed. He was a fully functioning member of the project because his access tools let him use the same computers and software as everyone else.

The Archimedes Project

The Archimedes Project was established at Stanford's Center for the Study of Language and Information in 1992. The mission is to ensure that all people can fully participate in the global information society regardless of individual needs, abilities, preferences, and culture. We were concerned that the emerging Internet and World Wide Web might lead to a repetition of earlier mistakes that had disenfranchised certain groups of people. For instance, when the Graphical User Interface (GUI) was introduced, developers assumed that blind people wouldn't use such an interface. What they didn't foresee was the time when text-based interfaces would disappear and blind people would have no option.

The inspiration for using the name Archimedes for our project came from the Greek philosopher who is reputed to have said, "Give me a place to stand and a lever long enough and I will move the world." We promote the idea that appropriate technology can provide the leverage for each person with a disability to move his or her world. The leverage for J.B., for instance, was the ability to combine the very latest speech-recognition technology with the head-pointing technology he had been using for several years. In his case, the performance of the sum was definitely greater than that of the individual parts.

The Challenge of the GUI

At the time we began the Archimedes Project, we quickly recognized that disabled people had little choice about the type of computer they could use if they wanted to work in a particular organization. Basically, they had to be able to use the same type of computer and software as everyone else. There were exceptions to this, of course, but the ongoing burden of supporting a nonstandard computer was, and still is, a strong disincentive for employing individuals with disabilities. This posed a serious problem for disabled people because most of the initial computer-access technologies were based on the Apple II family which, while common in schools, was not at all common in commercial situations.

Traditional methods for making computers accessible require alterations to the operating system, the applications software, or both, in the computer that is to be accessed. This was a relatively straightforward approach in the days of text-based interfaces such as DOS. The situation changed dramatically, however, with the advent of GUI environments such as Windows and the Mac OS. The complexity and abstractions of the underlying software in a GUI make it very difficult to perform the modifications required to give disabled users sufficient and stable control of their working environment. For example, screen-reading programs that recover text from a screen for blind computer users are relatively easy to implement in text-based environments such as DOS. The one-to-one correlation between the character codes stored in the screen buffer and the characters displayed on the screen makes it straightforward to transform screen data into synthesized speech or Braille. In sharp contrast, GUI screen-reading programs are considered to be among the most difficult software to write and maintain. A GUI screen buffer contains a bitmap of the screen image, but the relationships between the text or graphical elements displayed on the screen and the underlying information are hidden by the operating system. The situation is further complicated when different parts of the bitmap belong to totally different applications. GUI screen readers must work intimately with the operating system to build an "off-screen model" that mirrors the GUI while presenting the data in an accessible form. Access to the GUI is a serious and ongoing problem for blind computer users, and the problems are becoming more intractable with each new version of the operating system.
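
The one-to-one mapping that made text-mode screen reading tractable can be sketched in a few lines. This is an illustrative model only, not code from any actual screen reader: a DOS-style display stores one character code per screen cell in a flat buffer, so recovering a line of text is a direct copy, with no off-screen model required.

```python
# Illustrative model: why text-mode screen reading was simple.
# A DOS-style screen buffer holds one character per cell, so a
# screen reader recovers a line of text by slicing the buffer.

COLS, ROWS = 80, 25

def make_buffer(text_lines):
    """Build a flat character buffer like a text-mode display's."""
    buf = [' '] * (COLS * ROWS)
    for row, line in enumerate(text_lines[:ROWS]):
        for col, ch in enumerate(line[:COLS]):
            buf[row * COLS + col] = ch
    return buf

def read_line(buf, row):
    """What a DOS screen reader did: slice the buffer and speak it."""
    return ''.join(buf[row * COLS:(row + 1) * COLS]).rstrip()

screen = make_buffer(["C:\\> dir", "README   TXT"])
print(read_line(screen, 0))   # -> C:\> dir
```

A GUI offers no such shortcut: the equivalent buffer holds only pixels, which is why the off-screen model described above has to be rebuilt from operating-system internals.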

Access for the Deaf

Cataloging all of the problems faced by each disability group would take more space than we have available for this article, so I will share just one more pressing concern. Deaf computer users have fared quite well up until now because important information was almost always presented visually (even when other modes were employed). This situation is changing quite quickly, however, with the growth of animated graphics and speech-only access to information via multimodal computer interfaces and handheld devices such as PDAs and cell phones. The deaf community is extremely concerned that it will suffer the same fate as the blind community and lose access to significant sources of information.

There has been an ongoing debate for more than a decade about who is responsible for making computers accessible. One side argues for internal access in which hardware and software manufacturers must provide everything needed for accessibility within their products. The other side argues for external access in which disabled users are responsible for providing their own accommodations.

Programming and Accessibility Requirements

Internal access requires very close collaboration among the various groups responsible for designing operating systems, applications software, and accessibility programs. Microsoft, IBM, and Sun Microsystems each made a strong commitment to providing accessible products and programming environments. Their programming tools, for example, provide extensive support for programmers who wish to create accessible software products. In spite of this, inaccessible products are far too common. There are a variety of reasons for this. The inclusion of accessibility features may incur programming overheads that lower the overall performance of the program for all users; and catering to different disabilities may require solutions that are mutually incompatible. The pressures of completing a software product within an allocated budget or bringing it to market before a competitor often lead to accessibility functions being ignored totally or put off until a later revision.

External access can be likened to wearing spectacles, where the user is responsible for obtaining the correct prescription and using them whenever necessary. While it is possible to create access solutions that function entirely outside of the target computer, the Archimedes Project promotes a compromise solution in which accessibility is built into a product only when it makes functional and economic sense to do so. In all other situations, appropriate external access devices are connected when they are needed to provide accessibility. The Archimedes Total Access System (TAS; see Figure 1) implements this model by clearly separating accessibility functions from the computer that is being accessed (the target computer). The accessor provides a personal user interface closely matched to the needs, abilities, and preferences of the user. The Total Access Port (TAP) emulates the functions of the keyboard, mouse, and screen on the target computer without interfering with their normal operation. A standardized communication protocol enables any accessor to interact with any TAP. This approach greatly simplifies the task of providing access. Each user can have an accessor that is precisely matched to his or her individual requirements, without concern for the type of computer that is to be accessed. Any target computer can be made accessible without prior knowledge about potential users. Disability specialists no longer need to be concerned about computer hardware and software issues, and computer designers can provide a single TAS interface that accommodates all disabilities.
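
The key idea, a standardized protocol decoupling accessors from TAPs, can be sketched as follows. The class and event names here are hypothetical, invented for illustration; they are not the actual TAS protocol. The point is that the accessor knows only the abstract protocol, while each TAP knows only its own target machine.

```python
# Hedged sketch of the accessor/TAP split (hypothetical names, not
# the real TAS protocol). Any accessor can drive any TAP because
# both sides speak only abstract (kind, value) input events.

class SpeechAccessor:
    """Turns a recognized utterance into protocol-level events."""
    def events_for(self, utterance):
        if utterance == "press enter":
            return [("KEY", "ENTER")]
        return [("KEY", ch) for ch in utterance]  # dictated text

class MacTAP:
    """Emulates a Macintosh keyboard from protocol events."""
    def __init__(self):
        self.typed = []
    def apply(self, event):
        kind, value = event
        if kind == "KEY":
            self.typed.append("\n" if value == "ENTER" else value)

accessor, tap = SpeechAccessor(), MacTAP()
for ev in accessor.events_for("hi"):
    tap.apply(ev)
print("".join(tap.typed))  # -> hi
```

Swapping in a head-tracking accessor, or a TAP for a PC or an industrial network, changes only one side of the protocol, which is what lets each user's interface travel with them from machine to machine.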

The Archimedes Project has created TAPs for standard computer platforms such as IBM PC, Sun, SGI, and Macintosh; for home networks controlling lights, doors, windows, and appliances; and industrial networks controlling machines and computer peripherals. Accessors have been developed for many types of disabilities using special keyboards and switches, speech recognition, head tracking, eye tracking, Braille, haptics, and animated graphics. The basic TAS technology has been licensed to a company specializing in accessibility products and new TAPs and accessors are being developed as needed. The Archimedes Project is now undertaking a broad range of studies to confirm the validity and efficacy of the TAS concept and to determine further needs of users.

Regardless of whether access is provided internally or externally, the effectiveness of any solution depends greatly on how well the operating system and applications programs are written, and whether standard input and output functions are used in a consistent manner. All programmers must share the responsibility for creating and maintaining accessible user environments. The tools in popular programming environments now provide hooks that enable accessibility programs, such as screen readers, to identify and access internal data structures. But these can be effective only if programmers actually use them properly in their code. This is particularly important for web-based software because of the wide range of material a disabled person may encounter, often for only a short amount of time, which precludes spending the time necessary to customize access tools for individual web pages. Extensive resources and guidelines for writing accessible code and web pages are available from web sites run by Microsoft, IBM, Sun, W3C, WAI, and the Trace Center. The accessibility of web pages can be checked at a site called "Bobby" (http://www.cast.org/bobby/).
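
The dependence on programmers actually using the hooks can be made concrete with a toy model. The names below are hypothetical, not any real toolkit's API; the point is simply that an access tool can announce only what the programmer bothered to expose.

```python
# Illustrative only: hypothetical hook names, not a real toolkit's
# API. A screen reader can surface a control's purpose only if the
# programmer filled in the accessibility hook.

class Widget:
    def __init__(self, accessible_name=None):
        self.accessible_name = accessible_name

def screen_reader_label(widget):
    """What an access tool can announce for a control."""
    return widget.accessible_name or "unlabeled control"

good = Widget(accessible_name="Search")
bad = Widget()  # programmer skipped the hook
print(screen_reader_label(good))  # -> Search
print(screen_reader_label(bad))   # -> unlabeled control
```

The same principle applies to the web: an image with meaningful alternative text is usable through a speech or Braille interface, while one without it is just "unlabeled".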

Legal requirements for accessibility have been in place in the U.S. for more than a decade, but until recently, these laws had few teeth. Inaccessible solutions were tolerated if the vendor could show that including accessibility would be unreasonably difficult or expensive. Recent changes to the rehabilitation laws are designed to remove the loopholes. The Americans with Disabilities Act, and Section 508 of the Rehabilitation Act, as amended in 1998, heighten the importance of disability awareness for IT professionals, manufacturers, and retailers at all phases of the design, manufacture, procurement, and support of electronic and information technology. Section 508 directly impacts companies that lease or sell IT equipment to the Federal Government. As of August 2000, products failing to meet the requirements of Section 508 are no longer accepted.

Many accessibility requirements can be met by simply following good design practices. Keyboard operation of all available functions is one of the most important aspects of accessible design, since the many alternative input devices available to people with disabilities typically work by emulating the keyboard. Interface design should adhere to established conventions, such as using the Tab key to move from one field to another and Control-Tab to move from one pane or section to another, so that access software can correctly interpret what a user is doing. Programmers who adopt nonstandard coding practices that result in code that can't be recognized by accessibility tools cause most of the access problems experienced by disabled computer users. A sloppily designed GUI, for example, can strand a keyboard-only user on a screen that can be exited only by clicking a mouse button. Macros are used extensively in accessibility programs for repetitive or complex tasks, but they are easily broken when interfaces are used inconsistently.
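
The "keyboard trap" problem comes down to the focus ring. A minimal sketch, with hypothetical names: if every control, including the one that dismisses the dialog, is reachable in the Tab order, a keyboard-only user can never be stranded.

```python
# Illustrative model of Tab-order navigation (hypothetical names).
# Because the focus ring wraps and includes every control, a
# keyboard-only user can always reach a way out of the dialog.

class Dialog:
    def __init__(self, controls):
        self.controls = controls      # Tab order == list order
        self.focus = 0                # focus starts on first control
    def tab(self):
        self.focus = (self.focus + 1) % len(self.controls)
        return self.controls[self.focus]

d = Dialog(["Name field", "OK button", "Cancel button"])
print(d.tab())  # -> OK button
print(d.tab())  # -> Cancel button
print(d.tab())  # -> Name field  (wraps; no control is unreachable)
```

A control that can be reached only with the mouse is the equivalent of leaving it out of this list, which is exactly the trap described above.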

Resources

	The Archimedes Project
	  Neil Scott (ngscott@arch.stanford.edu)
	  http://www-csli.stanford.edu/arch/
	Trace R & D Center
	  Gregg Vanderheiden (gv@trace.wisc.edu)
	  http://trace.wisc.edu/
	Sun Microsystems
	  Sun Microsystems Accessibility team: Earl Johnson,
	  Peter Korn, Lynn Monsanto, and Jeff Dunn (access@sun.com)
	  http://www.sun.com/access/
	World Wide Web Consortium (W3C)
	  http://www.w3.org/TR/WAI-WEBCONTENT/
	Microsoft
	  http://www.microsoft.com/enable/dev/
	IBM Special Needs Systems Site
	  http://www.austin.ibm.com/sns/
	CAST: HTML Compliance Testing
	  http://www.cast.org/bobby/
	The HTML Writers Guild Site "AWARE"
	  http://www.hwg.org/resources/accessibility/
	The Alliance for Technology Access
	  http://www.ataccess.org/html

Table 2.

It is not acceptable for designers to assume that a certain disability group won't have any interest in accessing a particular type of interface. For instance, when the designers of the original GUI were asked how blind people would access the system, their response was, "What a crazy question. Why would blind people want to use a graphical interface?" What they didn't foresee was that within a short time the GUI would be the only option available to blind users. Making GUI systems accessible to blind people has been a long and arduous process, but the lessons learned are providing the basis for speech-only interfaces to the Internet that can be used by anyone.

So the next time you are designing a user interface, think about users who can't easily use the keyboard or mouse or read the screen. Each time you introduce a user-accessible feature, run through a short checklist in your mind asking yourself whether a blind, deaf, physically disabled, or cognitively impaired person will be able to use the feature (see Table 1). The bonus for doing this is that you will design interfaces that are better for everyone. Providing redundant options for performing key tasks gives all users the option to choose what works best for them in any particular situation. Over the years, we have seen many examples where features that were added as necessities for people with disabilities have become features of convenience for everyone.

DDJ