Jutta Treviranus
Adaptive Technology Resource Centre (ATRC), University of Toronto

Abstract

Virtual reality gives us the opportunity to create accessible equivalents of previously inaccessible information by translating information from a modality that the user cannot sense into a modality that the user can sense. It also allows us to give new powers to voluntary acts the user can control, so that these acts can stand in for acts the user cannot perform. While the educational and rehabilitation benefits of this are clear, there are also risks associated with creating new realities. We owe it to potential users to carefully craft unified translation conventions so that we do not squander the learning investments that will be made in using these new environments.

Introduction

One of the major benefits of virtual reality for people with disabilities is the opportunity to translate or transform one display or control modality into another. Given this opportunity, we can translate information from a modality that the user cannot sense into a modality that the user can sense, or we can translate voluntary movements or acts the user can control into required control acts that the person is not able to make. While this opportunity promises great benefits, it is also completely uncharted territory, with all of the inherent risks of unexplored domains. To explore this opportunity we must invent new languages and tamper with the laws of physics. In essence we are manipulating minds, albeit for good. In translating modalities we want to ensure that we do not create a whole new Tower of Babel. In transforming control acts we want to ensure that we do not promote non-transferable delusions about the behaviors of space and matter.

The opportunity to translate modalities and control acts has been leveraged in a number of applications of virtual reality. Among these are: translation of inherently visual information into auditory and tactile information for someone who is blind; visual augmentation of reality for people who are visually impaired; translation of auditory information into visual and tactile information for people who are deaf or hard of hearing; and translation of the range, strength and precision of available voluntary control acts to control tools and environments that require greater range, strength or precision.

Translation of Visual Information

An example of the translation of visual information can be found in the project entitled Adding Feeling, Sound and Equal Access to Distance Education (1). The ATRC, with a number of research partners, has explored the use of virtual reality to teach subjects that involve inherently visual and spatial information to learners who are blind. Geography is among these subjects. To communicate the visual-spatial information encompassed in a standard map, a three-dimensional map was rendered in the Virtual Reality Modeling Language (VRML), and the information that is usually conveyed through visual means was selectively translated into the available modalities: speech, sound and haptics. The researchers attempted to classify the available modalities according to the information they were best suited to convey. Attempts were also made to determine whether there were pre-existing translation conventions. The resulting translation matrix assigned spatial overview, object classification (e.g., this is a city, this is a river) and topographical or shape identification (e.g., this is a mountain, this is a valley) to haptics. Speech was used to identify specific objects (e.g., this is Toronto, this is Lake Ontario) and associated information that would usually be conveyed by text. Finally, real-world sounds were used to denote proximity (e.g., railway sounds tell you that you are close to a railway track). While the translation matrix that was applied successfully conveyed the desired information, it became clear that another research group might have reached a completely different set of translation decisions. A learner moving from one system to the next would need to re-learn the modal language.
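
To make the idea of a translation matrix concrete, the sketch below shows one hypothetical way such a category-to-modality mapping could be represented, written here in Python. The category names, the map feature and the rendering strings are illustrative assumptions only; the project itself was built with VRML and haptic hardware and is not reproduced here.

# Illustrative only: a hypothetical category-to-modality translation matrix
# in the spirit of the mapping described above. Not the project's actual code.

from dataclasses import dataclass

# Which modality carries which category of map information.
TRANSLATION_MATRIX = {
    "spatial_overview": "haptics",
    "object_class": "haptics",   # e.g., "this is a city", "this is a river"
    "shape": "haptics",          # e.g., mountain vs. valley topography
    "label": "speech",           # e.g., "this is Toronto"
    "proximity": "sound",        # e.g., railway sounds near a railway track
}


@dataclass
class MapFeature:
    label: str            # name spoken aloud, e.g., "Toronto"
    feature_class: str    # kind of object, rendered haptically
    shape: str            # topography, rendered haptically
    ambient_sound: str    # real-world sound played when the user is nearby


def render(feature: MapFeature, category: str) -> str:
    """Describe how one category of information about a feature is presented,
    according to the translation matrix."""
    modality = TRANSLATION_MATRIX[category]
    if modality == "haptics":
        return f"haptic rendering of {feature.feature_class} ({feature.shape})"
    if modality == "speech":
        return f"speak: {feature.label}"
    return f"play ambient sound: {feature.ambient_sound}"


# Hypothetical feature from the three-dimensional map.
toronto = MapFeature(label="Toronto", feature_class="city",
                     shape="flat urban area", ambient_sound="city traffic")
print(render(toronto, "label"))      # speak: Toronto
print(render(toronto, "proximity"))  # play ambient sound: city traffic

The sketch also makes the concern about conventions visible: a second research group could fill in the same table quite differently, and the learner would face a new modal language.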

Visual Augmentation of Reality

An application of visual augmentation that highlights the need for care in manipulating perception is the EyeTap system developed and personally applied by Steve Mann (2). Prof. Mann has worn, for many years, an augmentative visual display that captures, transforms and re-displays reality in real time. This system is used to enhance visual information for people with vision impairments. Mann reports that he has become so habituated to the display that if he is without the system, or if the system changes, he experiences profound disorientation and signs of autonomic distress. In developing systems that augment reality, we need to ensure that the technology and the augmentation are sufficiently stable and sustainable to deserve the learning and habituation investment made by the user.

Translation of Auditory Information

The convergence of television and networked computers has provided an opportunity to produce much richer captions. With the availability of graphics, animation, text art and video layers as caption alternatives, we can translate the paralinguistic and audio information that Deaf viewers presently miss. The Centre for Learning Technologies has explored the communication of this information through a project entitled Emotive Captioning (3). They have used color, text styling and vibration to successfully convey emotive information. In the process they have grappled with the same conundrum faced by other developers performing modal translation: how do you create a new language when there are no conventions, or only very inconsistent ones?
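
As a concrete illustration of this design space, the sketch below shows one hypothetical way emotive tags might be mapped to caption presentation (color, weight and a vibration cue). The specific emotions, colors and timings are assumptions made for the example; they are not the conventions adopted by the Emotive Captioning project.

# Illustrative only: a hypothetical emotion-to-presentation table for captions.
# The mappings below are assumptions, not the project's published conventions.

from dataclasses import dataclass


@dataclass
class CaptionStyle:
    color: str          # text color used to signal the emotion
    weight: str         # "normal" or "bold"
    vibration_ms: int   # duration of an accompanying vibration cue, 0 for none


EMOTIVE_STYLES = {
    "anger":   CaptionStyle(color="red",    weight="bold",   vibration_ms=300),
    "fear":    CaptionStyle(color="purple", weight="bold",   vibration_ms=150),
    "joy":     CaptionStyle(color="yellow", weight="normal", vibration_ms=0),
    "neutral": CaptionStyle(color="white",  weight="normal", vibration_ms=0),
}


def style_caption(text: str, emotion: str) -> dict:
    """Attach presentation hints to one caption line based on its emotive tag.
    Unknown emotions fall back to the neutral style."""
    style = EMOTIVE_STYLES.get(emotion, EMOTIVE_STYLES["neutral"])
    return {
        "text": text,
        "color": style.color,
        "weight": style.weight,
        "vibration_ms": style.vibration_ms,
    }


print(style_caption("I told you never to come back here!", "anger"))

The point of the sketch is the one the paragraph makes: without shared conventions, each system's table would differ, and viewers would have to relearn the mapping every time they changed systems.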

Translation of Control Acts

The potential of virtual reality to teach spatial sense, cause and effect, and mobility-related skills to children who are not independently mobile, or who have very little motor control, has long been recognized by researchers (4, 5). Using virtual reality we can give a child who cannot grasp, or cannot walk, equivalents to the formative experiences of navigating a playground, building with blocks, or playing ball. By doing this we hope to develop the important spatial skills that independently mobile children gain from these tasks. Because virtual reality is not confined by the laws of physics, we have great flexibility in the experiences we can present to children. Precisely because there are no constraints, we have to proceed with great caution. What are we teaching? Where is the interface between reality and our benevolent translation? Are the systems and the standards in place to allow transfer of what is learned to other tasks and other systems? If I can now hit a baseball using my nose, will the same controlling act have the same effect in the next system I try to control?
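
To ground the idea of translating a control act, here is a minimal sketch of how an available movement might be amplified and smoothed into the act a virtual task requires. The gain, the smoothing scheme and the head-tilt-to-bat-swing example are illustrative assumptions, not a method described in the cited research.

# Illustrative only: remapping an available voluntary movement onto a control
# act a virtual task requires, with amplification and smoothing. The specific
# mapping and numbers are assumptions for the sake of the example.

from dataclasses import dataclass


@dataclass
class ControlMapping:
    source_act: str    # what the child can do, e.g., a small head tilt
    target_act: str    # what the virtual task needs, e.g., a full bat swing
    gain: float        # amplifies limited range or strength
    smoothing: float   # 0..1; higher values damp imprecise, jittery input


def translate(mapping: ControlMapping, source_value: float,
              previous_output: float = 0.0) -> float:
    """Translate one sample of the source movement into the target control
    value: amplify its range, then low-pass filter it for stability."""
    amplified = source_value * mapping.gain
    return (mapping.smoothing * previous_output
            + (1.0 - mapping.smoothing) * amplified)


# Hypothetical example: a 5-degree head tilt drives a 90-degree bat swing.
swing = ControlMapping(source_act="head tilt (degrees)",
                       target_act="bat swing (degrees)",
                       gain=18.0, smoothing=0.6)
print(translate(swing, source_value=5.0))  # 36.0 on the first sample

The transfer question raised above applies directly to such a mapping: if the next system assigns a different gain, or a different source act, the skill the child has practised may not carry over.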

Conclusion

What we are venturing into is potentially more monumental than the creation of Braille or sign language. Whereas Braille 'simply' created an equivalent tactile code for an existing graphic and auditory code, and sign language, over its long evolutionary history, developed a gestural, semantic alternative to speech, we are proposing to create equivalents to the infinitely rich and highly nuanced vocabularies that are our visual, auditory and physical realities. To do justice to the learning investment of our users we must establish unified translation conventions, and we must do this carefully. For, after all, we are tampering with reality and ultimately with the minds of the people who will learn our new languages.

References

  1. Treviranus, J. (1999). Adding Feeling, Sound and Equal Access to Distance Education. 14th Annual Meeting of the CSUN Center for Disabilities, Los Angeles, CA.
  2. Mann, S. (1998). Wearable computing as means for personal empowerment. Keynote address, The First International Conference on Wearable Computing (ICWC-98), May 12-13, Fairfax, VA. http://wearcam.org/icwc/index.html
  3. Fels, D.I., & Degan, S.S. (2001). Expressing non-speech information and emotion to the deaf and hard of hearing. CD-ROM Proceedings, IEEE Systems, Man and Cybernetics, Tucson, AZ.
  4. Treviranus, J. (1994). Virtual Reality Technology and People with Disabilities. Presence: Teleoperators and Virtual Environments, 3(3), 201-208.
  5. McComas, J., Pivik, J.R., & Laflamme, M. (1998). Children's Transfer of Spatial Learning from Virtual Reality to Real Environments. CyberPsychology & Behavior, 1(2), 121-128.