VR Education and Rehabilitation

Dean P. Inman, Ken Loge, and John Leavens

Introduction


VR technology was originally developed by the military in the 1960s to help train pilots. This was useful because in VR, dangerous situations could be simulated without risking personal injury or loss of expensive equipment. Understandably, the technology was very expensive even by current standards. The military continues to use this technology for a wide range of training activities, including teaching personnel to operate tanks and submarines in combat situations. More recently, the entertainment industry has begun to embrace VR as a means to provide users with exciting, fully interactive 3D game scenarios. This development is made possible by the availability of powerful new technology that becomes more affordable every day.

Our interest in VR technology is not driven by its availability or affordability, though of course we could not have embarked upon our recent work in the absence of these adventitious conditions. Our work stems from the needs of disabled children to acquire skills to function independently in the world. Our initial attempt to use this technology focused on teaching children how to operate motorized wheelchairs. More recently, our interests have expanded to include using VR technology in science education.

In 1982 we began a program to teach children with cerebral palsy how to operate motorized wheelchairs. This was, in effect, a driver's education program intended to teach children how to operate a type of vehicle as safely and as independently as possible. We used actual wheelchairs and, during instruction, we worked with the kids in relevant real-world situations: bathrooms, living rooms, and inclined sidewalks. Training focused on teaching skills such as going straight, turning right and left, maneuvering backward, and stopping before hitting obstacles. Every effort was made to teach skills in context to minimize the possibility that skills would not transfer from the training environment to the real world.

The work was difficult and frequently unsuccessful. Beyond the usual sensory-motor impairments generally associated with cerebral palsy, we seemed to be facing other limiting factors that diminished or even precluded progress. For example, a lack of motivation was frequently a problem. Granted, it is not always easy or fun to complete a driver's education program, but we would have expected the children to work hard to succeed because of the importance of the goal: we were trying to provide them with independent mobility, an essential ingredient of everyday life. Unfortunately, as we discovered, this goal was not always shared by the children. In short, we seemed to be facing the well-known phenomenon of learned helplessness, which manifested itself in many ways. It was like leading the proverbial horse to water: we could not make the children drink.

We began trying to develop a computer-based training program for an Apple IIe and eventually for a Macintosh SE, hoping this would help resolve the motivational problem. Work was progressing slowly until we learned that a low-end VR platform manufactured by Sense8 was available that ran on Intel 486/66MHz-based machines. The idea of immersing children in VR to teach them motorized wheelchair operation was intriguing. In July of 1993 we were awarded a grant from the U.S. Department of Education to investigate the feasibility of using VR to teach this population of children how to operate motorized wheelchairs.


VR Training Platform

To begin a session, a wheelchair-bound child is placed on a roller platform that permits the back wheels of the wheelchair to rotate at different speeds. When the child moves the joystick forward, the back wheels rotate at a speed proportional to the degree of joystick movement. The computer monitors the pulse-rate output from two optical encoders, essentially cylindrical bar-code readers that index the speed and direction of rotation of the rollers supporting the wheelchair's right and left rear wheels. When the two signals are equivalent, the chair moves in a straight line. When the right-hand wheel turns faster than the left-hand wheel, the chair turns to the left. The rate at which the wheelchair makes a turn is proportional to the pulse-rate differential between the two encoders: the greater the differential, the faster the turn.
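The mapping from encoder output to motion in the virtual world follows standard differential-drive kinematics. As a rough illustration only (this is not the original system code, and all names and constants are hypothetical), the conversion from pulse rates to forward speed and turn rate might look like this in C++:

    // Sketch: converting the two roller-encoder pulse rates into the linear
    // and angular velocity of the virtual wheelchair (hypothetical names).
    struct ChairVelocity {
        double forward;   // meters per second in the chair's facing direction
        double turnRate;  // radians per second; positive turns the chair left
    };

    // pulsesPerMeter: encoder pulses emitted per meter of roller travel.
    // wheelBase:      distance between the two rear wheels, in meters.
    ChairVelocity velocityFromEncoders(double leftPulsesPerSec,
                                       double rightPulsesPerSec,
                                       double pulsesPerMeter,
                                       double wheelBase)
    {
        // Convert pulse rates to wheel surface speeds.
        double vLeft  = leftPulsesPerSec  / pulsesPerMeter;
        double vRight = rightPulsesPerSec / pulsesPerMeter;

        ChairVelocity v;
        // Equal wheel speeds produce straight-line travel at their average speed.
        v.forward = 0.5 * (vLeft + vRight);
        // A faster right wheel swings the chair to the left; the turn rate is
        // proportional to the speed differential, as described in the text.
        v.turnRate = (vRight - vLeft) / wheelBase;
        return v;
    }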

Figure 1. The first and least complicated of our three virtual worlds is an infinite plane (screen shot from world one).


This particular version of the encoder system (we went through two) has the advantage of permitting the system to work with any wheelchair, including those specially fitted for particular users, as well as non-powered or manual wheelchairs. We simply place the wheelchair on the rollers and the students begin. One disadvantage of this system is that it does not permit realistic simulations of collisions. Specifically, in the current version of our system, when the computer detects a collision between the virtual wheelchair and a virtual object, an intense crash sound is produced and perceived forward motion of the chair ceases. However, the wheelchair's motor continues to operate, and hence the wheels rotate, until the user moves the joystick back to a neutral position. This is the weakest aspect of our attempted simulation.
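In software, this collision behavior amounts to freezing the viewpoint on impact and holding it until the joystick returns to neutral. The fragment below is a simplified sketch of that logic, not the actual system code; the function names, the neutral threshold, and the sound call are assumptions.

    #include <cstdio>

    // Stub standing in for the actual audio call (hypothetical).
    static void playSound(const char* sample) { std::printf("play %s\n", sample); }

    struct SimState {
        bool frozen = false;   // true after a virtual collision
    };

    void updateMotion(SimState& s, bool collisionDetected,
                      double joystickMagnitude,   // 0.0 = joystick at neutral
                      double& perceivedSpeed)     // speed applied to the viewpoint
    {
        if (collisionDetected && !s.frozen) {
            playSound("crash");   // intense crash sound on impact
            s.frozen = true;
        }
        if (s.frozen) {
            perceivedSpeed = 0.0;             // perceived forward motion ceases,
                                              // though the real wheels keep spinning
            if (joystickMagnitude < 0.05)     // joystick back to neutral (assumed threshold)
                s.frozen = false;             // motion may resume
        }
    }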

In an earlier version we were able to create a more realistic crash by interrupting the voltage coming out of the joystick control box to inform the computer what the wheelchair was supposed to be doing in the virtual world. The advantage of that particular solution was that it allowed us to create a more realistic crash by engaging an electronic-braking system, which was already available in the wheelchair's computer control system, at the moment of impact. This caused the wheels to stop rotating and caused the user to experience a slight jolt. When combined with the crash sound, the collision was fairly realistic. The disadvantage of the system was that (because of the electronics) we had to use our wheelchair, or one made by the same manufacturer, which made it impossible for some children and very difficult for others to participate in the study. We concluded the encoder-based solution was much better because it allowed more children to participate.

When we began development, we were warned by John Trimble and Chuck Blanchard, who had done some initial work in this area, that we would experience problems with inertia in our simulation program, which was indeed the case. More specifically, when the virtual wheelchair was started from a complete stop, the user experienced a leap in which the wheelchair went from full stop to full speed ahead almost instantaneously, which is not what a user experiences in a real wheelchair. Similarly, if, while operating, the user suddenly released the joystick, causing it to snap back to its neutral position, the user would experience a sudden stop, which again was not realistic.

We learned we could solve the problems of inertia by filling the rollers on the training platform with ballast, and by devising a simplified algorithm to simulate the friction properties of wheels on an actual surface. The former solution resulted in each roller weighing approximately 25 pounds, making them start and stop more slowly. The friction software algorithm was tuned to attenuate changes in the speed differential between each wheel at higher speeds to make it easier to travel straight ahead, and to augment the wheel differential at lower rotational speeds to improve control while turning. The combined result was a much more realistic physical model of what a user experiences when operating a motorized wheelchair in the real world.
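The article does not spell out the algorithm, but a minimal sketch of this kind of speed-dependent differential tuning, with constants chosen purely for illustration, might look like the following.

    #include <algorithm>
    #include <cmath>

    // Returns an adjusted wheel-speed differential.  At high average speed the
    // differential is attenuated (small steering errors no longer pull the chair
    // off a straight line); at low speed it is amplified (turning feels responsive).
    double tuneDifferential(double vLeft, double vRight, double maxSpeed)
    {
        double avg  = 0.5 * (vLeft + vRight);
        double diff = vRight - vLeft;

        // 0 at standstill, 1 at full speed.
        double speedFraction = std::min(std::abs(avg) / maxSpeed, 1.0);

        // Gain falls from about 1.5 at rest to about 0.5 at full speed (assumed curve).
        double gain = 1.5 - speedFraction;
        return diff * gain;
    }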


Figure 2. The second virtual world features a finite grassy area with objects and obstacles scattered about (the image shows an animated statue from world two).


The computer monitors the user's head position via a head tracker (manufactured by Logitech) mounted on top of the HMD. Changes in head position are determined by timing signals sent from a transmitter unit mounted on the ceiling to three receivers in the unit mounted on top of the HMD; the rate and degree of angular rotation of the head in VR are calculated from the latencies measured at the three receivers. Earphones provide 3D sound using a Beachtron sound card, permitting four dynamic sound sources to be presented to the user in three dimensions.
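The geometry behind this kind of tracking is trilateration: each latency, multiplied by the signal's speed, gives a distance, and three distances to points at known relative positions pin down the source's location. The sketch below only illustrates that calculation under an assumed receiver layout; it is not the Logitech tracker's own computation.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Receivers assumed at (0,0,0), (d,0,0), and (i,j,0) in the tracker's frame.
    // r1..r3 are the distances (signal speed times latency) from the ceiling
    // transmitter to each receiver.
    Vec3 trilaterate(double r1, double r2, double r3,
                     double d, double i, double j)
    {
        Vec3 p;
        p.x = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d);
        p.y = (r1 * r1 - r3 * r3 + i * i + j * j) / (2.0 * j) - (i / j) * p.x;
        double zSq = r1 * r1 - p.x * p.x - p.y * p.y;
        p.z = (zSq > 0.0) ? std::sqrt(zSq) : 0.0;   // take the root on the ceiling side
        return p;
    }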

We used an HMD manufactured by Virtual Research (VR4), which provides a stereoscopic 3D 380 x 560 pixel image to each eye through two 1" x 1" color liquid crystal display monitors. Two graphics accelerators (Fireboards, manufactured by SPEA Corporation), one for each eye, were used to render the images. In our first and simplest virtual world, we found we could achieve a frame rate of approximately 25 frames per second (fps) to each eye. Considering that a VHS videotape player runs at about 30 fps, our system was able to approximate the movie-like smoothness of video. In the most complex virtual world, consisting of 750 polygons, we were able to achieve a maximum frame rate of 10-12 fps. Thus, when the pictures became more complex (realistic) in terms of the number of polygons used to create the objects in the virtual world, the performance dropped and the picture lost fidelity. We struggled with the tradeoff between performance and artistic reality. The graphics, especially in the third and most complex world, are therefore blocky, and the movement of the images is fairly smooth but not fluid; this was the price of optimizing performance. Also, shading and light sources were not available features in the early version of the World Tool Kit, which makes the worlds look cartoonish.

Virtual Training Scenarios

We created three training scenarios (worlds), which were graduated in terms of difficulty and complexity. The first world consists of a black-and-white tile floor that runs to infinity in all directions. The user can maneuver in any direction at speeds of up to approximately 50 mph. There are no obstacles, no errors can be made, and the user is completely safe (see Figure 1). However, even in this simple world children can learn important concepts such as go, no go, keep going, and stop. It is important to note these are often foreign concepts to a child who has never been able to crawl.

In the second world there is a finite field or surface available for exploring. The surface of the world is approximately 1,500 ft. x 1,500 ft. to real scale. If users travel too far, they can fly off the edge of the world. The VE consists of a grassy area with objects or obstacles scattered around (see Figure 2). The obstacles are far apart but close enough to see as small dots on the horizon. When the user gets close enough to an object, it begins to make a sound. Some objects also begin to move. The idea was to inspire curiosity and wonder in the children and to encourage the development of a visual memory of the virtual space, so they could return to their favorite places whenever they visited this virtual world. We also tried to create a few places the children would learn to avoid, places where they might get stuck or lose control, since this is a normal part of the development of mobility skills. With this in mind, we created a patch of virtual mud and some virtual ice. Both places were very popular with the kids.
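Waking an object when the wheelchair approaches reduces to a distance test each frame. The following sketch shows the idea; the structures, trigger radii, and the commented-out sound and animation calls are our own illustrative assumptions rather than the system's actual code.

    #include <cmath>
    #include <vector>

    struct Vec2 { double x, z; };   // position on the grassy plane

    struct WorldObject {
        Vec2   position;
        double triggerRadius;   // distance at which the object "wakes up"
        bool   animated;        // some objects also begin to move
        bool   active = false;
    };

    double distance(const Vec2& a, const Vec2& b)
    {
        return std::hypot(a.x - b.x, a.z - b.z);
    }

    void updateObjects(std::vector<WorldObject>& objects, const Vec2& chairPos)
    {
        for (WorldObject& obj : objects) {
            bool isNear = distance(chairPos, obj.position) < obj.triggerRadius;
            if (isNear && !obj.active) {
                obj.active = true;
                // startSound(obj); if (obj.animated) startAnimation(obj);  // hypothetical calls
            } else if (!isNear && obj.active) {
                obj.active = false;
                // stopSound(obj); stopAnimation(obj);                      // hypothetical calls
            }
        }
    }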

Figure 3 a/b. Our third virtual world is the most complex, and allows children to practice crossing a busy street in a wheelchair. Part a shows a child using the world; part b depicts the world from the user's perspective.

The third world was modeled after a street crossing in Eugene, Oregon. The crosswalk crosses a fairly busy street in the middle of the block, not at an intersection. It is operated by a standard hand-operated button mounted on the streetlight pole on one side of the crosswalk. Figure 3a shows a child using the virtual world; Figure 3b is a depiction of the world from the wheelchair operator's viewpoint. The world contains two virtual cars that travel up and down the street, producing realistic engine noises as they drive by. They also stop at the virtual crosswalk when the walk sign signals the user it is safe to cross. Children can be hit by vehicles if they enter the crosswalk and begin crossing the street at the wrong time. However, we made every effort to make sure getting struck by vehicles was not fun. If a user is virtually run over, screen motion freezes, an unpleasant thunderous crash is heard, and the user is automatically repositioned at the starting point on the virtual sidewalk.
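The penalty for crossing at the wrong time can be expressed as a simple check run each frame. The sketch below approximates the behavior described above and is not the world's actual code; the structures and the commented-out sound call are hypothetical.

    struct Vec3 { double x, y, z; };

    struct CrossingState {
        bool walkSignOn;      // true when the walk sign says it is safe to cross
        Vec3 startPosition;   // starting point on the virtual sidewalk
    };

    void checkStreetCollision(const CrossingState& world,
                              bool chairInCrosswalk,
                              bool carAtCrosswalk,
                              Vec3& chairPosition,
                              bool& motionFrozen)
    {
        // Entering the street against the signal and meeting a car counts as
        // being run over.
        if (chairInCrosswalk && carAtCrosswalk && !world.walkSignOn) {
            motionFrozen  = true;                 // screen motion freezes
            // playSound("thunderous_crash");     // hypothetical audio call
            chairPosition = world.startPosition;  // repositioned at the start
        }
    }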

Findings

We found that in children who completed the study, driving skills increased as a function of time spent training in VR. Specifically, the ability to turn right, turn left, stop before hitting a wall, and travel down a sidewalk improved. We measured these skills using procedures standardized across children, after every two hours of VR training. In general, we also found children tended to spend most of their time driving in the first and second world in the early stages of training. As training progressed and skills were acquired, children tended to work less in the first world and more in the second and third worlds. It is also of interest that many of the children chose not to use the HMD, instead preferring to look at a large monitor placed in front of them while operating the wheelchair. This minimized the extent to which some children were immersed in VR, but did not seem to diminish their interest in using a wheelchair in VR.

Virtual Science Education

While working to develop the mobility training application, we began to consider a problem that orthopedically impaired high school students face when taking required science courses such as biology, chemistry, and physics. Many of these students are of normal or above-normal intelligence but lack sufficient motor control of the upper extremities to participate fully in science experiments and activities, which limits their potential to learn important scientific concepts. In VR, we can arrange it so that simple motor skills enable the student to participate fully in the scientific tasks that non-disabled students engage in: observation, measurement, and scientific experimentation.

Figure 4. The Virtual Science Research Pod.



In virtual science education, tools can be made available to measure time, size, and speed, for example, and students can manipulate variables during observed events to discover, analyze, and ultimately understand cause-and-effect relationships among the variables under study. As a first step we have created, again with the help of the U.S. Department of Education, a Virtual Science Research Pod (VSRP) (see Figure 4) that can be operated with a standard joystick assembly. The pod is a scientific research tool students use to learn about science and the scientific method.

The VSRP is a transparent dodecahedron containing a wheelchair with the wheels removed; it is operated using the same motor skills learned in the previous VR applications. It has a simulated onboard computer that can be used to study or investigate scientific phenomena. The pod can be flown like a helicopter to inspect items or events of interest. It has the unique ability to shrink, which permits students to enter smaller substructures for more detailed study. It can also expand so as to permit study from a broader perspective. When an item or event is selected for study, the pod is capable of scanning it, which stores information about the scanned item in the onboard computer. Students can then access the material for study. Other features to be built into this system include the capacity to compress or expand time, as well as to move backward or forward in time. This will permit systematic study of geology and astrophysics, for example. It would also permit a student to study an entire plant growth cycle from seedling to compost.
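The shrink-and-expand ability is, at bottom, a scale factor applied to the pod and the user's viewpoint relative to the world. The sketch below shows one plausible way to drive such a factor from a control input; the rate, limits, and structure names are illustrative assumptions, not the VSRP's actual implementation.

    #include <algorithm>

    struct Pod {
        double scale = 1.0;   // 1.0 = normal size
    };

    // Called each frame while the shrink or expand control is held.
    // direction: -1 to shrink, +1 to expand; dt: frame time in seconds.
    void adjustPodScale(Pod& pod, int direction, double dt)
    {
        const double ratePerSecond = 0.5;               // 50% size change per second (assumed)
        const double minScale = 0.001, maxScale = 100.0;

        pod.scale *= 1.0 + direction * ratePerSecond * dt;
        pod.scale  = std::clamp(pod.scale, minScale, maxScale);

        // The rendering pass would apply pod.scale to the pod geometry and to the
        // viewpoint's height and movement speed, so the world appears to grow as
        // the pod shrinks and students can enter smaller substructures.
    }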


Virtual Science Platform

Our current system runs on an Intel 200MHz Pentium motherboard with no graphics accelerator. The Sense8 graphics engine was replaced with an in-house VR environment developed around the BRender 3D graphics engine. Again, the VR4 flight helmet manufactured by Virtual Research is being used as our HMD. Our improved system is able to produce 20 frames per second, even in our most complex scenes, which include as many as 3,000 polygons. These virtual objects are textured, Gouraud-shaded, and can include texture-mapped artwork that is attractive and compelling. Our unique application requires us to split the signal to each of the two LCD screens in the HMD, a feature the graphics engine was not designed to provide. With the help and cooperation of engineers at BRender Corporation and Virtual Research, we have solved this problem and are now able to display 3D images in an immersive VR. We do not yet have stereoscopic capability, but are continuing to work on this problem. Head position is tracked using a Polhemus sensor mounted on top of the HMD.

Why Use VR?

VR technology has the potential to permit all science students, including those with orthopedic impairments, to observe events and phenomena impossible for them to experience any other way. For example, physics students are asked to imagine how very small particles react when combined or when bombarded with high-energy electrons. They are also asked to imagine the trajectory of a ball dropped from a moving train, as viewed either from the train itself or from the train station as the train passes by. Students are also asked to imagine how phagocyte cells and cilia function to clean the lungs of impurities. These and other important scientific phenomena can be observed and studied directly in VR. Also, while viewing these and other events, students have the ability to take notes using voice-recognition technology. This will allow them to record information about the topic of study in their own words, which can subsequently be printed and used to prepare for tests, create scientific reports, and complete relevant worksheets made available during regular science education classes.

Conclusion

During mobility training we were surprised at some of the children who became attentive and worked hard while in VR. Some of these children were nonverbal, had questionable functional vision, and appeared to have significant learning disabilities. When we saw such children concentrating on the task, following instructions, and demonstrating gains in operating skills, we were delighted. There is definitely a place for this application in the education and training of children with severe orthopedic impairments; motivation is the key factor.

Frankly, it is difficult to think of an application that cannot be created, in some form or another, using today's VR technology. Creating a new application is largely a matter of time and money. The more difficult questions are these: What should we do with the technology? What applications make the most sense? What applications will have the most impact, be the most cost-effective, and affect the largest number of people? These are difficult and important questions to ask.

The engine driving the evolution of this field should not be the technology itself, which is becoming available so quickly that few of us are able to keep up. What should be inspiring the field's evolution are the needs of important segments of the population; allowing technological innovation to define the applications is putting the cart before the horse. We have been fortunate enough to define a few important problems and to find corporate leaders willing to help us solve application problems by reconfiguring their technology. This trend bodes well for the future of VR in education and rehabilitation and should be encouraged.

Authors

Dean P. Inman is a research scientist at the Oregon Research Institute.

Ken Loge is an implementation coordinator/designer at the Oregon Research Institute.

John Leavens is a systems analyst at the Oregon Research Institute.

Acknowledgments

We gratefully acknowledge the support of the U.S. Department of Education, which has funded the research reported in this article (grants number H18OE3OOO1 and HI80T40100). We would also like to acknowledge the NEC Foundation of America for its support of the science project.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

Communications of the ACM, August 1997 / Vol. 40, No. 8.