Kurt Schwehr
						schwehr _at_ cs stanford edu
						8/93
						Revised: Aug. 17, 1993
						HTML Updated: Feb. 1995

A.1-Rover Technology

Contents


A.1.0		Autonomous Rovers
A.1.1		Introduction
A.1.2		Mission Ready Rovers
A.1.2.1		  Flown Rovers
A.1.2.1.1	    Lunokhod 1 & 2
A.1.2.1.2	    Mars 2 & 3
A.1.2.2		  Currently Flyable Rovers
A.1.2.2.1	    Marsokhod
A.1.2.2.2	    Rocky IV
A.1.3		Development Platforms
A.1.3.1		  Ambler
A.1.3.1.1	    SITE-Recon Mission
A.1.3.2		  Attila
A.1.3.3		  Dante-Virgil
A.1.3.4		  Robby
A.1.3.5		  Others
A.1.4		Virtual Reality
A.1.4.1		  Introduction
A.1.4.2		  Telepresence
A.1.4.3		  Virtual Environments and Geographic Information Systems
A.1.4.4		  Entertainment
A.1.4.5		  Conclusion


A.1.0 Autonomous Rovers

Robotics Internet Resources Page

A.1.1 Introduction

It is important to take an overview of the rover technology currently available. There are a variety of systems that are nearly ready to fly, and many other systems that are prototype implementations of concepts that could greatly improve our ability to explore the surfaces of other planets. The cost savings of using preexisting technologies and systems can be tremendous. As John Garvey of McDonnell Douglas said, "Our Russian colleagues have developed true world-class capabilities in planetary exploration. Working with them, we can do better work than we could alone." (Ref. Wildermuth, 1993)

There are currently two camps of robotics that argue back and forth along the major division in this field: simple, small "micro-rovers" versus larger, more complicated rovers. There are definite trade-offs between the two design approaches. I will cover both types of rovers and will present a concept for a combined mission that uses different scales of rovers to make a more robust system. A micro-rover mission could send a large number of small rovers to Mars in the space and weight required for one or two regular rovers. There would be less concern over losing a percentage of the micro-rovers than over losing one of only one or two larger rovers. On the other hand, a single larger rover can carry more onboard computational power and larger scientific instruments.

There is also the split between legged vehicles, wheeled vehicles, and vehicles that integrate the two paradigms. Legs provide a more stable platform for instruments, are more energy efficient for motion, and cause less environmental damage to the area explored, but they require higher computational loads in order to figure out where to place each foot. Wheeled vehicles are mechanically simpler, computationally less expensive, and arguably more reliable, but they require more power to move the same distance and leave behind wheel tracks.

A.1.2 Mission Ready Rovers

A.1.2.1 Flown Rovers

So far, only the Soviet Union has attempted to land robotic rovers on other planets or moons. It sent a total of five rovers, of which two were successfully operated. The fifth, which is not described here, is a hopping rover (PROP-F) sent on the ill-fated Phobos 2 mission. In addition to the work on rovers presented here, any serious design work on planetary exploration vehicles should look at the Apollo Lunar Roving Vehicle (LRV). The LRV is not covered here because it had to be driven by an astronaut; the remote piloting system was scrapped in the concept phase due to weight constraints.

A.1.2.1.1 Lunokhod 1 & 2

The first planetary rover, Lunokhod 1 (Moon Rover 1), landed on the Moon on November 17, 1970 and explored the lunar surface for 11 months. The Lunokhod's total mass was 750 kilograms, with the 105 kg undercarriage moving the rover along at 0.8 to 2 kph (Ref. Kermurjian, 1990). Lunokhod 2 arrived on the Moon on January 16, 1973. The two Lunokhods operated for a total of 414 days and traveled 7 and 37 kilometers, respectively. The Lunokhod design has eight wheels and is powered by solar panels with a storage battery.

A.1.2.1.2 Mars 2 & 3

The first, and so far only, robotic rovers to reach the surface of Mars arrived in 1971 on the Soviet Mars 2 and 3 missions. The Mars 2 lander reached the surface on November 27, followed by Mars 3 on December 2. The two landers arrived during the worst martian dust storm on record and both failed: Mars 2 crashed into the surface, and Mars 3 lasted a brief 20 seconds after landing. As a result, the two rovers never got a chance to operate. These two small rovers were designed to walk 15 meters on skis while carrying two instruments each: a dynamic penetrator and a densitometer (Ref. Kermurjian, 1990).

A.1.2.2 Currently Flyable Rovers

These rovers are very close to being ready for flight and are slated for missions in the near future.

A.1.2.2.1 Marsokhod

Figure A.1.2.2.1-1

The Marsokhod (Russian for "Mars Rover") was developed through several iterations at VNII Transmash (Ref. Kermurjian et al., 1992) and is pictured in Figure A.1.2.2.1-1. The result of this research is the very capable Small Marsokhod (Table A.1.2.2.1-1) that is scheduled to fly on the Russian Mars 96 mission (See Section 2.2.2.3.2). In addition, while the Russian design team was working in California they quoted a price for the Marsokhod: $60M, delivered to the surface of Mars with an instrument package provided by the purchaser. The rover has been successfully tested in the California deserts, at Dumont Dunes north of Baker and Mars Hill in Death Valley (Ref. Anderson, 1992), as well as in several laboratory sand boxes and in the volcanic terrain of Kamchatka.

Price per Unit:			$60M (delivered to martian surface)
Total Mass:			100 kg
Life-Time:			1 Terrestrial Year
Covered Distance Goals:		100 km/Year, 50 to 100 m/Day
Velocity:			<180 m/h
Electrical Power:		10 to 20 W (RTG)
Chassis:			31 kg Articulated Frame with 6 cone wheels, by VNII Transmash
Payload:			14.5 kg (8 kg science + drilling system)
Width:				60 cm
Length:				90 cm
(Ref. Runavot, 1993; JPL Universe, 1993)
The Marsokhod 96
Table A.1.2.2.1-1

A.1.2.2.2 Rocky IV

Figure A.1.2.2.2-1

JPL's Rocky IV micro-rover (Figure A.1.2.2.2-1) is an excellent expression of the drive to develop cheaper methods for exploring planetary surfaces. It fits well into NASA's Discovery Class Missions discussed previously in that it focuses on simplifying difficult engineering tasks: "...[M]any complex tasks may [be] achieved by programming a robot with a set of behaviors and activating or deactivating a subset of those behaviors as required by the specific situation in which the robot finds itself... Behavior control requires much less computation than is required by traditional AI planning techniques. The reduced computational requirements allow the entire rover to be scaled down to the micro-rover (1-5 kg) level." Rocky is meant to operate autonomously on small tasks with little communication between Earth and the rover. The behavior system also has the advantage of working well in a team of micro-rovers. The benefits of this system should be "savings in fabrication costs, launch mass, landing mass, and increased mission reliability." (Ref. Miller, 1990)

The two main problems with a vehicle this small are that 1) the communication systems and many of the instruments do not scale down as well as the rest of the rover, and 2) with this design philosophy, the vehicle is not usable for telepresence operations. The scaling problems can be solved by sending multiple rovers, each with a different set of science instruments, and by relaying the rovers' data through a single uplink station on the surface of Mars.

For a recent press demonstration, the JPL prototype weighed in at 7 kg and measured 61 cm long by 38.5 cm wide by 36 cm high. It has six 13-centimeter (5-inch) diameter wheels made of strips of stainless steel foil with cleats to provide traction. Rocky runs on 5 watts of solar energy, used during the day; at night, the electronics are turned off and keep-alive batteries run the unit. Rocky was controlled by a Macintosh PowerBook for the demo. Rocky is able to ascend slopes of 26°. It is equipped with a visible infrared spectrometer, a color camera, a rock chipper, and a soft-sand scoop for taking soil samples. Additionally, the rover can place a seismometer on the surface (Ref. JPL Universe, 1993). The control system is based on the subsumption architecture of Brooks, known as behavior-based control. The system currently runs on a tiny 1-MIP Motorola 6811 microcontroller with 40k of memory (Ref. Gat et al., 1993). A space-qualifiable version of Rocky IV, scaled down to approximately 4 kg, is currently being designed for a planned 1996 MESUR Pathfinder launch (See Section 2.2.2.3.4).

Price per unit:			$2.5M (includes landing system)
Total Mass:			7.5 kg
Life-Time:			7 Days (minimum)
Coverage Distance Goals:	100 m
Velocity:			~1 m/min.
Electrical Power:		100W Hr/Day (Solar) + 150 W Hr non-rechargeable battery
Chassis:			Rocker/Bogie 6 wheel system
Power Consumption:		14.7 W Hr/Day; 8.0 W Hr/Night 
(Ref. Reynolds, 1993)
1996 JPL Rocky IV
Table A.1.2.2.2-1
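
To make the behavior-control approach described above concrete, here is a minimal sketch in Python. It is purely illustrative - it is not the Rocky IV flight code, and the behaviors and sensor names are invented for this example. A small set of simple behaviors each examine the sensor state and either propose an action or abstain; a fixed priority ordering arbitrates, and subsets of behaviors can be activated or deactivated for a given task:

# Illustrative sketch only -- not the actual Rocky IV flight software.
# Each behavior looks at the sensor state and either proposes an action
# or abstains (returns None). A fixed priority ordering arbitrates.

def avoid_obstacle(sensors):
    if sensors["bumper_hit"]:
        return "back_up_and_turn"
    return None

def limit_tilt(sensors):
    if sensors["tilt_deg"] > 20:
        return "slow_down"
    return None

def seek_goal(sensors):
    if abs(sensors["goal_bearing_deg"]) > 5:
        return "turn_toward_goal"
    return "drive_forward"

BEHAVIORS = [avoid_obstacle, limit_tilt, seek_goal]   # highest priority first

def control_step(sensors, active=BEHAVIORS):
    for behavior in active:
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"

print(control_step({"bumper_hit": False, "tilt_deg": 25, "goal_bearing_deg": 2}))
# prints "slow_down": the tilt behavior overrides goal seeking

The point is that each behavior is trivial by itself and the arbitration loop is tiny, which is why this style of control can fit on a processor as small as the 6811 without a world model or planner.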

A.1.3 Development Platforms

A.1.3.1 Ambler

Ambler is a large prototype rover developed at Carnegie Mellon University. It is supported by six legs and weighs 2500 kg. Ambler is equipped with large on-board computational facilities and has shown an ability to traverse a wide variety of terrains, including slopes up to 30°, while using a laser scanner to image the landscape (Ref. Krotkov and Simmons, 1992).

Mass:			2500 kg
Height:			4.1 to 6.0 m
Width:			4.5 to 7.1 m
Locomotion:		6 legs - circulating gait
Speed:			35 cm/min.
Power Consumption:	
Steady-State:		1400 W
Laser Scanner:		+ 210 W
Leg Motion:		+ 150 W
Horizontal Body Motion:	+ 600 W
Vertical Body Motion:	+ 1800 W
Ambler
Table A.1.3.1-1

A.1.3.1.1 SITE-Recon Mission

Why An Ambler Mk 2 on the SITE-Recon Mission?

Code named: "Bulwinkle"

The SITE-Recon mission proposed in Section 2.2.2.5 is based on a modified cargo vehicle. The mission as a whole is meant to demonstrate working versions of a number of untested systems by sending them on a useful Mars precursor mission. In particular, the mission contains an unusual array of robotic rovers. The mission proposes two groups of rovers: 1) ten Marsokhod rovers and 2) one second-generation Ambler rover (named "Bulwinkle") that is parent to ten Rocky micro-rovers. The driving force behind this assortment of rovers is that we do not know what the "best" rover configuration is, so we must try a number of different combinations in actual field use to gain more hands-on experience.

The primary reason for the choice was the large amount of local computational power available on the Ambler (currently two Sun workstations), which allows the Ambler and its subordinate rovers to be almost entirely autonomous short of deciding the high-level mission goals. This is important, in that the mission design already has ten Marsokhod rovers, which use extensive human interaction to perform their missions.

A large part of this mission is a technology demonstration. The combination of ten Rocky IV rovers and an Ambler allows complicated grouping behaviors to be tested. This will increase the usability of the group without requiring complex computational facilities on each and every rover. See Gat's paper (Ref. Miller, 1990) for a more detailed description of grouping behaviors in simple/small rovers. The group behaviors can be implemented with simple high-level commands from the Ambler, which serves as a communication base/beacon/repeater with an antenna mounted high enough that it should be visible to most of the group at any time (a toy sketch of such commands follows). The Ambler will also carry one large communications system that will allow the group to communicate back to the mother ship (the cargo vehicle) over a much longer ground distance with higher data rates.
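
As a toy illustration of what those high-level group commands might look like (the command names and rover interfaces are entirely hypothetical, invented only for this sketch), the Ambler could broadcast a single task keyword and each micro-rover would map it onto a locally selected subset of behaviors, in the spirit of the behavior-control sketch in Section A.1.2.2.2:

# Hypothetical mapping from a broadcast group command to the behavior
# subset each micro-rover activates locally (all names invented).
GROUP_TASKS = {
    "survey_area":    ["avoid_obstacle", "spiral_search", "log_images"],
    "return_to_base": ["avoid_obstacle", "home_on_beacon"],
    "hold_position":  ["avoid_obstacle"],
}

class MicroRover:
    def __init__(self, name):
        self.name, self.active = name, []
    def activate(self, behaviors):
        self.active = behaviors
        print(self.name, "now running:", ", ".join(behaviors))

def broadcast(task, rovers):
    """Ambler side: send one short keyword instead of detailed motion commands."""
    for rover in rovers:
        rover.activate(GROUP_TASKS[task])

broadcast("survey_area", [MicroRover("rocky-1"), MicroRover("rocky-2")])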

The taller Ambler will allow us to mount a meteorology station above the surface layer and take this station a good distance from the landing site. Along with this meteorology station will go a high-quality imaging system. As Professor Don Lowe of Stanford University has said, a lot of excellent geology can be done with good, high-resolution imaging and a zoom lens. If we are able to mount such an imaging system high above the martian surface and move it around, we will be able to explore areas a long way away from the roving group without actually having gone to these sites.

It is assumed that the Ambler will have a much larger range than the Marsokhod. As the Ambler makes its traverse, it serves as a refueling depot for its daughter rovers (i.e., recharging their batteries). This extends the effective use of the Rocky rovers.

Finally, the mission already contains ten Marsokhods, which use up only a small portion of the 4000 kg payload available on the cargo vehicle. It would be silly to propose that we send another ten to the same area when we have an opportunity to test other systems. Admittedly, a sample return vehicle would probably fit nicely in this space and weight budget, but this would be a repetition of many previous proposals - numerous authors have discussed a variety of sample return missions. We need to try out some new ideas and get creativity flowing. This is an excellent opportunity to combine two different rover methodologies and do lots of science at the same time.

A.1.3.2 Attila

Attila is a small prototype developed by Rodney Brooks and Colin Angle at MIT. It is a continuation of the initial "insect" rover concept. The little bugger has six legs and a small video camera. In addition to Attila, Brooks has developed another insect-like microrobot known as Genghis. The ISX Corporation, which works with Brooks, had ISRobotics build a more robust, environmentally sealed, tracked microrover known as Pebbles, which it equipped with a soil scoop.

At the 1992 California desert rover tests, a ramp was added to the Russian Marsokhod rover so Attila and Pebbles could ride piggyback on the Marsokhod (Ref. Bullock, 1993). The idea was to provide a smaller vehicle that could carry a sampling tool into tight areas and also serve as an extra set of eyes should the parent rover get stuck. This is very similar to the SIMM '93 SITE-Recon mission concept (See Section 2.2.2).

A.1.3.3 Dante-Virgil

Dante was an attempt to robotically go where man has been unable to go: the throat of the active Mt. Erebus volcano in Antarctica. Dante's goal was to climb into the caldera and sample the active lava lake. Two previous attempts had been made by a combined New Zealand, French, and American team in 1974 and '78. The first attempt was aborted on the caldera rim, and the second was halted within 30 m of the floor when the scientist being lowered by rope was nearly killed in an eruption. The vehicle was to be delivered to the edge of the caldera by a turbine-driven vehicle named Virgil. Once Dante was let off of Virgil, it was to rappel down to the lava lake on a tether mounted on Virgil and take samples of the lava's composition. Unfortunately, Dante failed 21 feet below the caldera rim: the fiber optic communications link between Dante and Virgil broke, leaving Dante immobile. Due to this failure, the project was called off and Dante was rescued from the cliff. It was predicted that Dante would take 24 to 36 hours to descend the 850 feet to the crater floor.

Amazingly, the vehicle was designed, built, and deployed in only a year. The vehicle has eight legs, weighs 450 kg, and is 1.8 m x 2.5 m x 3 m in size. The complete system consisted of four nodes: Dante, Virgil, the Base Station (Erebus Hut), and Mission Control (NASA Goddard Space Flight Center). Dante's journey was computationally expensive: a total of 5 Sun4 and 3 Motorola 68030 VME computers were used throughout the system. See Table A.1.3.3-1 for more of the vehicle parameters.

The project was led by William "Red" Whittaker of the Field Robotics Center at Carnegie Mellon University, who planned and executed the project, and co-led by Philip R. Kyle of the New Mexico Institute of Mining and Technology. Dante is a synthesis of the previous work done at CMU on the NavLab and Ambler projects. "The robot demonstration project had three objectives: to test telerobotic capabilities; to test the use of such sophisticated hardware in a very harsh and demanding environment; and to test the use of advanced computer programs which would enable machines such as the Dante robot to act under a form of machine intelligence." The development and telepresence aspects of the process were considered a success despite the project's failure to reach the caldera floor, and David Lavery, NASA Telerobotics program manager for the project, feels "[t]he prototypes are worthy contenders for inclusion in any further planetary exploration."


DANTE

Propulsion:		4 legs on each of 2 frames
Height:			2.5 m
Width:			1.875 m (foot to foot)
Length:			3 m (foot to foot)
Ground Clearance:	0 to 1.45 m
Speed:			2 m/min.
Slope Handling:		30° without tether
Telemetry:		400 m fiber optic (in tether)
Power:			Virgil via tether

VIRGIL

Height:			1 m
Width:			1.2 m
Length:			4 m
Weight:			4540 kg
Speed:			30 kph
Track:			2.6 m
Ground Clearance:	0.1 to 0.4 m
Power:			Internal combustion engine & 5 kW Honda generator
Dante-Virgil
Table A.1.3.3-1

A.1.3.4 Robby

Robby comes out of work in the late '70s and early '80s on the Surveyor Lunar Rover Vehicle (SLRV). JPL combined its experience with the SLRV and the mission requirements of NASA's (now defunct) Mars Rover Sample Return (MRSR) program to develop the Robby testbed in 1988. Robby has been used to test planning algorithms, in particular the "Semiautonomous Navigation" (SAN) algorithm. In September 1990, Robby successfully navigated a 100 m course in 4.3 hours (Ref. Wilcox et al., 1992). Robby has also been used as a platform to test sampling arms and algorithms (Ref. Cameron et al., 1992).

Robby is rather large: three segments, for a total of about 4 meters long and 2 meters wide, with six wheels, each 1 meter in diameter. It weighs in at 1200 kg. Robby's design is able to surmount obstacles 50% larger than the wheel diameter.

A.1.3.5 Others

Obviously, there is much more out there than has been covered. A number of other rover projects the reader should be aware of: JPL's lab beast Tooth (Ref. Gat et al., 1992), JPL's tiny Go-For (Ref. Wilcox, 1992), MIT's MITY 1 & 2, plus many others. Also, do not forget all the work being done with submersible ROVs and AUVs - a rapidly developing industry that is doing a large amount of R&D. For example, see Section A.1.4.2 on telepresence ROVs.

A.1.4 Virtual Reality

Virtual Reality WWWeb Sites

Virtual Reality (VR): "Virtual Reality deals with convincing the participant that s/he is actually in another place, by replacing the normal sensory input received by the participant with information produced by a computer. This is usually done through three-dimensional graphics and I/O devices which closely resemble the participant's normal interface to the physical world. The most common I/O devices are gloves, which transmit information about the participant's hand (position, orientation, and finger bend angles), and head-mounted displays, which give the user a stereoscopic view of the virtual world via two computer-controlled display screens, as well as providing something to mount a position/orientation sensor on." (Ref. sci.virtual-worlds, 1992)

A.1.4.1 Introduction

As computers and their tasks become more complicated and overwhelming, it is important to work on methods that make communicating and interacting with these machines a more natural affair. In the last ten years, it has become possible to interact with systems in new ways, such as through hand gestures and voice commands (Ref. Fisher et al., 1986). LCD glasses and head-mounted displays (HMDs) allow the user to truly experience a three-dimensional world instead of the prevalent 2D computer screen. In addition to visual and visceral immersion, three-dimensional auditory displays are another important aspect (Ref. Wenzel et al., 1988), allowing the system to provide additional information without interfering with the user's vision. A good overview of the technologies currently in use to generate virtual realities is given by Foley (1986), and the best introduction to virtual reality was written recently by a NASA Ames researcher (Ref. Ellis, 1991). All of these new technological advances come together to produce some very interesting possibilities for planetary exploration that will affect everything from the way an astronaut will work on the surface of another planet to the average citizen who will get to explore the worlds created by orbiters, rovers, and astronauts.

The general population has a strange view of virtual reality. This comes from seeing such things as the Hollywood movie "Lawnmower Man." When asked about VR, people often bring up things like cyberpunks, "virtual sex," and Star Trek: The Next Generation's "Holodeck." This does not have much relation to the current technology. It is important to clear your mind of the tremendous hype while thinking about what VR can do for planetary exploration.

However, despite being far out of our technological reach, the Holodeck does convey the right feeling for what virtual reality is about. In a number of episodes the crew of the Enterprise explore copies of the real world, which allows them a deeper view of a situation or a system. The Holodeck is very similar to, and must have come from, "the Cave" - an MIT project that surrounds a person in a room with screens on all surfaces, thereby immersing the person in the environment projected by the video screens. A good example of the power of this immersion can be experienced at such places as the Disney Epcot Center's theater in the round, essentially a large circular auditorium with movie screens completely surrounding the audience. This total immersion has a much greater power to instill the immensity of something like the Grand Canyon than a simple flat screen could. The audience feels like they are there, because everywhere they turn is still the Grand Canyon. As the film drops into the canyon, the standing audience has to grab hand railings for balance since optic righting reflexes are stimulated. The goal is to make people feel like they are really there.

A.1.4.2 Telepresence

Telepresence: "Telepresence is a high fidelity form of remote control in which the natural sensory capabilities of the human operator are projected thru a robot to a distant work site." (Ref. Gwynne et al., 1992)

Geologists have said that, "The complex yet subtle nature of geological materials requires powers of observation, pattern recognition, and synthesis not possessed by automated devices. Field study...absolutely requires human geologists to be involved intimately." Telepresence technology provides us a way to get the scientists deeply involved in their field areas even when they are out of human reach, but we must be careful in applying telepresence: "if the remote operation becomes too cumbersome...the operator will concentrate more on mechanical aspects of the work and less on the intellectual ones" (Ref. Taylor and Spudis, 1989).

Rudimentary telepresence is achievable using only limited computational power. For example, the Telepresence Demonstration Project at NASA Ames achieved usable telepresence with its remotely piloted submersible using a simple Motorola 68000 based Commodore Amiga system (Ref. Schwehr, 1992). Using this platform, the group has shown that telepresence can be an effective tool for scientific exploration. Most planetary exploration applications, however, will need more computational power and more complex programs to function up to expectations.

The Telepresence Demonstration Project, run by C.R. Stoker, uses a submersible remotely operated vehicle (ROV), a Deep Ocean Engineering SuperPhantom II, as a development platform. The group added a head-tracked camera and science instruments to the ROV for an initial testbed. The vehicle has operated successfully in several very different environments: a coral reef in the Florida Keys, the south shore of Lake Tahoe, and the ice-covered Lake Hoare of the Antarctic Dry Valleys. The trip to Antarctica is the most important of these expeditions because, according to C.R. Stoker, "Antarctica is the most Mars-like environment on Earth." (Ref. NASA PR 92-147) The ROV is a big win for the group in allowing scientists to get around the physiological constraints of diving, letting them spend more time exploring their field areas. This is very similar to the constraints placed on astronauts by life support systems. The telepresence system of the ROV succeeded in helping to improve the pilot's situation awareness while exploring underwater. In all three cases the vehicle was restricted to being less than 1100 from the pilot at the control console.

While the ROV was in Antarctica, the software was extended on the ice by a brilliant field engineer to allow an operator to control the ROV from Ames while the vehicle dived under the ice near McMurdo base. The test was the first success of direct telepresence via satellite link. In December 1992, the CMU Dante project used this same satellite link to control the robot on Mt. Erebus from stateside.

Recently, Butler Hine's group at NASA Ames conducted further remote tests with a Marsokhod Rover located in Moscow at the IKI laboratory (Ref. NASA PR 93-84). The tests were designed to verify this technology for use on the Russian Mars 96 mission. Hine describes the control system at NASA Ames as "a 'tele-operator interface' because it is a combination of virtual reality and telepresence. We can drive the vehicle by looking through the rover's cameras, which is telepresence. We also can drive it using a computer-generated graphic simulation, which is virtual reality."

The time delay in the response of the telepresence system for all of these tests was between 1/3 of a second for direct control and a few seconds for telepresence via satellite. These results show that for on-site or "on planet" situations, straight telepresence control of a vehicle or other robotic system works quite well, but there are several reasons to want to add layers between the operator and the machine. The most dramatic reason is for teleoperation of robotic explorers on other planets from Earth. For example, the distance between the Earth and Mars causes round-trip delays of 11 min. 20 sec to 40 min. 50 sec depending on the positions of the two planets. With time delays of more than a few seconds, direct telepresence becomes more of a detriment and will cause the operator to quickly become fatigued (and damn bored!). The solution is to use predictive systems that model the environment and allow an operator to control a computer generated rover that acts like the real thing without the delay. The operator can then run through a series of operations with the model and wait for confirmation that the rover has achieved this set of goals. Predictive systems are currently being developed at NASA Ames in Hine's and McGreevy's groups and in Schenker's Man-Machine Systems Group at JPL.
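
The round-trip figures quoted above follow directly from the speed of light: the delay is simply twice the Earth-Mars separation divided by c. A quick sketch (the two separations used here are rough illustrative values chosen to reproduce the quoted range, not precise ephemeris numbers):

# Round-trip signal delay: t = 2 * d / c.
C_KM_PER_S = 299792.458   # speed of light in km/s

def round_trip_minutes(distance_km):
    return 2.0 * distance_km / C_KM_PER_S / 60.0

# Approximate Earth-Mars separations (illustrative values only).
for label, d_km in [("near closest approach", 1.0e8), ("near conjunction", 3.7e8)]:
    print(label, round(round_trip_minutes(d_km), 1), "minutes round trip")
# roughly 11 and 41 minutes - the range quoted in the text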

Another reason for wanting virtual reality between the operator and the vehicle is to allow the person to work through several different scenarios and see the simulated results of each. Once the operator decides on the proper sequence of events, he can move through the sequence with confidence or have the simulator send off the command sequence that was generated and saved during the simulated runs.
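
A minimal sketch of this "simulate, then commit" style of operation might look like the following. The classes and commands are hypothetical, invented only to show the flow; none of this corresponds to an actual Ames or JPL interface. The operator builds a command sequence against a local model, inspects the predicted result, and only then uplinks the whole sequence and waits for confirmation:

# Hypothetical "simulate, then commit" flow (illustration only).
class RoverModel:
    """Local predictive model of the rover."""
    def __init__(self):
        self.x = 0.0
        self.heading_deg = 0.0

    def apply(self, command):
        kind, value = command
        if kind == "turn":
            self.heading_deg += value
        elif kind == "drive":
            self.x += value          # heading ignored for simplicity
        return (self.x, self.heading_deg)

def predict(commands):
    model = RoverModel()
    return [model.apply(cmd) for cmd in commands]   # predicted state after each step

def commit(commands, uplink):
    uplink(commands)   # send the whole sequence, then wait for confirmation

sequence = [("turn", 30.0), ("drive", 5.0), ("drive", 5.0)]
print(predict(sequence))                      # operator reviews predicted states
commit(sequence, lambda cmds: print("uplinked", len(cmds), "commands"))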

"One can integrate and automate many of the vehicle functions that are normally driven manually. Teleoperation may thus be raised to a supervisory level, relieving the operator of tedious tasks as piloting the vehicle from point-to-point. Automated control may be employed at the operator's discretion to free him of tiring tasks that range from the most mundane to the most complicated." (Ref. Gwynne et al., 1992) An excellent example comes from a joint project between Stanford's Aerospace Robotics Laboratory (ARL) and the Monterey Bay Aquarium Research Institute (MBARI), which is developing technologies for use on several different remotely operated submersible vehicles. The goal of their project is to design task level controls that are off-loaded onto local computers which should free the operator to focus more on planning and decision making than on the difficulties of piloting a vehicle. The group has developed "and demonstrated the capability of combined camera and vehicle tracking of underwater targets" - i.e., the group created a system that can autonomously follow objects (such as a plastic turtle or a fish) as they move through the water. (Ref. Marks et al, 1992). When integrated with the MBARI ROV, this system should make following fish for hours on end a much more pleasant task for the ROV crew - allowing the scientist to keep following a fish without the highly experienced pilot always having to run the show.

McGreevy and Stoker (1991) conclude that "geologic field work consists of highly integrated perceptual, cognitive, manipulative, and locomotive behaviors organized around a hierarchy of scientific goals and methods, and it is unlikely that automation and robotics will be able to replace this human capability any time soon. Thus, for missions beyond the mere rote gathering of rocks, that is, for planetary surface field work, human presence is required. To effectively explore large areas of Mars or other terrestrial bodies, telepresence offers the best combination of man and machine."

A.1.4.3 Virtual Environments and Geographic Information Systems

While the various rovers sent to Mars are out exploring, they will return a large quantity of data that has a three-dimensional, spatial basis. A good way to store the data would be in a geographic information system (GIS) that allows each datum to be associated with the location at which it was collected. Once these data sets are put into the GIS along with the topographic data from the Mars Observer orbiter, they can be accessed with traditional database tools and directly from a virtual environment interface.
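
As a sketch of what "data associated with the location at which it was collected" might look like, the record below keys each observation by latitude, longitude, and mission day so it can be queried by place from either a database tool or a virtual-environment front end. The field names and query are invented for illustration, not taken from any existing Mars GIS:

# Toy location-keyed observation store (illustration only).
from dataclasses import dataclass

@dataclass
class Observation:
    lat_deg: float        # where the measurement was taken
    lon_deg: float
    sol: int              # martian day of the mission
    instrument: str
    value: float

catalog = [
    Observation(-14.6, 175.5, 12, "spectrometer", 0.73),
    Observation(-14.7, 175.4, 13, "seismometer", 0.02),
]

def near(records, lat, lon, radius_deg):
    """Crude lat/lon box query around a site of interest."""
    return [r for r in records
            if abs(r.lat_deg - lat) < radius_deg and abs(r.lon_deg - lon) < radius_deg]

print(near(catalog, -14.6, 175.5, 0.2))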

As these massive databases continue to expand, we need a good way to explore this model of Mars. Virtual environments (VEs) are re-creations, in virtual reality, of physical locations built from a variety of data sets. At the moment we have the Viking data sets, and soon the Mars Observer data sets, with which to create a virtual Mars to explore. An immersive interactive environment will allow scientists to explore Mars and get the best intuitive/visceral feel for the planet. It will also be valuable in getting astronauts intimately acquainted with the geologic setting before they even leave for Mars. They will be able to explore any part of Mars at any time during the trip and to re-explore any location.

Geologists like to have an oblique view of an area - i.e., like the view from a plane. "[The geologists] confessed to relying heavily on vision; theirs is a highly spatial business," say McGreevy and Stoker (1991). A VE can provide the astronauts with a close equivalent to a plane. The simulation will let them quickly see from any angle they choose, even angles not physically possible (like from inside a mountain!).

A Marssuit heads-up VR display can present the astronaut with terrain maps (topos), reference material, and orbital photos while he is in the field. Similar systems have frequently been proposed for use in the zero-G environment around the space station Freedom (Ref. Fisher et al., 1988) as an EVA spacesuit visor display. This system will allow easy hands-free note taking that is linked to the astronaut's location. The main impediment, speech recognition, should be up to speed by mission time.

A system similar to a "Dataglove" (Ref. Fisher, 1986) can be integrated into the suit to allow an astronaut to more easily interact with a virtual reality system. This system would sense the position and orientation of the astronaut's arms, hands, and fingers. A system like this would be lightweight and very trim (not bulky) through the use of small fiber optic flex sensors (patented by VPL Research, Inc.).

A.1.4.4 Entertainment

While the astronauts are confined to a small SEV and base, VR can provide an entertainment system with games and a way to tour Earth. Already, there have been attempts to commercialize virtual reality. In the game scene there are the BattleTech and Dactyl Nightmare games, which involve groups of people battling it out in a virtual world. On a more benign level, the Diaspar Virtual Reality Network has attempted to set up an educational/entertainment system called "The Lunar Tele-operation Model One (LTM1)." It is supposed to be a model of a lunar base which people can connect to with their personal computers. Once connected, they are able to interact with this simulated world and control model vehicles. An interesting idea, eh?

A.1.4.5 Conclusion

The technologies of virtual reality are at a point where they can be useful in planetary exploration right now, and they will continue to mature and become more refined in the future. By the time they are incorporated into the actual first manned mission to Mars, they should be a very sophisticated and integral part of mission operations. An example of the readiness of these technologies is that a telepresence system should be operational and used for the Mars 96 mission. There are also proposals for analyzing the soon-to-be-available Mars Observer data within a virtual reality setting. In the submersible industry, work is currently under way at several locations on integrating virtual reality and telepresence into ROVs and AUVs (untethered vehicles). On the military side, Simnet is a simulation system built by BBN for training military skills in tanks, helicopters, and other vehicles; it uses networked graphics displayed in physical mockups of the vehicles. There is also the wide variety of flight simulators for commercial and military use that rely on many of the same technologies.


Kurt Schwehr / schwehr _at_ cs stanford edu