Developed by the University of Illinois, the CAVE premiered at the ACM SIGGRAPH 92 conference. It has since achieved national recognition as a compelling display environment, earning a reputation as the “second way” to virtual reality in response to MIT’s head-mounted devices.
The CAVE is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to "The Simile of the Cave" found in Plato's Republic, in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are that person's only basis for ideas of what real objects are.
Morris Chuckman has been working at the University of Illinois for several years. He participated in this online interview to explain what the CAVE is and how it is being used.
Collusion: What is the CAVE?
Morris: Physically, the CAVE is a metal frame outlining the edges of a 10x10x10-foot cube. Polymer, plastic-like screens are stretched over the three vertical faces of the frame. You stand in the middle of this thing and wear shutter glasses. Outside the cube there are four projectors: three pointing at the polymer-covered faces and one pointing down at the floor.
The refresh rate of the projectors is synced to the flicker rate of the shutter glasses so that successive projected frames alternate between a left-eye and a right-eye image. The shutter glasses block each image from the eye it isn't meant for, effectively supplying each eye with its own customized 3D image.
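(For the technically curious: one common way to drive shutter glasses from OpenGL is quad-buffered stereo, where the driver flips between left and right back buffers in step with the glasses. A minimal sketch of the per-frame loop, assuming a GLUT window on stereo-capable hardware; drawScene() and the eye spacing are illustrative stand-ins, not the CAVE library's actual code.)

```cpp
// Sketch of frame-sequential stereo: each eye gets its own back buffer,
// and the swap happens in step with the shutter glasses' flicker rate.
// Requires stereo-capable hardware; all names here are illustrative.
#include <GL/glut.h>

void drawScene(float eyeOffset) {
    // Hypothetical placeholder: nudge the view sideways for this eye
    // and draw something. Real CAVE code uses head-tracked frustums.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyeOffset, 0.0f, 0.0f);
    glutWireCube(1.0);
}

void display() {
    const float halfIpd = 0.032f;  // half the eye spacing, assumed in meters

    glDrawBuffer(GL_BACK_LEFT);    // frames destined for the left eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(-halfIpd);

    glDrawBuffer(GL_BACK_RIGHT);   // alternate frames for the right eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(+halfIpd);

    glutSwapBuffers();             // swap in sync with the field rate
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("stereo sketch");
    glutDisplayFunc(display);
    glutMainLoop();
}
```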
The world is rendered onto the cube in such a way that the corners of the cube (the edges of the physical metal frame) are ignorable and the depth cues your brain receives come from the 3D image. It makes the experience sort of like taking a glass-elevator ride through a PlayStation game!
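(The trick that makes the frame's corners ignorable is an off-axis, or asymmetric, perspective projection: each wall's frustum has its apex at the viewer's tracked eye and its window at the fixed physical screen rectangle. A sketch for a hypothetical front wall, with made-up coordinates; the real CAVE library does this for all four surfaces.)

```cpp
// Off-axis projection for one CAVE wall: the frustum's apex is the tracked
// eye position and its window is the fixed physical screen rectangle.
// Coordinates are illustrative, not the CAVE's actual conventions.
#include <GL/gl.h>

// Front wall: a 10x10-foot screen in the plane z = -5 feet, spanning
// x,y in [-5, +5] in a CAVE-centered coordinate frame.
void setFrontWallProjection(float ex, float ey, float ez) {
    const float screenZ = -5.0f;          // wall plane
    const float nearZ = 0.1f, farZ = 100.0f;
    float d = ez - screenZ;               // eye-to-wall distance

    // Scale the screen edges onto the near plane (similar triangles).
    float left   = (-5.0f - ex) * nearZ / d;
    float right  = ( 5.0f - ex) * nearZ / d;
    float bottom = (-5.0f - ey) * nearZ / d;
    float top    = ( 5.0f - ey) * nearZ / d;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, nearZ, farZ);
    glTranslatef(-ex, -ey, -ez);          // move the tracked eye to the origin
}
```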
C: That architecture is beautiful. What do they do with it?
M: Mathematical and astronomical toys and simulations, even QUAKE II.
C: Really!! You can play Quake?!
M: Caterpillar uses it for earthmover design. Nalco uses it for boiler design. There are also medical applications. You can zoom in and walk through a 50-year-old dead cryo-woman. And, yes, QUAKE II!
C: Well, not many of us can say that we've swum through a heart. At least not yet. Have they tried integrating motion sensors?
M: Yeah. Your glasses are tracked and so is this wand that you hold. The wand also has a pressure-sensitive thumb joystick and three trigger buttons. The system can tell where you are in the cube, which way you're looking, where your wand is, and which way it's pointing. It also knows the pitch, roll, and yaw orientation of the sensors.
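(Concretely, an application might receive something like the following bundle of state each frame. This hypothetical struct just restates what Morris lists; the names are invented, not the CAVE library's real API.)

```cpp
// Hypothetical per-frame input record for head and wand tracking;
// field names are invented for illustration, not an actual API.
struct SensorPose {
    float x, y, z;           // position inside the cube, feet
    float pitch, roll, yaw;  // orientation of the sensor, degrees
};

struct CaveInput {
    SensorPose head;         // tracked via the shutter glasses
    SensorPose wand;         // tracked handheld controller
    float joystickX;         // pressure-sensitive thumb joystick, -1..+1
    float joystickY;
    bool  button[3];         // the three trigger buttons
};
```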
Now that I’ve finished the coding project for the Psychology department, I should be able to get some free time in there again. I'm thinking about what a 4-dimensional Rubik's hypercube might look like...
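(Rendering such a thing would start by projecting 4D points down to 3D, which works just like ordinary 3D perspective projection with w playing the role of depth. A toy sketch, with an assumed viewpoint on the w-axis:)

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Perspective-project a 4D point into 3D by dividing by its distance
// from a hypothetical camera sitting at w = 3 on the w-axis.
Vec3 projectTo3D(const Vec4& p) {
    const float cameraW = 3.0f;            // assumed viewpoint
    float scale = 1.0f / (cameraW - p.w);  // nearer in w => drawn larger
    return { p.x * scale, p.y * scale, p.z * scale };
}

int main() {
    // The 16 corners of a unit hypercube are every +/-1 combination.
    for (int i = 0; i < 16; ++i) {
        Vec4 corner = { i & 1 ? 1.f : -1.f, i & 2 ? 1.f : -1.f,
                        i & 4 ? 1.f : -1.f, i & 8 ? 1.f : -1.f };
        Vec3 p = projectTo3D(corner);
        std::printf("corner %2d -> (%.2f, %.2f, %.2f)\n", i, p.x, p.y, p.z);
    }
}
```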
C: Hell yeah! Why stick to 3 dimensions? Each point is its own universe.
M: You know it, budduh! The building that the CAVE is housed in is huge.
C: What department runs this? Who Sponsored it?
M: The CAVE spawned from a multi-disciplinary human-computer interaction cybernetics research complex that also has a center for studying eye tracking. NCSA, the National Center for Supercomputing Applications, gave the majority of the funding. They have a lot of government clout. This Beckman guy donated tons of cash to build the cybernetics research compound. He's a 99-year-old guy who started the chemical research industry (infrared spectrometers and things) in the '30s.
Many departments get to use the facilities. There is a really good animation class that gets in there through the art department. I originally gained access through a music professor to create a 3D visual music program.
C: What does the Art and Music department do with this stuff?
M: They have software that puts Maya animations in the CAVE. It's called Performer, and it works by grabbing the animation's polygon data and rendering it in the CAVE. I saw a 3D movie of a couple of colliding galaxies done like this. Performer is also a C++ programming library, and I've seen it used to develop a driving simulation.
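(A minimal Performer program follows a load-the-model-then-spin-the-frame-loop shape. The sketch below is reconstructed from memory of the C++ bindings and may not match the real headers exactly; the model filename is made up and error handling is omitted.)

```cpp
// Rough sketch of a minimal IRIS Performer viewer: load a polygon model
// and spin the draw loop. API details are from memory, not verified.
#include <Performer/pf.h>
#include <Performer/pfdu.h>

int main() {
    pfInit();                              // start Performer
    pfConfig();                            // fork the rendering processes

    pfScene*   scene = new pfScene;
    pfChannel* chan  = new pfChannel(pfGetPipe(0));
    chan->setScene(scene);

    // pfdLoadFile() pulls the polygon data out of a model file and
    // hands back a scene-graph node ready to render.
    pfNode* model = pfdLoadFile("galaxies.iv");  // made-up filename
    scene->addChild(model);

    while (1) {
        pfSync();    // wait for the next frame boundary
        pfFrame();   // kick off cull and draw for this frame
    }
}
```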
They use something called Virtual Director. It allows you to guide an audience through a virtual scene like the VR shit they have in the Omnimax. It's fun to play with because whatever virtual object you point the wand at shows up on a virtual TV in the corner of the CAVE.
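(Under the hood, pointing the wand at an object comes down to casting a ray from the wand's tracked position along its orientation and testing what it hits. A toy version against a bounding sphere, with all names invented for illustration, not Virtual Director's actual code:)

```cpp
// Toy wand picking: cast a ray from the wand and test it against a
// bounding sphere. Everything here is an illustrative stand-in.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Does the ray (origin, unit-length direction) hit a sphere?
bool wandHits(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
    float along = dot(oc, dir);              // closest approach along the ray
    if (along < 0.0f) return false;          // sphere is behind the wand
    float distSq = dot(oc, oc) - along * along;
    return distSq <= radius * radius;        // within the sphere's radius?
}
```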
C: Are any other schools doing similar work?
M: I have visited the University of Michigan at Ann Arbor and they seem to be right alongside U of I in all this. They focus more on the New Media side of things, whereas we are more scientifically oriented. U of I in Chicago developed the first CAVE, and Argonne National Laboratory is doing a lot of cool stuff along similar lines.
C: Has any effort been made to make cave animation accessible through the web in a real-time fashion, so a web user and a CAVE user could interact?
M: Not very much, unfortunately. CAVERNUS is an interCAVE communication system, but they don't have a WWW-to-CAVERNUS interface yet. It's a good, often-asked question.
C: Do any of the CAVE environments include texture maps of the objects?
M: Yes, QUAKE II is texture mapped! So is Mitologies and Crayoland and probably most of the stuff on the EVL homepage. The rendering is all flat, and I don't know when OpenGL plans on adding radiosity or the infamous ray tracing.
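(For reference, texture mapping in OpenGL of this era takes only a few calls. A minimal sketch that uploads a generated checkerboard; all values are illustrative:)

```cpp
#include <GL/gl.h>

// Minimal 1990s-style OpenGL texturing: upload a tiny generated
// checkerboard and enable texture mapping. Illustrative only; assumes
// a current GL context.
GLuint setupCheckerTexture() {
    static unsigned char texels[8][8][3];
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j) {
            unsigned char c = ((i + j) & 1) ? 255 : 0;   // checker pattern
            texels[i][j][0] = texels[i][j][1] = texels[i][j][2] = c;
        }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 8, 8, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, texels);
    glEnable(GL_TEXTURE_2D);   // subsequent polygons are texture mapped
    return tex;
}
```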
Additional Technical Data:
The CAVE uses Electrohome Marquee 8000 projectors throwing full-color workstation fields (1024x768 stereo) at 96 Hz onto the screens, giving approximately 2,000 linear pixels of resolution across the surrounding composite image. Computer-controlled audio provides a sonification capability through multiple speakers. A user's head and hand are tracked with Ascension tethered electromagnetic sensors. StereoGraphics' LCD stereo shutter glasses are used to separate the alternate fields going to the eyes. A Silicon Graphics Onyx with three Reality Engines creates the imagery that is projected onto the walls and floor. The CAVE's theater area fits in a 30x20x13-foot light-tight room, provided the projectors' optics are folded by mirrors.
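(One way to read those numbers: the shutter glasses split the 96 Hz field rate between the eyes, so each eye sees 48 frames per second; and since each wall is 1024 pixels across, roughly two walls' worth of pixels, about 2 x 1024, span the wraparound image at any one time, which is presumably where the ~2,000 figure comes from.)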
Currently the EVL’s CAVE staff is working on developments in the following areas:
Hardware integration and development
- Couple virtual environments (VE) to massively parallel processors, superworkstations, massive datastores, networks
- VE-to-VE tight coupling
- VE-to-VE transmission latency compensation
System software architecture design
- VE as a "scalable workstation" to the metacomputing environment
- Steer computations/ invoke programs on supercomputers
- Access massive datastores
- Collaborative environments (intraCAVE; interCAVE)
Human/computer interaction and navigation
- System-level software for interaction/navigation
Modes of interaction/navigation
- Graphical user interfaces
- Voice recognition
- Gesture recognition
- Tactile feedback
- Force feedback
- Motion control platforms
VE library and emulators
- Develop emulators in OpenGL
- Develop extensions to graphics libraries and toolkits to interface to VE devices (e.g., AVS, Inventor, NCSA Mosaic, Performer, RenderMan, AutoCAD)
- Upgrade emulators and libraries to include volume visualization
- Extend libraries to work with non-Cartesian data
- Sound/sonification tools for data analysis
CAVE development
- Design CAVE spaces larger than 10'x10'x10'
- Develop more durable, cost-effective VE systems for use in informal education
VE tools
- Virtual Director, Recorder, and Editor
- Quantitative analysis tools
- 3D user interface toolkit
URL
http://www.evl.uic.edu/pape/CAVE/