A new virtual reality facility at Oxford is studying how humans perceive the world in 3D, aided by state-of-the-art equipment
Virtalis has designed and implemented a virtual reality (VR)
facility for a new laboratory based at the University Laboratory
of Physiology, Oxford.
The Wellcome Trust provided a £500,000
grant to cover the cost of the specialist equipment.
The Virtual Reality Research Group is headed by
Andrew Glennerster and is formulating experiments to test
alternative theories that explain how we perceive the 3D world.
He explained: "Very few experiments have ever been carried
out that test how the brain represents 3D while a person is
moving.
This forms part of a bigger question troubling
neuroscience - how is information from different times and places
in the brain linked together in a coherent way?" He
continued: "Normally, as we move around, our eyes jump from
object to object about three times a second, yet we are quite
unaware of any change.
We are also unaware of the swirling
patterns of motion that are generated on the retina as we move in
a static environment.
Of course, it would be disastrous if we did
perceive the dramatic retinal changes produced by saccadic eye
movements, or the subtler retinal flow produced by head
movements, but the question remains: how does the brain make
sense of all this rapidly changing visual information?" Dr
Glennerster's research team consists of experts from
different fields, including spatial vision and motor control.
The experiments they have already devised rely on the
'immersive virtual reality' created by the Virtalis
system, where virtual environments are experienced in real time
as the person moves.
The researchers can subtly alter the way images change
as the observer moves so that they are no longer compatible with
a real, stationary 3D scene.
In this way, the researchers can
begin to work out the rules or algorithms the visual system uses.
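One way such a manipulation could be implemented is to make the scale of the virtual scene a smooth function of the observer's position, so that the images no longer behave like a static room. A minimal sketch follows (hypothetical code, not the laboratory's actual software; the function names, room width and scale range are assumed values):

```python
def expanded_scene_scale(observer_x, room_width=3.0, max_scale=4.0):
    """Map the observer's position across the room to a scene scale factor.

    Hypothetical sketch: as the observer walks from one side of the room
    (x = 0) to the other (x = room_width), the virtual scene is scaled
    smoothly from 1x up to max_scale - slowly enough to go unnoticed.
    """
    t = min(max(observer_x / room_width, 0.0), 1.0)  # clamp to [0, 1]
    return 1.0 + (max_scale - 1.0) * t

def scaled_vertex(vertex, scale, centre=(0.0, 0.0, 0.0)):
    """Scale one scene vertex about a fixed centre point."""
    return tuple(c + scale * (v - c) for v, c in zip(vertex, centre))
```

Applying `scaled_vertex` to every vertex before rendering each frame would yield images that are internally consistent at any instant, yet incompatible with any single stationary 3D scene over time.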
It is already clear that the algorithms deployed by robots are
quite unlike anything the human visual system uses.
One application of this work is in making computer vision systems
'see' in a way that is more like human vision.
In one experiment currently taking place in the laboratory, the scene
expands as the observer walks through it, but because this
happens slowly, the expansion is almost imperceptible.
Observers can be fooled into thinking that two objects are the same size when
in fact one is four times larger than the other.
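The geometry behind the illusion is straightforward: the retinal image size of an object depends on the visual angle it subtends, and an object four times larger at four times the distance subtends exactly the same angle. A quick check:

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle subtended by an object of a given size at a given
    distance from the eye (both in the same units)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A 0.5 m object at 2 m and a 2 m object at 8 m subtend identical
# visual angles, so their retinal images are the same size.
near = visual_angle_deg(0.5, 2.0)
far = visual_angle_deg(2.0, 8.0)
```

In an expanded room, distances grow along with sizes, so visual angles alone cannot reveal the change; only stereopsis and motion parallax could.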
The impression that the room has stayed the same size is so strong that it seems to
overcome evidence from binocular stereopsis and motion parallax,
which are normally powerful cues telling us about 3D shape and
depth.
In another experiment, an object can turn towards or away
from the observer as they walk around the room.
It is surprisingly difficult in this situation to decide when the
object really is stationary unless there are other objects nearby
to compare it with.
Like the expanding room experiment, these
results suggest that information about the relation between
objects may be more important to the visual system than a global
reconstruction of the scene.
A third experimental strand analyses
the dynamic performance of a tracking system used for virtual
reality.
Dr Glennerster commented: "The team from
Virtalis, headed by Andy Connell, guided us through the choice of
hardware for our purposes, advising us on equipment that would be
upgradeable in the future.
They also set up our initial software
and measured the location of ultrasound emitters with a
theodolite to provide an accurate tracking system.
The equipment at the Laboratory of Physiology consists of a Datavisor
80 head mounted display unit that displays a separate image to
each eye, providing a large overall field of view (FOV) of
approximately 112°, including a 44° binocular FOV.
These images are updated at 60 Hz, allowing real-time rendition of the
virtual environment, as subjects move their heads freely.
Position is measured using an Intersense IS-900 VET tracking
system that uses time-of-flight information from a series of
ultrasound emitters placed around the room, combined with an
inertia-based signal about the acceleration of the tracker.
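The acoustic part of such a tracker rests on a simple principle: each pulse's time of flight gives the distance to an emitter, and several such distances pin down position. A toy 2D sketch follows (illustrative only; the real IS-900 works in 3D and fuses inertial data, and the speed of sound here is an assumed constant):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def tof_distance(time_of_flight_s):
    """Distance implied by an ultrasound pulse's time of flight."""
    return SPEED_OF_SOUND * time_of_flight_s

def trilaterate_2d(emitters, distances):
    """Recover a 2D tracker position from three emitter positions and
    the corresponding time-of-flight distances.

    Linearise the three circle equations by subtracting the first,
    leaving a 2x2 linear system in (x, y), solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = emitters
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

Ultrasound updates arrive relatively slowly, which is why the inertial signal matters: it fills in high-frequency motion between acoustic fixes.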
The position and orientation of the head-tracker are used to calculate
the two binocular optic centres, from which an SGI Onyx 3200
renders the appropriate images for each eye.
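Deriving the two optic centres from the tracked head pose amounts to offsetting half the interpupillary distance to either side of the head centre, along the head's right axis. A simplified, yaw-only sketch (hypothetical code; the 64 mm interpupillary distance is an assumed default):

```python
import math

def eye_positions(head_pos, yaw_deg, ipd=0.064):
    """Binocular optic centres from a tracked head pose.

    Simplified sketch: yaw-only orientation and an assumed
    interpupillary distance (ipd, metres). Each eye sits half the ipd
    from the head centre along the head's right-pointing axis.
    """
    yaw = math.radians(yaw_deg)
    # Head's right-pointing unit vector in the (x, z) ground plane.
    rx, rz = math.cos(yaw), -math.sin(yaw)
    hx, hy, hz = head_pos
    half = ipd / 2.0
    left = (hx - half * rx, hy, hz - half * rz)
    right = (hx + half * rx, hy, hz + half * rz)
    return left, right
```

A full system would use the tracker's complete 3D orientation (roll and pitch as well as yaw), but the principle of rendering one image per optic centre is the same.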
Virtalis's technical director said: "This novel
solution enables the position of the users' eyes to be
tracked by micro cameras which look through the specially adapted
optics of the head mounted display into users' pupils.
These data, when combined with the tracking information, give the
researchers insight into how the brain processes the 3D world
presented by the VR system."