Anyone with normal vision knows that a ball that appears to be rapidly growing larger is probably going to hit them on the nose.
But strap them into a virtual reality headset, and they may still need to take a few lumps before they pay attention to the visual cues that work so well in the real world, according to a new study from University of Wisconsin–Madison psychologists.
“The companies leading the virtual reality revolution have solved major engineering challenges—how do you build a small headset that does a good job presenting images of a virtual world,” says Bas Rokers, UW–Madison psychology professor. “But they have not thought as much about how the brain processes these images. How do people perceive a virtual world?”
Turns out, they don’t perceive it like the real world—at least not without training, according to a study Rokers and postdoctoral psychology researcher Jacqueline Fulvio published recently in the journal Scientific Reports.
In 2015, Fulvio found that people were flunking her simple test of three-dimensional perception, administered with a flat screen and standard 3D movie glasses: they were not good at discerning which direction a target was moving.
The researchers decided to move the test to virtual reality to provide more realistic indications of motion in three dimensions—such as binocular cues, in which slightly different views from the left and right eyes reveal depth, and motion parallax, in which closer objects appear to move faster than those farther away.
Shown a one-second glimpse of a small, round target moving across a plane that stretched away from them at roughly eye level, study participants correctly moved a virtual paddle to intercept the target less than a quarter of the time.
What Fulvio and Rokers found was that when most people put on a virtual reality headset, they still treat what they see like it’s happening on any run-of-the-mill TV screen.
Fulvio began giving study subjects visual and audible feedback. Once they’d watched the one-second flight and set their virtual paddle to catch the target, the game would reveal the target’s full path and play a cowbell for a success or a swish for a miss.
The visual feedback nearly doubled success rates. (The cowbell improved scores, too, but less so.)
When she turned off the VR system’s head tracking, taking away the effects of players’ head movements and making them passive viewers, their performance dropped back. When she gave a little of that freedom back—restoring the system’s response to head movements, but making the virtual world’s shifts lag behind the players’ movements by as much as half a second—they were still bad.
Interestingly, even players who reported keeping their heads stock-still improved when the virtual reality system incorporated the smallest wobbles of their heads into the scene they were seeing.
The results—that tiny head movements and typical binocular motion cues are there for the taking in virtual reality, but that most people only use them when they are actively shown how VR differs from a flat computer screen—should help virtual reality creators improve uptake of their products.
Rokers says demonstrating how people can be taught to use cues to three-dimensional motion that they otherwise ignore may ultimately help refine treatments for vision disorders such as blind spots or amblyopia (“lazy eye”), in which the brain can be trained to compensate for perceptual limitations.