How Do We Know Where Things Are?


A new study examines how the visual system stabilizes what you see when you move your eyes.

photo of a blue eye with white digital images superimposed
(Image by Sergey Nivens)
6/24/2021

Human eyes move about three times per second. Every time we move our eyes, the scene in front of us sweeps across the retina at the back of the eye, dramatically shifting the image the eyes send to the brain; yet, as far as we can tell, nothing appears to move. A new study provides insight into this process, known as “visual stabilization.” The results are published in the Proceedings of the National Academy of Sciences.
 

portrait of Patrick Cavanagh

 

“Our results show that a framing strategy is at work behind the scenes all the time, which helps stabilize our visual experience,” says senior author Patrick Cavanagh, a research professor in psychological and brain sciences at Dartmouth and a senior research fellow in psychology at both Glendon College and the Centre for Vision Research at York University. “The brain has its own type of steadycam, which uses all sorts of cues to stabilize what we see relative to available frames, so that we don’t see a shaky image like we do in handheld movies taken with a smartphone.”

The study consisted of two experiments, conducted both in person and online, which showed that even a small square frame moving on a computer monitor stabilized participants’ judgments of location. The effect was equally strong in the in-person and online versions.

In experiment one, a white, square frame moves left and right, back and forth, across a grey screen, and the left and right edges of the square flash when the square reaches the end of its path: the right edge flashes blue at one end of the travel and the left edge flashes red at the other, as shown in the figure. Participants were asked to adjust a pair of markers at the top of the screen to indicate the distance they saw between the flashed edges. One of the conditions, demonstrated in Movie 1, evaluated how far apart the outer left and right edges of the square frame appeared. In this example, the frame’s travel is longer than the frame’s size, so the red flash is physically to the left of the blue flash. However, once the frame is moving fast enough, blue is seen to the left of red, as if the frame were stationary. The moving frame fools us by stabilizing our judgments of location, illustrating what the researchers call the “paradoxical stabilization” produced by a moving frame. Later in the movie, the frame briefly fades out to reveal that the red flash is actually to the left of the blue flash, as it has been the entire time.


Experiment two again demonstrated the stabilizing power of a moving frame (see Movie 2) by flashing a red disc and a blue disc at the exact same location within a moving frame. The square frame moves back and forth from left to right while the discs flash red and blue in alternation. Even though there is no physical separation between the discs, the moving frame creates the appearance that they are located to the left and right of their true location. In other words, participants perceived the location of the discs in relation to the frame, as if it were stationary, and this held true across a wide range of frame speeds, sizes, and path lengths.

“By using flashes inside a moving frame, our experiments triggered a paradoxical form of visual stabilization, which made the flashes appear in positions where they were never presented,” says Cavanagh. “Our results demonstrate a 100% stabilization effect triggered by the moving frames—the motion of the frame has been fully discounted.”

Based on the study’s results, the research team plans to explore visual stabilization further using brain imaging at Dartmouth.

Mert Özkan, Guarini ’23, a graduate student in the department of psychological and brain sciences at Dartmouth; Stuart Anstis, professor emeritus of psychology at the University of California, San Diego; Bernard M. ‘t Hart, a postdoctoral researcher at the Centre for Vision Research at York University; and Mark Wexler, Chargé de Recherche at the Integrative Neuroscience and Cognition Center at the Université de Paris, served as co-authors of the study.