Led by Dartmouth’s James Haxby, Neuroscientists Unlock Shared Brain Codes


Dartmouth College Press Release
Contact the Office of Public Affairs
603-646-3661 • 603-646-2850 (fax)
office.of.public.affairs@dartmouth.edu

A team of neuroscientists at Dartmouth College has shown that different individuals’ brains use the same, common neural code to recognize complex visual images.

Dartmouth neuroscientist James Haxby studies the encoding of visual images in the brain. (photo by Joseph Mehling ’69)

Their paper, “A common, high-dimensional model of the neural representational space in human ventral temporal cortex,” is published in the October 20, 2011, issue of the journal Neuron. The paper’s lead author is James Haxby, the Evans Family Distinguished Professor of Cognitive Neuroscience in the Department of Psychological and Brain Sciences. Haxby is also the director of the Cognitive Neuroscience Center at Dartmouth and a professor in the Center for Mind/Brain Sciences at the University of Trento in Italy. Swaroop Guntupalli, a graduate student in Haxby’s laboratory, developed software for the project’s methods and ran the tests of their validity.

Haxby developed a new method called hyperalignment to create this common code and the parameters that transform an individual’s brain activity patterns into the code.
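The release does not spell out the computation behind hyperalignment, but the idea of rotating one viewer’s activity patterns into a shared space can be illustrated with an orthogonal Procrustes fit. The sketch below (Python with NumPy, toy data and made-up array sizes) is only a minimal illustration of that idea, not the paper’s actual procedure.

```python
import numpy as np

def procrustes_align(source, target):
    """Return an orthogonal matrix R that best maps `source` response
    patterns onto `target` patterns in the least-squares sense,
    i.e. minimizes ||source @ R - target||.  Both inputs are
    (time points x voxels) arrays from the same movie viewing."""
    # The optimal orthogonal transform comes from the SVD of the
    # cross-covariance between the two subjects' voxel spaces.
    u, _, vt = np.linalg.svd(source.T @ target, full_matrices=False)
    return u @ vt

# Toy stand-ins for movie-evoked fMRI responses (sizes are made up):
# 200 time points x 50 voxels per subject.
rng = np.random.default_rng(0)
subj_a = rng.standard_normal((200, 50))
hidden_rotation, _ = np.linalg.qr(rng.standard_normal((50, 50)))
subj_b = subj_a @ hidden_rotation + 0.1 * rng.standard_normal((200, 50))

R = procrustes_align(subj_a, subj_b)   # the transformation parameters for subject A
aligned_a = subj_a @ R                 # subject A expressed in the shared space
```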

The parameters are a set of numbers that act like a combination unlocking that individual’s neural code, Haxby said, allowing activity patterns in that person’s brain to be decoded (that is, matched to the visual images that evoked them) by comparing them to patterns in other people’s brains.

“For example, patterns of brain activity evoked by viewing a movie can be decoded to identify precisely which part of the movie an individual was watching by comparing his or her brain activity to the brain activity of other people watching the same movie,” said Haxby.
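To make that decoding step concrete, the hypothetical function below guesses which movie time point evoked a response pattern by correlating it against reference patterns from other viewers that have already been brought into the common space; the names, array sizes, and toy data are assumptions for illustration only.

```python
import numpy as np

def decode_time_point(pattern, reference_patterns):
    """Identify the movie time point whose reference pattern most
    resembles `pattern`, using Pearson correlation as the similarity.

    pattern:            one viewer's aligned response, shape (voxels,)
    reference_patterns: other viewers' aligned responses, shape
                        (time points, voxels)
    """
    scores = [np.corrcoef(pattern, ref)[0, 1] for ref in reference_patterns]
    return int(np.argmax(scores))

# Toy check: a noisy copy of the pattern at time point 120 should decode to 120.
rng = np.random.default_rng(1)
references = rng.standard_normal((200, 50))            # hypothetical group responses
probe = references[120] + 0.2 * rng.standard_normal(50)
print(decode_time_point(probe, references))            # almost always prints 120
```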

When someone looks at the world, visual images are encoded into patterns of brain activity that capture all of the subtleties that make it possible to recognize an unlimited variety of objects, animals, and actions.

“Although the goal of this work was to find the common code, these methods can now be used to see how brain codes vary across individuals because of differences in visual experience due to training, such as that for air traffic controllers or radiologists, to cultural background, or to factors such as genetics and clinical disorders,” he said.

Because of variability in brain anatomy, brain decoding previously required a separate analysis for each individual. A detailed analysis of one person could break that person’s brain code, but it said nothing about the code of a different person. In the paper, Haxby shows that all individuals use a common code for visual recognition, making it possible to identify specific patterns of brain activity for a wide range of visual images that are the same in all brains.

As a result of their research, the team showed that a pattern of brain activity in one individual can be decoded by finding the picture or movie that evoked the same pattern in other individuals.

Participants in the study watched the movie Raiders of the Lost Ark while their patterns of brain activity were measured using fMRI. In two separate experiments, they viewed still images from seven categories of faces and objects (male human faces, female human faces, monkey faces, dog faces, shoes, chairs, and houses) or from six animal species (squirrel monkeys, ring-tailed lemurs, mallards, yellow-throated warblers, ladybugs, and luna moths). Analysis of the brain activity patterns evoked by the movie produced the common code. Once the brain patterns, including responses that were not evoked by the movie, were translated into the common code, distinct patterns emerged that were shared across individuals and specific enough to support fine distinctions, such as monkey versus dog faces or squirrel monkeys versus lemurs.
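The still-image test can be pictured as a between-subject classification in the common space: category templates are averaged from some subjects’ responses, and another subject’s patterns are assigned to the most-correlated template. The sketch below uses made-up data and labels to illustrate that logic; it is not the paper’s analysis pipeline.

```python
import numpy as np

def between_subject_classify(test_patterns, train_patterns, train_labels):
    """Assign each held-out pattern to the category whose average
    training pattern (from the other subjects) is most correlated with it."""
    labels = np.asarray(train_labels)
    categories = sorted(set(train_labels))
    templates = np.stack([train_patterns[labels == c].mean(axis=0)
                          for c in categories])
    predictions = []
    for p in test_patterns:
        scores = [np.corrcoef(p, t)[0, 1] for t in templates]
        predictions.append(categories[int(np.argmax(scores))])
    return predictions

# Hypothetical two-category example with patterns already in the common space.
rng = np.random.default_rng(2)
dog_proto, monkey_proto = rng.standard_normal((2, 50))
train = np.vstack([dog_proto + 0.3 * rng.standard_normal((10, 50)),
                   monkey_proto + 0.3 * rng.standard_normal((10, 50))])
labels = ["dog face"] * 10 + ["monkey face"] * 10
test = np.vstack([dog_proto + 0.3 * rng.standard_normal((3, 50)),
                  monkey_proto + 0.3 * rng.standard_normal((3, 50))])
print(between_subject_classify(test, train, labels))
# expected: three 'dog face' predictions followed by three 'monkey face' predictions
```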

This work is part of a five-year collaboration with signal processing scientists at Princeton University.

Latarsha Gatlin