The overall goal of Jack Gallant's research program is to discover how information is represented in the brain under natural conditions (i.e., natural stimulation and natural tasks). His work addresses two broad questions: How is information about the visual world represented across the brain? And how are these representations modulated by attention, learning, and memory? His laboratory currently uses fMRI together with statistical and computational modeling to produce detailed human cortical maps of information related to vision, language, and decision making; to systematize human functional anatomy; to characterize individual differences in human cortical organization; to understand dynamic thought processes in the human brain; and to decode human brain activity. The lab also has an active research program aimed at developing next-generation non-invasive brain measurement technologies that could form the basis of powerful new brain-machine interfaces.
In the News
Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.
UC Berkeley scientists Jack Gallant and Shinji Nishimoto have wowed the world by using brain scans and computer modeling to reconstruct images of what we see when we’re watching movies. UC Berkeley broadcast manager Roxanne Makasdjian has produced a video of how they achieved this breakthrough, and where they’re headed.
Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, UC Berkeley scientists are bringing these futuristic scenarios within reach. Using functional magnetic resonance imaging (fMRI) and computational models, the researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.