Jack L. Gallant

Title
Professor of Psychology and Vision Science; Helen Wills Neuroscience Institute
Department
Department of Psychology
Phone
510-642-2606
Fax
510-642-5293
Research Expertise and Interest
vision science, form vision, attention, fMRI, computational neuroscience, natural scene perception, brain encoding, brain decoding
Description

The work in our laboratory focuses on computational modeling of the visual system. Our goal is to formulate models that describe how the brain encodes visual information, and that accurately predict how the brain responds during natural vision. We study the visual system because it is more approachable than the cognitive systems that mediate complex thought.

The human visual system consists of a hierarchically organized, highly interconnected network of several dozen distinct areas. Each area can be viewed as a computational module that represents different aspects of the visual scene. Some areas process simple structural features of a scene, such as edge orientation, local motion and texture. Others process complex semantic features, such as faces, animals and places. Our laboratory focuses on discovering how each of these areas represents the visual world, and on how these multiple representations are modulated by attention, learning and memory. Because the human visual system is exquisitely adapted to process natural images and movies, we focus most of our effort on natural stimuli.

One way to think about visual processing is in terms of neural coding. Each visual area encodes certain information about a visual scene, and that information must be decoded by downstream areas. Both the encoding and decoding processes can, in theory, be described by an appropriate computational model of the stimulus-response mapping function of each area. Our descriptions of visual function are therefore posed in terms of quantitative computational encoding models. Moreover, once an accurate encoding model has been developed, it is fairly straightforward to convert it into a decoding model that can be used to read out brain activity in order to classify, identify or reconstruct mental events. In the popular press this is often called “brain reading”. Our laboratory has a large brain-reading program, which you can read about elsewhere on this site.
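
To make the conversion from encoding to decoding concrete, the sketch below illustrates one simple version of the idea, under assumed conditions that are not this laboratory's actual pipeline: a linearized encoding model (ridge regression from hypothetical stimulus features to simulated voxel responses) is fit on training data, and the same fitted model is then reused to identify which held-out stimulus produced a given response pattern. All variable names and the synthetic data are illustrative assumptions.

# Minimal sketch (not the lab's actual code): fit a per-voxel encoding model,
# then decode by identification. Data here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training set: each stimulus is summarized by a feature vector
# (e.g., filter energies or semantic labels); each row of the response matrix
# is the measured activity across voxels for that stimulus.
n_train, n_features, n_voxels = 200, 50, 30
stim_features = rng.standard_normal((n_train, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
voxel_responses = stim_features @ true_weights + 0.5 * rng.standard_normal((n_train, n_voxels))

# Encoding step: learn a stimulus-to-response mapping for every voxel at once.
encoder = Ridge(alpha=10.0).fit(stim_features, voxel_responses)

# Decoding by identification: predict the response pattern for each candidate
# held-out stimulus, then match an observed response to the candidate whose
# predicted pattern correlates with it most strongly.
n_test = 20
test_features = rng.standard_normal((n_test, n_features))
test_responses = test_features @ true_weights + 0.5 * rng.standard_normal((n_test, n_voxels))
predicted = encoder.predict(test_features)

def identify(observed, predicted_patterns):
    """Index of the candidate whose predicted pattern best matches the observed one."""
    corrs = [np.corrcoef(observed, pred)[0, 1] for pred in predicted_patterns]
    return int(np.argmax(corrs))

correct = sum(identify(test_responses[i], predicted) == i for i in range(n_test))
print(f"identified {correct}/{n_test} held-out stimuli correctly")

In this sketch, identification succeeds whenever the voxel-wise predictions are accurate enough to distinguish the candidate stimuli; approaches to full reconstruction typically go further and combine such predictions with a prior over natural images or movies.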

Much of the work in our laboratory involves functional magnetic resonance imaging (fMRI), a rapidly developing technique for making non-invasive measurements of brain activity. Because the accuracy of encoding and decoding models inevitably depends on the quality of the underlying brain activity measurements, we maintain a large research program devoted to developing new methods for collecting and processing large, high-quality fMRI datasets. Our computational models draw on many different statistical and machine learning tools, including nonlinear system identification, Bayesian estimation theory and information theory. We therefore also undertake fundamental research on statistical methods and estimation theory.
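
As one concrete illustration of how these tools connect (a standard textbook identity, not a formula taken from this page): if a linearized encoding model assumes Gaussian response noise and a Gaussian prior on its weight vector $w$, then maximum a posteriori (Bayesian) estimation of the weights reduces to ridge regression,

$\hat{w} \;=\; \arg\max_{w}\; p(r \mid S w)\, p(w) \;=\; \arg\min_{w}\; \lVert r - S w \rVert_2^2 + \lambda \lVert w \rVert_2^2,$

where $S$ holds the stimulus features, $r$ holds the measured responses of a single voxel, and the regularization weight $\lambda$ equals the ratio of the noise variance to the prior variance on the weights.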

In the News

December 21, 2011

New video shows reconstruction of 'brain movies'

UC Berkeley scientists Jack Gallant and Shinji Nishimoto have wowed the world by using brain scans and computer modeling to reconstruct images of what we see when we’re watching movies. UC Berkeley broadcast manager Roxanne Makasdjian has produced a video of how they achieved this breakthrough, and where they’re headed.

September 22, 2011

Scientists use brain imaging to reveal the movies in our mind

Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, UC Berkeley scientists are bringing these futuristic scenarios within reach. Using functional magnetic resonance imaging (fMRI) and computational models, researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.