Research Expertise and Interest

artificial intelligence, intelligent systems and robotics

Research Description

Sergey Levine received a B.S. and M.S. in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, and deep reinforcement learning algorithms.

In the News

Learning to learn

When children play with toys, they learn about the world around them — and today’s robots aren’t all that different. At UC Berkeley’s Robot Learning Lab, groups of robots are working to master the same kinds of tasks that kids do: placing wood blocks in the correct slot of a shape-sorting cube, connecting one plastic Lego brick to another, attaching stray parts to a toy airplane.

Seven early-career faculty win Sloan Research Fellowships

Seven assistant professors from the fields of astronomy, biology, computer science, economics and statistics have been named 2019 Sloan Research Fellows. They are among 126 scholars from the United States and Canada whose early-career achievements mark them as being among today’s very best scientific minds. Winners receive $70,000 over the course of two years toward a research project.

Featured in the Media

Please note: The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or positions of UC Berkeley.
January 9, 2024
Sergey Levine and Karol Hausman

How 34 labs are teaming up to tackle robotic learning.

March 18, 2020
Eugene Demaitre
Berkeley researchers have developed a mobile robot called BADGR, which learns to navigate independently. "Most mobile robots think purely in terms of geometry; they detect where obstacles are, and plan paths around these perceived obstacles in order to reach the goal," says doctoral electrical engineering and computer sciences student Gregory Kahn, one of the study's co-authors. "This purely geometric view of the world is insufficient for many navigation problems." Kahn worked on the robot with electrical engineering and computer sciences professor Pieter Abbeel, director of the Berkeley Robot Learning Lab and co-director of the Berkeley Artificial Intelligence Research (BAIR) lab, and assistant electrical engineering and computer sciences professor Sergey Levine. With just 42 hours of autonomously collected data, BADGR outperformed Simultaneous Localization and Mapping (SLAM) approaches in a test, and it had less data to work with than other navigation methods, according to the researchers.