Photo credit: Brittany Hosea-Small

Research Expertise and Interest

artificial intelligence, computational biology, algorithms, machine learning, real-time decision-making, probabilistic reasoning

Research Description

Stuart Russell is the Michael H. Smith and Lotfi A. Zadeh Chair in Engineering and a professor in the Division of Computer Science, EECS. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and philosophical foundations. He has also worked with the United Nations to create a new global seismic monitoring system for the Comprehensive Nuclear-Test-Ban Treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity. The latter topic is the subject of his new book, "Human Compatible: AI and the Problem of Control" (Viking/Penguin, 2019), which was excerpted in the New York Times and listed among Best Books of 2019 by the Guardian, Forbes, the Daily Telegraph, and the Financial Times.

Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum's Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the IJCAI Research Excellence Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association, the Feigenbaum Prize of the Association for the Advancement of Artificial Intelligence, and Outstanding Educator Awards from both ACM and AAAI. From 2012 to 2014 he held the Chaire Blaise Pascal in Paris, and from 2019 to 2021 he held the Andrew Carnegie Fellowship. In 2021 he was appointed by Her Majesty The Queen as an Officer of the Most Excellent Order of the British Empire (OBE) and was selected as Reith Lecturer. He is an Honorary Fellow of Wadham College, Oxford; Distinguished Fellow of the Stanford Institute for Human-Centered AI; Associate Fellow of the Royal Institute for International Affairs (Chatham House); and Fellow of the Association for the Advancement of Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI; it has been translated into 14 languages and is used in 1500 universities in 135 countries.

In the News

How To Keep AI From Killing Us All

In a new paper, UC Berkeley researchers argue that companies should not be allowed to create advanced AI systems until they can prove they are safe.

Featured in the Media

Please note: The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or positions of UC Berkeley.
May 18, 2022

An emerging arms race between major powers in the area of lethal autonomous weapons systems is attracting a great deal of attention. Negotiations on a potential treaty to ban such weapons have stalled while the technology rapidly advances.

Nov 28, 2021
Madhumita Murgia

The computer scientist Stuart Russell met with officials from the UK’s Ministry of Defence in October to deliver a stark warning: building artificial intelligence into weapons could wipe out humanity.

January 13, 2020
Peter High
Interviewed about his new book, "Human Compatible: AI and the Problem of Control," electrical engineering and computer sciences professor Stuart Russell, director of Berkeley's Center for Human-Compatible AI, discusses his background in the AI field and the rapid-fire developments of recent years, and looks to the future with warnings about crucial questions that need to be addressed. Asked whether he is optimistic that AI development can be regulated to protect society, he says: "Yes, I believe we can do that. It is part of a gradual maturing of the entire IT industry. Other areas such as civil engineering in which people build bridges and skyscrapers that people's lives depend on have developed codes of conduct and legal building codes over the centuries. That is perfectly normal. We do not think about that as onerous government regulation, but we are glad that the skyscraper conforms to the building codes when we are on the 72nd floor of a building. The IT industry is going to have to start becoming more similar to civil engineering and medicine, where people have a mature acceptance that the wellbeing of humans is in their hands, and they take that responsibility seriously. I do not believe the IT industry has quite gotten around to this idea that they have a serious effect on the world and not necessarily a good one."
October 7, 2019
Ned Desmond
Asked in an interview why he wrote his new book, "Human Compatible: Artificial Intelligence and the Problem of Control," electrical engineering and computer sciences professor Stuart Russell says: "I've been thinking about this problem -- what if we succeed with AI? -- on and off since the early 90s. The more I thought about it, the more I saw that the path we were on doesn't end well." Asked who should read it, he says: "I think everyone, because everyone is going to be affected by this. As progress occurs towards human level (AI), each big step is going to magnify the impact by another factor of 10, or another factor of 100. Everyone's life is going to be radically affected by this. People need to understand it. More specifically, it would be policymakers, the people who run the large companies like Google and Amazon, and people in AI, related disciplines, like control theory, cognitive science and so on." Professor Russell's book went on sale Tuesday, October 8.