A Path to Resourceful Autonomous Agents
On Wednesday, April 12, Sergey Levine, associate professor of electrical engineering and computer sciences and the leader of the Robotic AI & Learning (RAIL) Lab at UC Berkeley, delivered the second of four Distinguished Lectures on the Status and Future of AI, co-hosted by CITRIS Research Exchange and the Berkeley Artificial Intelligence Research (BAIR) Lab.
Levine’s lecture examined algorithmic advances that can help machine learning systems retain both discernment and flexibility. Machines trained with offline reinforcement learning (RL) methods can solve problems in new environments by drawing on large datasets and previously learned lessons, while still maintaining the adaptability to develop new behaviors, and thus new solutions.
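As a rough, self-contained illustration of the offline RL setting Levine describes, the sketch below trains a tabular Q-function purely from a fixed batch of transitions in a toy chain environment. The environment, hyperparameters, and names are all illustrative assumptions, not material from the lecture.

```python
import numpy as np

# Toy illustration of offline RL: the agent learns only from a fixed
# batch of transitions collected beforehand, never interacting with
# the environment during training. The 5-state chain environment,
# hyperparameters, and names below are illustrative assumptions.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2  # actions: 0 = move left, 1 = move right
gamma = 0.9                 # discount factor

def step(state, action):
    """Chain dynamics: moving right at the last state yields reward 1."""
    if action == 1:
        if state == n_states - 1:
            return state, 1.0
        return state + 1, 0.0
    return max(state - 1, 0), 0.0

# "Offline" phase: a random behavior policy collects a static dataset.
dataset = []
state = 0
for _ in range(2000):
    action = int(rng.integers(n_actions))
    next_state, reward = step(state, action)
    dataset.append((state, action, reward, next_state))
    state = next_state

# Learning phase: Q-learning updates sweep the fixed batch repeatedly;
# no new environment interaction occurs.
Q = np.zeros((n_states, n_actions))
alpha = 0.1  # learning rate
for _ in range(50):
    for s, a, r, s_next in dataset:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

print("Greedy action per state:", Q.argmax(axis=1))  # expect all 1s
```

The key point of the sketch is that the learning loop only sweeps the static dataset; the agent never queries the environment for new experience.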
As Levine explained, data-driven, or generative, AI techniques, such as the image generator DALL-E 2, are capable of producing seemingly human-made creations, while RL methods, such as the algorithms that control robots and beat humans at board games, can solve problems in unexpected ways. His research aims to discover how machine learning systems can adapt to unknown situations and make optimal decisions when faced with the full complexity of the real world.
“If we really want agents that are goal-directed, that have purpose, that can come up with inventive solutions, it’ll take more than just learning,” said Levine. “Learning is important, and the data is important, but the combination of learning and search is a really powerful recipe.
“Data without optimization doesn’t allow us to solve new problems in new ways. Optimization without data is hard to apply to the real world outside of simulators,” he said. “If we can get both of those things, maybe we can get closer to this space explorer robot and actually have it come up with novel solutions to new and unexpected problems.”
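To make the "learning plus search" recipe concrete, here is a minimal, hypothetical sketch in the same toy chain setting: a value function (a stand-in for one estimated from offline data) scores leaf states, while a short-horizon lookahead search proposes actions. The values, horizon, and names are illustrative assumptions.

```python
import numpy as np

# Toy illustration of combining learning with search: a value function
# (a stand-in for one estimated from offline data) scores leaf states,
# while a short-horizon lookahead search chooses actions.

n_states, gamma, horizon = 5, 0.9, 3
V = np.array([0.59, 0.66, 0.73, 0.81, 0.90])  # pretend "learned" values

def step(state, action):
    """Same toy chain: moving right at the last state yields reward 1."""
    if action == 1:
        return (state, 1.0) if state == n_states - 1 else (state + 1, 0.0)
    return max(state - 1, 0), 0.0

def lookahead(state, depth):
    """Exhaustive depth-limited search; V(s) evaluates the leaves."""
    if depth == 0:
        return V[state]
    best = -np.inf
    for action in (0, 1):
        next_state, reward = step(state, action)
        best = max(best, reward + gamma * lookahead(next_state, depth - 1))
    return best

def plan(state):
    """Pick the action whose searched return is highest."""
    scores = []
    for action in (0, 1):
        next_state, reward = step(state, action)
        scores.append(reward + gamma * lookahead(next_state, horizon - 1))
    return int(np.argmax(scores))

print("Action chosen at state 0:", plan(0))  # expect 1 (move right)
```

In the spirit of the quote, neither piece suffices alone: the search has no signal at its leaves without the learned values, and the values alone cannot account for the consequences of multi-step action sequences without the search.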