News

Life with machine: Robot relationships get real

May 1, 2016

Berkeley’s renowned programs in artificial intelligence and robotics involve scores of professors in the College of Engineering. Not one of them is typical, as the three quite different researchers here demonstrate while discussing technologies that are bringing machines and humans into closer relationships. Rikky Muller works at the boundary where sensitive machines and human brains make physical contact. Ken Goldberg trains robots to work independently and act competently in the world around them. Anca Dragan coaches robots and humans to understand one another’s intentions. Their aim is to create machines with the intelligence to better serve and work with human beings.

Machines that listen

Imagine, floating on the surface of your brain, there’s a fleck of polymer the diameter of a pencil eraser and as thin as Saran Wrap; it carries an array of microelectrodes that listen to signals from your motor cortex. The neurons in that region, at the top of your head, fire when you walk around, pick up a glass of water, type a text message or move in other ways.

The neuronal signals are digitized and sent from your head to a computer by a tiny loop antenna; the same antenna transmits data and receives power for the implant, rendering internal batteries and skull-piercing wires unnecessary. The whole brain-machine interface resides on a chip safely sealed inside your head.

“Signals recorded from the motor cortex can be used to control a multitude of external devices,” says Rikky Muller, an assistant professor of electrical engineering and computer sciences (EECS) since January 2016. “That includes robotic prosthetic arms: if you’re paralyzed, it’s a way to bypass any ‘open circuits’ in your body and connect signals directly from your brain.”

With bachelor’s and master’s degrees in electrical engineering from MIT, Muller came to Berkeley in 2007 to pursue her Ph.D., concentrating on integrated circuit design and neuroengineering. Intent on developing a new kind of neural implant, she says, “I took a very clinical focus on how we can make something that really lasts a long time and is safe inside the brain.”

Muller’s close collaboration with her advisor, EECS professor Jan Rabaey, and others including EECS associate professor Michel Maharbiz, led to the design of an ultra-small, minimally invasive wireless implant. Its sensors are offered to researchers as the first product of Cortera Neurotechnologies, which the inventors founded in 2013. Muller was Cortera’s first CEO and later its chief technology officer.

Cortera participated in one of the first grants from President Obama’s 2013 BRAIN initiative, charged with developing an implant that not only records but actually stimulates the brain. The goal is to treat serious neuropsychiatric disorders for which effective treatments have been elusive, including post-traumatic stress disorder and major depression. The project comes with profound technical challenges.

For one, deep-brain stimulating electrodes require much more power than recording electrodes. “You have to put a lot of thought into how you power these systems and design for efficiency,” says Muller. The objective is to adjust the delicate balance of sensitive recording and intrusive stimulation automatically and individually, for each patient.
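The control problem she describes can be sketched in miniature. The short Python snippet below illustrates, purely schematically, a closed-loop controller that nudges stimulation toward a per-patient target while respecting a hard amplitude cap; the function name, gain, thresholds and units are hypothetical and do not come from Cortera's design.

    # Minimal, purely illustrative closed-loop stimulation controller.
    # All names, gains, thresholds and units are hypothetical, not Cortera's design.

    def update_stimulation(biomarker_power, target_power, current_amp_ua,
                           gain=0.05, max_amp_ua=3000.0):
        """Nudge stimulation amplitude toward a per-patient target biomarker level."""
        error = biomarker_power - target_power        # how far the recorded signal is from the goal
        new_amp = current_amp_ua + gain * error       # simple proportional adjustment
        return min(max(new_amp, 0.0), max_amp_ua)     # clamp to a hard safety/power limit

    # One control step for a hypothetical patient profile.
    next_amp = update_stimulation(biomarker_power=120.0, target_power=80.0,
                                  current_amp_ua=500.0)
    print(f"next stimulation amplitude: {next_amp:.1f} microamps")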

Muller is nothing if not determined. Whether the recipient is quadriplegic or suffering with PTSD, she says, “What’s important to me is to get the technology into the hands of patients — to give options to people who currently don’t have any options.”  

Machines that touch

Ken Goldberg, a professor of industrial engineering and operations research (IEOR), seeks to extend close cooperation between humans and machines to the widest possible contexts.

A year ago, for the Berkeley-based Center for Information Technology Research in the Interest of Society (CITRIS), Goldberg launched the “People and Robots” initiative (CPAR), partly in response to the much-discussed singularity — the fear that runaway machine intelligence could threaten the human race. Countering with the concept of multiplicity, CPAR brings together diverse groups of robots, humans and algorithms to solve problems efficiently through collaborative learning.

Goldberg traces his enthusiasm for robots all the way back to the TV show Lost in Space. He studied economics and electrical engineering at the University of Pennsylvania, then completed his Ph.D. in robotics at Carnegie Mellon. He joined the Berkeley faculty in 1995 and has appointments in the School of Information, new media, art practice (independently, he’s a recognized artist) and the department of radiation oncology at UC San Francisco.

Medical robotics is a leading example of machines collaborating intimately with humans. Since 2000, the da Vinci Surgical System, a robotic device made by Sunnyvale-based Intuitive Surgical and guided by surgeons working from nearby consoles, has performed three million minimally invasive laparoscopic surgeries.

A da Vinci system has two or more articulated arms ending in slender probes. One is mounted with a tiny camera; others wield forceps, needles, cauterizers or other instruments. The surgeon watches high-definition video from the camera while manipulating handles that cause the instruments to reproduce the movement of wrists, hands and fingers.

In 2014, Intuitive Surgical, which is led by CEO Gary Guthart, a Berkeley engineering physics alumnus, made a first-generation da Vinci research kit available to Goldberg and EECS professor Pieter Abbeel. Later that year, Goldberg and Abbeel founded the Center for Automation and Learning for Medical Robots (Cal-MR), which aims to extend the ability of both humans and robots to cooperate in performing new tasks and learning from one another.

Although Goldberg wants medical robots to have some ability to act on their own, he’s firm that “we’re trying to assist surgeons, not replace them.” He targets tedious tasks that robots could do as well as surgeons, such as debridement — the removal of dead tissue and other debris. “It can take hours,” says Goldberg, “and it’s not using the best skills of the surgeon.”

To teach the robots what to do, Goldberg and Abbeel had them learn from demonstration videos by expert surgeons. Robots then autonomously removed debris from lifelike plastic models of tissue called phantoms. In early tests, the robots performed more slowly than surgeons but with equal dexterity.
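At its simplest, learning from demonstration can be implemented as behavioral cloning: record (state, action) pairs from the expert, then have the robot copy the action whose recorded state best matches the situation it sees. The Python sketch below shows only that general pattern, with made-up numbers; it is not the Cal-MR pipeline.

    # Schematic of learning from demonstration via nearest-neighbor behavioral cloning.
    # The arrays are made-up placeholders, not surgical data.
    import numpy as np

    demo_states  = np.array([[0.0, 0.1], [0.5, 0.4], [0.9, 0.8]])   # e.g., recorded tool-tip positions
    demo_actions = np.array([[0.1, 0.0], [0.2, 0.1], [0.0, 0.2]])   # e.g., motions the expert made there

    def imitate(state):
        """Return the demonstrated action for the most similar recorded state."""
        distances = np.linalg.norm(demo_states - state, axis=1)
        return demo_actions[np.argmin(distances)]

    print(imitate(np.array([0.45, 0.35])))   # copies the action from the closest demonstration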

Dexterity is essential for many robots now in the planning stage, such as a decluttering robot that simplifies housework or makes a home safer for the elderly by picking up what’s dropped on the floor.

“Humans can pick up a wine glass or a salt shaker easily, because we have evolved complex manipulators,” says Goldberg. “When a robot tries to do that, the table is soon littered and everything is on the floor.” He and his students, working with colleagues at Google, are developing Dex-Net, the Dexterity Network, intended to link numerous robots together with the computing power of the cloud. By using the cloud’s vast, constantly updated storage for 3-D models, Dex-Net will identify robust grasps for hundreds of thousands of objects.
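The underlying cloud-robotics pattern is easy to sketch: a shared store maps object models to candidate grasps with precomputed robustness scores, and any connected robot looks up the most robust one. The Python snippet below is an invented illustration of that idea, not Dex-Net's actual data format or API.

    # Invented illustration of the cloud-grasp pattern, not Dex-Net's format or API:
    # a shared store maps 3-D object models to candidate grasps with robustness scores.
    CLOUD_GRASP_DB = {
        "wine_glass":  [{"approach": "stem_side", "robustness": 0.91},
                        {"approach": "rim_top",   "robustness": 0.55}],
        "salt_shaker": [{"approach": "body_wrap", "robustness": 0.87}],
    }

    def best_grasp(object_id):
        """Return the stored grasp with the highest robustness score, if any."""
        candidates = CLOUD_GRASP_DB.get(object_id, [])
        return max(candidates, key=lambda g: g["robustness"], default=None)

    print(best_grasp("wine_glass"))   # -> the stem-side grasp, robustness 0.91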

For most, the idea of robots working with people conjures visions of humanoid machines. But in reality, robots that help people will probably look more ordinary. Goldberg says, “For systems that combine the best of what humans can do with the best of what robots can do, it’s unlikely the robots of the future will look anything like a human.”

Machines that converse

Still, when it comes to anthropomorphizing robots, “We can’t help it!” says Anca Dragan. Happily, that human tendency plays a significant role in her research. An assistant professor in EECS, Dragan came to Berkeley last fall from Carnegie Mellon with a Ph.D. in robotics and human-robot interaction (HRI).

Dragan cites the robot character BB-8 in the blockbuster Star Wars: The Force Awakens: “I loved the interaction between BB-8 and the human characters,” she says. “Its shape is just a sphere and a hemisphere, yet it’s incredibly expressive. You can’t help but read into its movements what it’s ‘thinking’ and what it’s about to do.” BB-8’s internal states are apparent to the humans, and — as Dragan’s work shows — that kind of understanding is essential when people communicate with nonfictional robots.

Yet robot-human interaction didn’t top the list of Dragan’s interests when she started at Carnegie Mellon. At first, she focused on robot manipulation. In 2010, she and her colleagues demonstrated their work by having their robot pick up and move bottled drinks — over 490 of them, with better than 90 percent success. “Then we had to do something with the bottles, so we wrote a last-minute hack for the robot to hand them out to people. Some failures were so interesting they eventually led me to pursue human-robot interaction for my Ph.D.”

Of 150 attempted hand-offs to passers-by unfamiliar with robots, just seven were successful. “Over and over the robot would say, ‘Please take the drink,’ and wait for the person to reach over and take the drink. Instead they’d just stand there. They didn’t know what they were supposed to do.”

Intrigued, Dragan and her lab at CMU reached out to HRI expert Maya Cakmak, who suggested spatial-temporal contrast: the robot should pick up the drink one way, then emphatically hold it out to communicate the intention to give it away. Dragan was reminded of Walt Disney’s principles of animation, which include exaggeration, like when a cartoon character’s eyes bug out in surprise.

With that one change, she says, “People understood. Like magic, a subtle difference made everything work.” Dragan then made it her goal to enable robots to devise such strategies by themselves, needing no designers to think them up.
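A toy version of the exaggeration idea can be written in a few lines: start from a straight-line hand-off trajectory and push its middle outward toward the receiver, so the reach reads as deliberate. The Python sketch below is a cartoon of the principle, not Dragan's algorithm; the step count and overshoot value are arbitrary.

    # Cartoon of an "exaggerated" hand-off motion: a straight-line reach whose middle
    # bulges toward the receiver so the intent is easy to read. Not Dragan's method;
    # the step count and overshoot are arbitrary.
    import numpy as np

    def exaggerated_handoff(start, receiver, steps=5, overshoot=0.3):
        """Trajectory from start to receiver that leans emphatically toward the receiver."""
        start, receiver = np.asarray(start, float), np.asarray(receiver, float)
        path = []
        for i in range(steps + 1):
            t = i / steps
            point = (1 - t) * start + t * receiver           # plain interpolation
            bulge = overshoot * np.sin(np.pi * t)            # largest mid-reach, zero at the ends
            path.append(point + bulge * (receiver - start))  # push the reach toward the person
        return np.array(path)

    print(exaggerated_handoff([0.0, 0.0], [1.0, 0.0]))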

Enabling better interaction is the traditional focus of HRI. Traditional robotics, on the other hand, emphasizes autonomy, concentrating more on function and less on interaction. Dragan aims to bridge this gap. “A robot must account for the effect it has on people, much like it accounts for the effect it has on the physical world. How people perceive the robot’s plan in turn affects what the robot does.”

Short of a brain-machine interface like the one Rikky Muller is designing, robots must also be able to clearly express their own capabilities to human collaborators. These challenges are at the core of much current research.

For example, autonomous cars are currently designed with protective passivity; in 2015 a Google robot car was pulled over for driving so slowly it impeded traffic. Recently, Dragan has shown that autonomous cars could increase safety and performance by making assertive moves that signal intent, like speeding up to change lanes or backing away from an intersection to yield the right of way.

Dragan says, “If we want robots to go out into the world, then our planning and learning algorithms have to reason over not just the physical space, but the human space as well.” From an HRI standpoint, she says, “We are starting to see that just designing interactions specific to a task is not enough. We need both, where we have algorithms that think about functional goals but also about interactions with humans.”

BB-8 achieved that ideal a long time ago, in a galaxy far, far away. Smart machines here and now are steadily catching up.