Ziad Obermeyer

Research Expertise and Interest

machine learning, medicine, health policy

Research Description

Ziad Obermeyer works at the intersection of machine learning and health. His research focuses on how machine learning can help doctors make better decisions (like whom to test for heart attack), and help researchers make new discoveries—by ‘seeing’ the world the way algorithms do (like finding new causes of pain that doctors miss, or linking individual body temperature set points to health outcomes). He has also shown how widely used algorithms affecting millions of patients automate and scale up racial bias. That work has impacted how many organizations build and use algorithms, and how lawmakers and regulators hold AI accountable, culminating in testimony before the Senate Finance Committee in 2024.

He is one of TIME Magazine's 100 most influential people in AI, a Chan Zuckerberg Biohub Investigator, a Research Associate at the National Bureau of Economic Research, and was named an Emerging Leader by the National Academy of Medicine. Previously an Assistant Professor at Harvard Medical School, he continues to practice emergency medicine in underserved communities.

See Ziad Obermeyer's personal website.

In the News

Understanding and seeking equity amid COVID-19

At a Berkeley Conversations: COVID-19 event, Jennifer Chayes, associate provost of the Division of Computing, Data Science, and Society and dean of the School of Information, spoke with three UC Berkeley experts about how relying on data and algorithms to guide pandemic response may actually serve to perpetuate existing inequities — and what researchers and data scientists can do to reverse the patterns.

Widely used health care prediction algorithm biased against black people

From predicting who will be a repeat offender to who’s the best candidate for a job, computer algorithms are now making complex decisions in lieu of humans. But increasingly, many of these algorithms are being found to replicate the same racial, socioeconomic or gender-based biases they were built to overcome.

Featured in the Media

Please note: The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or positions of UC Berkeley.
January 26, 2021
Tom Simonite
Researchers trying to improve health care with artificial intelligence usually subject their algorithms to a form of machine med school. Software learns from doctors by digesting thousands or millions of x-rays or other data labeled by expert humans until it can accurately flag suspect moles or lungs showing signs of COVID-19 by itself. A study published this month took a different approach—training algorithms to read knee x-rays for arthritis by using patients, rather than doctors, as the arbiters of truth. The results revealed that radiologists may have literal blind spots when it comes to reading Black patients' x-rays. Ziad Obermeyer, an author of the study and a professor at the University of California, Berkeley's School of Public Health, was inspired by a medical puzzle to use AI to probe what radiologists weren't seeing.
January 14, 2021
In this video piece from the Washington Post, Ziad Obermeyer, professor of health policy and management at the UC Berkeley School of Public Health, said government regulation of artificial intelligence can have a positive impact, but it can't get ahead of the "many creative and potentially dangerous uses that people are going to put algorithms toward. ... In a lot of our work what we've found is there is a substantial amount of racial bias in algorithms that are fairly widespread ... that's the kind of thing that certainly suggests a role for regulation."

August 10, 2020
Casey Ross
The federal government has systematically shortchanged communities with large Black populations in the distribution of billions of dollars in COVID-19 relief aid meant to help hospitals struggling to manage the effects of the pandemic, according to a recently published study. "We are finding large-scale racial bias in the way the federal government is distributing" the funds to hospitals, said Ziad Obermeyer, a physician and a co-author of the study from the University of California, Berkeley. "If you take two hospitals getting the same amount of funding under the CARES Act, the dollars have to go further in Black counties than they do elsewhere," he said. "Effectively that means there are fewer things the health systems can do in those counties, like testing, buying more personal protective equipment, or doing outreach to make sure people are being tested."
April 6, 2020
Sharon Begley
A study co-led by acting associate public health professor Ziad Obermeyer, MD, finding that a software program commonly used in the health care industry is racially biased, has won the Editor's Pick award in the 2020 STAT Madness contest for the year's best innovations in science and medicine. As the reporter wrote: "The artificial intelligence software equated health care spending with health, and it had a disturbing result: It routinely let healthier white patients into the programs ahead of black patients who were sicker and needed them more. ... It was one of the clearest demonstrations yet that some, and perhaps many, of the algorithms that guide the health care given to tens of millions of Americans unintentionally replicate the racial blind spots and even biases of their developers. ... The researchers didn't just publish their work and move on. Instead, they worked with the builders of the algorithm to fix it. And after hearing from insurers, hospitals, and others concerned that their algorithms, too, might be racially biased, they established an initiative at the Booth School to work pro bono with health systems and others to remedy that." For more on this study, see our press release at Berkeley News.
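
The mechanism the article describes, a risk score trained to predict health care costs rather than health itself, can be made concrete with a small synthetic sketch. This is an illustration only (made-up data, an assumed access-to-care gap, and ordinary least squares), not the study's actual algorithm or code:

    # Illustrative sketch only: synthetic data and an assumed access-to-care
    # mechanism, not the study's code. It shows how training on cost as a
    # proxy for health imports access gaps into a "risk" score.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 50_000
    group = rng.integers(0, 2, n)            # 1 = group with reduced access to care
    illness = rng.gamma(2.0, 1.0, n)         # true health need (what we care about)
    access = np.where(group == 1, 0.6, 1.0)  # assumption: less access, less spending

    # Features the model sees: a clinical signal, plus prior utilization,
    # which reflects access as much as illness. Race is not a feature.
    biomarker = illness + rng.normal(0.0, 1.0, n)
    prior_visits = illness * access + rng.normal(0.0, 0.3, n)
    X = np.c_[biomarker, prior_visits]

    future_cost = illness * access + rng.normal(0.0, 0.3, n)  # the proxy label

    risk = LinearRegression().fit(X, future_cost).predict(X)
    flagged = risk >= np.quantile(risk, 0.9)  # top decile referred to the program

    for g in (0, 1):
        m = group == g
        print(f"group {g}: flagged {flagged[m].mean():.1%}, "
              f"mean illness among flagged {illness[flagged & m].mean():.2f}")
    # Typical result: group 1 is flagged less often, and its flagged patients
    # are sicker. Relabeling with health need itself removes that gap:
    risk_fixed = LinearRegression().fit(X, illness).predict(X)

In this toy setup the model never sees race, yet the group with reduced access is referred less often at equal health need, mirroring the finding above; retraining on a direct measure of health, in the spirit of the fix the researchers pursued, closes the gap the proxy label created.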
November 18, 2019
Amina Khan
An algorithm widely used by health insurers to make critical care decisions reflects strong racial biases, a team of scientists led by acting associate public health professor Ziad Obermeyer, MD, has found, and that has led to poorer outcomes for black patients compared to white patients. "We shouldn't be blaming the algorithm," Dr. Obermeyer says. "We should be blaming ourselves, because the algorithm is just learning from the data we give it." Setting out to fix the problem, Dr. Obermeyer's team developed an alternative model that reduced the bias by 84% and shared it with the algorithm's manufacturer. For more on this, see our press release at Berkeley News. Stories on this topic have appeared in dozens of sources around the world, including KQED Radio's Forum, Managed Healthcare Executive, Health IT Analytics, News-Medical, Market Screener, and Mic.
