Ziad Obermeyer

Title
Associate Professor
Department
School of Public Health
Research Expertise and Interest
machine learning, medicine, and health policy
Research Description

Ziad Obermeyer is Associate Professor at UC Berkeley, where he does research at the intersection of machine learning, medicine, and health policy. He was named an Emerging Leader by the National Academy of Medicine, and has received numerous awards including the Early Independence Award -- the National Institutes of Health’s most prestigious award for exceptional junior scientists -- and the Young Investigator Award from the Society for Academic Emergency Medicine. Previously, he was an Assistant Professor at Harvard Medical School. He continues to practice emergency medicine in underserved communities. 

See Ziad Obermeyer's personal website.

In the News

April 21, 2020

Understanding and seeking equity amid COVID-19

In today’s Berkeley Conversations: COVID-19 event, Jennifer Chayes, associate provost of the Division of Computing, Data Science, and Society and dean of the School of Information, spoke with three UC Berkeley experts about how relying on data and algorithms to guide pandemic response may actually serve to perpetuate existing inequities — and what researchers and data scientists can do to reverse the patterns.
October 24, 2019

Widely used health care prediction algorithm biased against black people

From predicting who will be a repeat offender to who’s the best candidate for a job, computer algorithms are now making complex decisions in lieu of humans. But increasingly, many of these algorithms are being found to replicate the same racial, socioeconomic or gender-based biases they were built to overcome.

Featured in the Media

Please note: The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or positions of UC Berkeley.
January 26, 2021
Tom Simonite
Researchers trying to improve health care with artificial intelligence usually subject their algorithms to a form of machine med school. Software learns from doctors by digesting thousands or millions of x-rays or other data labeled by expert humans until it can accurately flag suspect moles or lungs showing signs of COVID-19 by itself. A study published this month took a different approach—training algorithms to read knee x-rays for arthritis by using patients as the AI arbiters of truth instead of doctors. The results revealed that radiologists may have literal blind spots when it comes to reading Black patients' x-rays. Ziad Obermeyer, an author of the study and a professor at the University of California Berkeley's School of Public Health, was inspired to use AI to probe what radiologists weren't seeing by a medical puzzle.
January 14, 2021
In this video piece from the Washington Post, Ziad Obermeyer, professor of health policy and management at the UC Berkeley School of Public Health, said government regulation of artificial intelligence can have a positive impact, but it can't get ahead of the "many creative and potentially dangerous uses that people are going to put algorithms toward...In a lot of our work what we've found is there is a substantial amount of racial bias in algorithms that are fairly widespread...that's the kind of thing that certainly suggests a role for regulation."
August 10, 2020
Casey Ross
The federal government has systematically shortchanged communities with large Black populations in the distribution of billions of dollars in COVID-19 relief aid meant to help hospitals struggling to manage the effects of the pandemic, according to a recently published study. "We are finding large-scale racial bias in the way the federal government is distributing" the funds to hospitals, said Ziad Obermeyer, a physician and a co-author of the study from the University of California, Berkeley. "If you take two hospitals getting the same amount of funding under the CARES Act, the dollars have to go further in Black counties than they do elsewhere," he said. "Effectively that means there are fewer things the health systems can do in those counties, like testing, buying more personal protective equipment, or doing outreach to make sure people are being tested."