Hany Farid

Research Expertise and Interest

digital forensics, forensic science, misinformation, human perception

Research Description

Hany Farid's research focuses on digital forensics, forensic science, misinformation, and human perception. We are living in an exciting digital age in which nearly every aspect of our lives is being affected by breakthroughs in technology. At the same time, these breakthroughs have given rise to complex ethical, legal, and technological questions. Many of these issues arise from the inherent malleability of digital media, which allows it to be so easily altered, and from the speed and ease with which material can be distributed online. Farid's lab has pioneered a new field of study, termed digital forensics, whose goal is the development of computational and mathematical techniques for authenticating digital media.

Farid received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989 and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year postdoctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999, where he remained until 2019. He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.

In the News

As Online Harms Surge, Our Better Web Initiative Advances at UC Berkeley

With the U.S. midterm elections approaching and political disinformation posing a continued threat to democracy, UC Berkeley's ambitious new Our Better Web initiative, launched on a small scale in April, is advancing efforts to study and combat online harms, including deception, discrimination and child exploitation.

Featured in the Media

Please note: The views and opinions expressed in these articles are those of the authors and do not necessarily reflect the official policy or positions of UC Berkeley.
June 27, 2024
Will Knight

Hany Farid, a professor at the School of Information and a leading expert on image and video manipulation, says that detecting deepfakes will take more than AI alone.

May 19, 2022
Johana Bhuiyan
In the aftermath of yet another racially motivated shooting that was live-streamed on social media, tech companies are facing fresh questions about their ability to effectively moderate their platforms. Payton Gendron, the 18-year-old gunman who killed 10 people in a largely Black neighborhood in Buffalo, New York, on Saturday, broadcast his violent rampage on the video-game streaming service Twitch. So how do tech companies work to flag and take down videos of violence that have been altered and spread on other platforms in different forms -- forms that may be unrecognizable from the original video in the eyes of automated systems? On its face, the problem appears complicated. But according to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren't financially motivated to invest resources in developing it.
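The kind of tech solution Farid has long advocated is robust (perceptual) hashing, the family of techniques behind systems such as PhotoDNA, which he helped develop: a fingerprint that changes only slightly under re-encoding, resizing, or small edits, so altered re-uploads of flagged footage can still be matched. As a rough sketch of the general idea only, and not any platform's actual system, here is a minimal average-hash comparison in Python; the file names and the 10-bit threshold are illustrative assumptions.

    # Toy "perceptual hash": unlike a cryptographic hash, it changes only
    # slightly when a frame is re-encoded, resized, or lightly edited, so
    # altered re-uploads can still be matched against flagged originals.
    # A sketch of the general idea only -- not PhotoDNA or any platform's
    # production system.
    from PIL import Image  # pip install pillow

    def average_hash(path, hash_size=8):
        # Downscale to hash_size x hash_size grayscale; each bit records
        # whether that pixel is brighter than the mean.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    # Hypothetical file names; frames whose 64-bit hashes differ in only a
    # few bits are near-duplicates even after compression or color shifts.
    if hamming(average_hash("flagged_frame.png"),
               average_hash("uploaded_frame.png")) <= 10:
        print("likely re-upload of flagged content")

Production systems add many refinements (frequency-domain features, video-level aggregation, shared industry hash databases), but the core design choice is the same: match on a tolerant fingerprint rather than an exact file signature.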
March 19, 2021
Emma Bowman
The recent proliferation of believable deepfake videos -- including ones featuring actor Tom Cruise -- has raised new fears about phony events that can be used to sway public opinion. Hany Farid, a professor at the University of California, Berkeley, says that the Cruise videos demonstrate a step up in the technology's evolving sophistication. "This is clearly a new category of deepfake that we have not seen before," said Farid, who researches digital forensics and misinformation. Deepfakes have been around for years, but, Farid says, the technology has been steadily advancing. "Every three to four months a video hits TikTok, YouTube, whatever, and it's just — wow, this is much, much better than before," he said. Another story on this topic appeared on ABC News.
June 30, 2020
The chair of Germany's artificial intelligence committee has called for tougher measures to curb the spread of fake news. Her concern came as the Counter Extremism Project (CEP) published a report called "The Creation, Weaponization and Detection of Deep Fakes". The report's author, Professor Hany Farid of the University of California, Berkeley, explained that deepfakes, in which words are convincingly put in the mouths of famous people such as U.S. President Donald Trump over genuine video footage, are growing in sophistication. "I think everyone can see the damage that this type of content can create for misinformation campaigns," he said. "Now we have videos of someone saying whatever you want them to say to bolster that misinformation." For more on this, see our press release at Berkeley News.
April 29, 2020
Jill Tucker
"A lot of the finest minds of the world have turned to COVID-19," information and electrical engineering and computer sciences professor Hany Farid says of the unprecedented galvanization of academia to help solve the global crisis. Professor Farid, an expert on deepfake imagery and online misinformation, has also switched gears to research the spread of misinformation and conspiracy theories related to the coronavirus. Noting that the problem is global, and that people are scared and turning to social media, he says those conditions create a perfect "s—storm." Referring to social media posts promoting dangerously bogus ideas about COVID-19 cures, he says: "This is why we have people drinking bleach thinking they're going to be cured. ... We need to undo the stupidity that's out there." To that end, he developed a survey last week asking some 500 people to read 40 COVID-19-related headlines, half of them true and half false, and to report whether they had seen the headlines, believed them, or knew someone who would believe them. According to preliminary results, about 15% of respondents said they knew someone who would believe the gargling-bleach cure is valid. "That's shocking," he says. "Now the question is what's next." Answering that call, he's developing early-detection systems to spot online misinformation campaigns and flag them for social media companies.

April 8, 2020
Amos Zeeberg
After a 1997 criminal case was largely decided by an unusual dark-and-light pattern in denim jeans captured by a security camera, a study of the case was used to set legal precedent for how patterns in photographs could be used as evidence. But now a study co-authored by postdoctoral information scholar Sophie Nightingale and information and electrical engineering and computer sciences professor Hany Farid, an expert on deepfake images, questions the reliability of such evidence. "Even under ideal conditions, trying to get an exact match is difficult," Professor Farid says. "This technique should be used with extreme caution, if at all." To conduct the study, they bought 100 pairs of jeans from local thrift stores, and they had 111 workers send in pictures of their own jeans. They then compared images of similar seams. According to the reporter: "The data showed that two images of the same seam often looked quite different -- so much so that it was often impossible to tell whether a pair of images were of the same seam or different ones. Much of the problem, the researchers concluded, comes down to the fact that cloth is flexible: it stretches, folds and drapes in complicated ways, which changes how it looks in photos. ... The lack of distinctiveness in images of seams significantly limits the accuracy of jeans identification, according to the study. The algorithm made a significant number of false matches between different pairs of jeans." Their study is available at PNAS.org.

March 2, 2020
Jack Nicas
YouTube, the world's largest video website, announced in January 2019 that it would limit the spread of videos that "could misinform users in harmful ways," but a year later, a new Berkeley study reveals that the site's progress has been uneven and insufficient. The study, co-authored by information and electrical engineering and computer sciences professor Hany Farid, examined 8 million recommendations over 15 months, finding that YouTube had nearly eradicated some conspiracy theories by June -- including flat-Earth claims and theories that the U.S. government was behind the Sept. 11 terrorist attacks -- but that they rebounded later, and that other false claims circulated unhindered, such as claims that aliens created the pyramids, that the government is hiding secret technologies, and that climate change is a myth. The researchers say their findings suggest that YouTube has been selective about the types of misinformation it goes after. "It is a technological problem, but it is really at the end of the day also a policy problem," Professor Farid says. "If you have the ability to essentially drive some of the particularly problematic content close to zero, well then you can do more on lots of things. ... They use the word 'can't' when they mean 'won't.'"

October 16, 2019
Electrical engineering and computer sciences professor Dawn Song and information and electrical engineering and computer sciences professor Hany Farid are honored in this WIRED list of 25 people who are working to solve some of humanity's most challenging problems. Professor Song is exploring ways to help people protect their privacy and control and profit from their data. "I think most people don't even know that their data can be valuable," she says. Professor Farid's profile describes him as "one of the leading authorities on detecting fake photos." He says: "This used to be a boutique little field, but now we're defending democracy. ... What happens when more than half the content you see and hear online is fake?"
September 27, 2019
Cat Zakrzewski
Information and electrical engineering and computer sciences professor Hany Farid was in Washington Thursday, testifying before lawmakers about the increasing risks of deepfake technology, which convincingly alters images and video to convey false messages. He warned that major technology platforms like Facebook and Google need to be involved in countering the menace, but that the companies have been slow to act. "You have to understand here that we are fighting against business interests," he said. "In the last six months, the language coming out of the technology sector is encouraging, but I don't know there's a lot of action yet." As an example, he pointed to Facebook's negligence in allowing a fake video of House Speaker Nancy Pelosi, which made her seem drunk, to remain on the platform. The company has defended its decision by saying it doesn't want to be responsible for separating reality from fiction online. "I can help with the technology problem," Professor Farid said, "but I don't know what I can do with the policy problem when you say you aren't arbiters of the truth. ... They have to start getting serious about how their platforms are being weaponized to great effect and disrupting elections, inciting violence and sowing civil unrest." Acknowledging it's difficult to predict when a convincing deepfake will disrupt the election, he said: "I think it's coming, but I don't know whether it will be in 2020, 2022 or 2024. ... Largely because the cheap stuff still works. I think we'll eventually get ahead of that and then this will be the next front." For more on Professor Farid's work countering deepfakes, see our press release from June at Berkeley News.

July 10, 2019
Kate Larsen
Information and electrical engineering and computer sciences professor Hany Farid and graduate student Shruti Agarwal appeared on KGO TV to discuss their work fighting deepfake technology. A recent example of the problem arose when video of House Speaker Nancy Pelosi was doctored to make it appear that she was drunk. "That really wreaks havoc on democracy, on society and our personal safety," Professor Farid says. "This technology allows you to create highly sophisticated fake content and it doesn't require a lot of skill to create the content." Describing the software they've developed to identify fakes, Agarwal says: "We started tracking when a person is talking, how their eyebrows are moving, how their lips are moving, how their cheeks are moving." Those movement patterns provide a person-specific point of reference that can be used to identify manipulations. For more on this, see our press release at Berkeley News. Other stories on this topic appeared in BiometricUpdate.com, Journalist's Resource, and Neue Zürcher Zeitung (Switzerland).
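The approach Agarwal describes treats correlated facial movements as a "soft biometric": a signature of how a specific person talks, against which a suspect clip can be compared. The sketch below illustrates only that general flavor in Python with numpy; the signal names, the landmark-tracking step (assumed to have already produced per-frame measurements), and the tolerance are hypothetical stand-ins, not the authors' code.

    # Sketch of a "soft biometric" deepfake check: correlations among a
    # speaker's facial-motion signals (eyebrows, lips, cheeks, head pose)
    # form a person-specific signature. Illustrative only; the published
    # detector, its features, and its thresholds differ.
    import numpy as np

    def motion_signature(signals):
        # signals: (n_frames, n_signals) array of per-frame measurements
        # from a face tracker, e.g. eyebrow height, lip-corner distance.
        # Returns the upper triangle of the Pearson correlation matrix.
        corr = np.corrcoef(signals.T)
        return corr[np.triu_indices_from(corr, k=1)]

    def consistent_with(person_ref, clip_signals, tol=0.3):
        # Compare a clip's signature to a reference built from authentic
        # footage of the same person; a large distance suggests the face
        # was manipulated. The tolerance is a made-up illustrative value.
        return np.linalg.norm(motion_signature(clip_signals) - person_ref) <= tol

    # Usage with random stand-in data (real input would come from running
    # a facial-landmark tracker over video frames):
    rng = np.random.default_rng(0)
    authentic = rng.standard_normal((300, 6))  # 300 frames, 6 signals
    reference = motion_signature(authentic)
    print(consistent_with(reference, authentic))  # True: matches itself

Because the signature captures how a person moves rather than pixel-level artifacts, the idea is that it can flag fakes even when the synthesized imagery itself looks flawless.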
April 26, 2019
Rachel Metz, CNN Business

In hopes of stopping deepfake-related misinformation from circulating, Hany Farid, a professor and image-forensics expert, is building software that can spot political deepfakes, and perhaps also authenticate genuine videos that have been called out as fakes.

January 19, 2018
Niraj Chokshi

In a study published in Science Advances, Julia Dressel, who conducted the research for her Dartmouth College undergraduate thesis with Hany Farid, a computer science professor, found that small groups of randomly chosen people could predict whether a criminal defendant would reoffend with about 67 percent accuracy, a rate virtually identical to that of COMPAS, commercial software some American judges use to inform their decisions.
