Study Finds Persistent Spike in Hate Speech on X
The new analysis contradicts the social media platform’s claims that exposure to hate speech and bot-like activity decreased during Elon Musk’s tenure.
![An overlay of the Twitter logo and the X logo](/sites/default/files/inline-images/Twitter_becomes_X_23206373018044_BN-768x439.jpg)
A new analysis has found that weekly rates of hate speech on the social media platform X rose about 50% in the months after Elon Musk purchased it in October 2022, and that the number of bots and bot-like accounts did not decrease, despite Musk’s earlier pledge to reduce bot activity.
Published this week in the open-access journal PLOS One, the study confirms that a sudden spike in hate speech that researchers observed around the time of Musk’s takeover continued through at least May 2023, contradicting claims by X that hate speech on the platform decreased after the purchase.
“It’s important that we monitor the amount of hate speech or inauthentic activity on platforms, because this content can sow division in our information environment,” said study lead author Daniel Hickey, a first-year doctoral student in the School of Information at the University of California, Berkeley. “We need to know when we’re getting content moderation right and when we’re getting it wrong.”
Hate speech and inauthentic activity on social media can have real-world impacts. Research has linked online hate speech to offline hate crimes, and bots and bot-like accounts can also promote misinformation and spam, causing harm by contributing to scams, interfering with real-world elections or hindering public health campaigns.
An earlier analysis led by Hickey documented an increase in hate speech and certain types of bots on X in the month after Musk purchased the social media platform. The new study extends this analysis through May 2023, a month before Musk stepped down as CEO of the company. Changes to X’s API, or application programming interface, limited the researchers’ ability to analyze posts on the platform beyond that date.
To carry out the study, Hickey and his colleagues at the University of California, Los Angeles (UCLA), and the University of Southern California (USC) collected approximately 3,000 posts on X containing any of a specific list of homophobic, transphobic and racist slurs, then applied a “toxicity detection model” to identify which of those posts used slurs in an aggressive or hostile manner. They then compared the hateful posts against a benchmark of nearly 5 million posts containing common English words to determine whether any increases in hate speech were simply due to overall changes in post volume.
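The paper’s exact pipeline is not reproduced here, but the normalization logic is straightforward. Below is a minimal Python sketch of that step, assuming posts have already been collected and scored for toxicity; the file names, column names and the 0.8 threshold are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the volume-normalization step described above.
# File names, column names and the toxicity threshold are illustrative
# assumptions, not the study's actual pipeline.
import pandas as pd

TOXICITY_THRESHOLD = 0.8  # assumed cutoff for "aggressive or hostile" slur use

def weekly_counts(df: pd.DataFrame) -> pd.Series:
    """Count posts per week, based on a 'timestamp' column."""
    return df.set_index(pd.to_datetime(df["timestamp"])).resample("W").size()

# Posts containing slurs, pre-scored (0-1) by some toxicity model.
slur_posts = pd.read_csv("slur_posts.csv")      # hypothetical input file
hateful = slur_posts[slur_posts["toxicity"] >= TOXICITY_THRESHOLD]

# Benchmark sample of posts containing common English words.
baseline = pd.read_csv("baseline_posts.csv")    # hypothetical input file

hate_weekly = weekly_counts(hateful)
base_weekly = weekly_counts(baseline)

# Dividing by overall activity distinguishes a genuine rise in hate
# speech from a platform-wide increase in posting volume.
relative_rate = (hate_weekly / base_weekly).rename("hate_rate")
print(relative_rate)
```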
“We found that the relative increase in hate speech was much higher than the increase in general activity on the platform,” Hickey said.
The number of “likes” on posts with hate speech also doubled, suggesting that more people were exposed to such speech across the platform.
To identify bot and bot-like accounts, the researchers searched X posts for accounts that appeared to be coordinated, or that were “sharing very, very similar content to the point where it would be very unlikely for that to just be a random coincidence,” Hickey said. They found that the presence of bot accounts and other inauthentic accounts did not decrease and instead may have increased.
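The study does not publish its coordination detector, but the intuition Hickey describes, flagging accounts that post near-identical text, can be illustrated with a toy similarity check. Everything below, from the word-trigram fingerprint to the 0.7 cutoff and the sample posts, is a hypothetical sketch rather than the researchers’ method.

```python
# Toy illustration of flagging near-duplicate posts across accounts,
# in the spirit of the coordination signal described above.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set[str]:
    """Word n-grams used as a cheap fingerprint of a post's content."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two fingerprints, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# (account_id, post_text) pairs; illustrative data only.
posts = [
    ("acct_1", "huge giveaway click the link to claim your free tokens now"),
    ("acct_2", "huge giveaway click the link to claim your free tokens today"),
    ("acct_3", "enjoying the sunshine at the park with my dog today"),
]

SIMILARITY_THRESHOLD = 0.7  # assumed cutoff for "suspiciously similar"
for (u1, t1), (u2, t2) in combinations(posts, 2):
    sim = jaccard(shingles(t1), shingles(t2))
    if sim >= SIMILARITY_THRESHOLD:
        print(f"possible coordination: {u1} ~ {u2} (similarity {sim:.2f})")
```

A single matching pair proves little; it is the volume of such matches across many accounts that makes a coincidence, as Hickey puts it, “very unlikely.”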
Because information on specific internal changes at X is limited, it is not possible to attribute the changes in hate speech or bot activity to any specific policy implemented during Musk’s tenure. Nonetheless, the researchers express concern about the safety of online platforms and call for increased moderation on X, as well as further research to illuminate activity across social media platforms.
“To solve society’s most pressing challenges, such as climate change, poverty and access to health care, we need cooperation, and so it’s important to build information environments that foster pro-social interaction and cooperation,” Hickey said. “It’s important to keep research like this alive and to continue monitoring social media platforms for antisocial behavior.”
Additional co-authors include Daniel Fessler at UCLA and Kristina Lerman and Keith Burghardt at USC.
Adapted from a press release by PLOS.