News

Scientists detect how words grow new meanings. Maybe computers will, too

February 27, 2018
By: Yasmin Anwar

What are voice-controlled personal assistants like Alexa and Siri to do when faced with words like “face” that have multiple meanings ranging from a body part to an action?

[Image: four panels illustrating meanings of “face” — an old man’s face, a clock face, a canyon face, and two men facing each other.]
Words like “face” have many meanings. A new study tracks and replicates their cognitive evolution.

Scientists from UC Berkeley, the University of Toronto and Lehigh University in Pennsylvania have begun to identify the algorithms humans have used over the last thousand years to give words new meanings.

Their findings, published this month in the online issue of the Proceedings of the National Academy of Sciences, provide new insights into how language evolves, and could help digital assistants step up their game in natural language processing.

Researchers examined over 1,000 years of English language evolution and created computational models to track how words have grown multiple meanings over time. Their discovery has the potential to teach machines to follow the cognitive steps that humans have taken to add new definitions to their lexicon.

For the study, researchers tested their computational models’ ability to predict the order in which new meanings of English words have emerged over the centuries. They then checked these predictions against the Historical Thesaurus of English, which documents the dates at which English word meanings first entered the language.

One of the models, known as “nearest-neighbor chaining” — which links each new word meaning to the closest existing meaning — best predicted the historical data.
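The chaining idea can be illustrated with a toy sketch. This is not the authors’ published model — the sense coordinates, the 2-D “semantic space,” and the sense labels below are all hypothetical — but it shows the core mechanism: at each step, the candidate sense closest to any sense the word already has is predicted to emerge next.

```python
import math

def chain_order(seed, candidates):
    """Predict the emergence order of candidate senses by
    nearest-neighbor chaining: repeatedly attach the candidate
    that lies closest to ANY already-existing sense."""
    existing = [seed]
    remaining = dict(candidates)  # sense name -> vector
    order = []
    while remaining:
        # Distance from a candidate to the chain is its distance
        # to the nearest existing sense.
        name = min(
            remaining,
            key=lambda n: min(math.dist(remaining[n], e) for e in existing),
        )
        order.append(name)
        existing.append(remaining.pop(name))
    return order

# Hypothetical coordinates for senses of "face"; the seed is the
# original body-part sense at the origin.
senses = {
    "clock face": (1.0, 0.2),
    "rock face":  (2.0, 0.5),
    "to face":    (3.5, 1.0),
}
print(chain_order(seed=(0.0, 0.0), candidates=senses))
# → ['clock face', 'rock face', 'to face']
```

Note that “rock face” is nearer to “clock face” than to the seed sense, so chaining attaches it via the intermediate sense — the property that distinguishes this model from simply ranking all candidates by distance to the original meaning.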

“We used observed historical data from English to reverse-engineer the structures that relate words to their different meanings,” said study co-author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley and director of the department’s Language and Cognitive Development Lab.

The same research team previously mapped 1,100 years of metaphoric English language to track how native speakers have added figurative word meanings to their vocabulary.

“Our studies are beginning to show that the ways in which words have developed new meanings are not arbitrary, but instead reflect fundamental properties of how we think and communicate with one another,” Srinivasan said.

Other authors of the study are Yang Xu, a computational linguist at the University of Toronto and former postdoctoral researcher at UC Berkeley; Barbara Malt, director of the Cognitive Science Program at Lehigh University; and Christian Ramiro, a cognitive science major at UC Berkeley.

For more details about the study, read the Lehigh University press release.