
From Tort Law to Cheating, What is ChatGPT’s Future in Higher Education?

March 21, 2023
By: Jason Pohl
Berkeley experts in artificial intelligence are studying how things like ChatGPT will transform everything from admissions screening and research to writing college essays. (Pexels photo by Cottonbro Studio)

It passed the bar exam, first with a mediocre score and then with a ranking among the top tier of newly minted lawyers. It scored better than 90% of SAT takers. It nearly aced the verbal section of the GRE — though it has room for improvement with AP Composition. 

In the months since the machine-learning interface ChatGPT debuted, hundreds of headlines and hot-takes have whirled about how artificial intelligence will overhaul everything from health care and business to legal affairs and shopping. But when it comes to higher education, reviews have been more mixed, a blend of upbeat and uneasy. 

Many have forecast the “death of the college essay,” though it’s still very much alive.

But what’s become increasingly clear, UC Berkeley experts say, is that ChatGPT and similar tools are going to transform education. In some ways, the changes are easy to imagine, like adding layers of nuance to essay prompts that can’t be so easily answered by a computer. In other ways, however, we’re only beginning to imagine how this fast-changing technology will upend course assignments, be deployed as a 24/7 tutor, and change the way we think about knowledge work — the bedrock of higher ed. 

“It will be exhilarating, destabilizing and transformational,” said Brian Christian, a visiting scholar at Berkeley and an influential author on machine learning, AI and human values. To keep up with those changes, he said now is the time for educators to teach students how to meaningfully use these tools.

Brian Christian, a Berkeley scholar and influential thinker and author on artificial intelligence, says the forthcoming changes will be transformative. (Photo courtesy Eileen Meny)

“To the extent we are preparing students for the job market,” he said, “we owe it to them, ethically, to try as best we can to adapt to the changing requirements of what employment will look like for them as society begins the process of automating knowledge work.”  

Efforts have been underway for years at Berkeley to understand how to chart a course for these changes. Interdisciplinary expert committees have created in-depth action plans for the ethical use of AI in university operations. Faculty have convened meetings to discuss new strategies to teach with AI in the classroom and anticipate how it might be used for cheating. Courses have sprung up to help students explore the ethical implications of AI technologies. And as students take to Reddit forums to discuss how they’ve embraced and abused ChatGPT, some professors are encouraging students to use it as a starting point from which to critique essay responses. 

As one Berkeley Law professor summed up the task for his class: Try to “beat it.”

“The cat’s out of the bag. There’s no going back,” said Brandie Nonnecke, founding director of the CITRIS Policy Lab and associate research professor at Berkeley. An expert in AI governance and ethics, Nonnecke co-chaired the systemwide University of California working group that assessed the risks and potential of things like ChatGPT. Now, she’s helping to lead the effort to ensure equity and accountability are prioritized and risks are minimized for a technology that she said “we can’t not use.”

“The question now is how to set appropriate guardrails to ensure that our students are able to learn from these types of models,” Nonnecke said, “rather than burying our heads in the sand and saying, ‘You can’t use them at all.’”

Brandie Nonnecke, an associate research professor, is the founding director of the CITRIS Policy Lab at Berkeley and is helping assess risks and the future for AI on campus. (Photo by Brittany Hosea-Small)

ChatGPT, plagiarism and the future of the essay

It’s not like the current iteration of AI systems came out of nowhere. 

Since the 1960s, researchers have built tools that have become increasingly good at simulating human language. A longstanding goal was to pass the Turing Test — to create computer-generated passages largely indistinguishable from those written by a human.

The incremental progress was real. But within the last year, the technology took its biggest public leap forward yet, grabbing the world’s attention like few recent technologies have. From art generators to language models, generative AI tools like ChatGPT had a certain novelty. Suddenly, people could see the advances for themselves.

Examples of its new power abound, from writing video game code in seconds to building a website from a sketch on a piece of paper. Microsoft announced plans to build a machine-learning system like ChatGPT directly into Microsoft Word that could crank out essays, business plans and marketing copy in a flash. Google has likewise revealed plans to fold something similar into Google Docs. Promotional announcements have touted how much time automating these often perfunctory writing tasks will free up for deeper thinking and problem-solving.

ChatGPT quickly became one of the most significant changes to hit colleges and universities since Google launched in 1998 and changed the research game forever. 

But for all the promise some foresee, others have dubbed ChatGPT the death knell for learning — a system that will usher in a cheater’s paradise and surely stymie critical thinking, innovation and progress. If anyone could generate a literary examination of Ulysses, or write a five-point essay on early 19th century economic theory, why bother to read books? Copy-and-paste AI essays would be the scourge of this century’s students, the argument went.

It’s more complicated than that, Berkeley scholars say.

Plagiarism detection services like Turnitin have already said they will add features to flag AI-generated text. OpenAI, the company behind ChatGPT, has likewise released a tool meant to determine whether a text was created with the platform. There’s even talk of including a digital watermark within ChatGPT text, a thumbprint of sorts for easier detection.

Professor Cathryn Carson, a historian of science at Berkeley, researches how technology transforms societies — for good and for bad. “Whenever you think about technology, think about how power is shifting.” (UC Berkeley photo by Noah Berger)

Professor Cathryn Carson, a historian of science and chair of Berkeley’s history department, said there are “huge academic integrity concerns” with tools like ChatGPT. Sometimes, the prose they generate is riddled with falsehoods and presented as truth. Plus, by their very nature, these machine-learning programs suck up troves of unattributed online text and — through a sophisticated series of calculations — spew it out as a new piece of writing. 

“In one sense, it’s a plagiarism engine,” Carson said. “In another sense, my faculty colleagues would not ever ask questions in the future that would be easily answered by ChatGPT. We’re not interested in getting students to reproduce boilerplate. We’re interested in getting them to think and assisting them in learning how to do that better.”

As an example, instead of asking students to write a five-paragraph essay about the causes of the Revolutionary War, students will be asked to tie concepts that were talked about in class to a broader understanding of the conflict — in other words, writing about things that machine-learning tools do not know the particulars of, Carson said. 

“The fluency of generative AI will improve,” Carson said. “But the untetheredness, or the relative untetheredness, to empirical reality will remain because the model is just predicting the words, rather than tying them back to the reality underneath.”

‘Assume that students are going to use it’

In the months since ChatGPT took the world by storm, Berkeley’s Center for Teaching and Learning has organized webinars for instructors to discuss best practices around ChatGPT and its risks. The center also launched a website outlining the tools and the threats they pose.

Camille Crittenden, executive director of CITRIS and the Banatao Institute at Berkeley, co-chaired a group focused on “student experience” related to AI in the UC system. She said she was initially skeptical about what ChatGPT might mean for the future of writing. Wordsmithing, after all, helps hone a person’s ideas about a topic. That’s the hallmark of higher education. 

“I think from now on we should just fundamentally assume that students are going to use it,” Crittenden said.

Camille Crittenden, executive director of CITRIS and the Banatao Institute at Berkeley, co-chaired a group focused on “student experience” related to AI in the UC system. (UC Berkeley photo)

She’s increasingly warmed to its use, too. “It can be a tool if we can embrace it in that way and have teachers help their students understand how it can be used as a tool,” she said, “not as a replacement for what they might be creating, but as an aid.”

Christian, whose 2020 book, The Alignment Problem, was widely praised for its assessment of the implications of AI, likened the current discussion about academic integrity to a game of cat and mouse. Educators are using computer-based tools to root out computer-generated homework. It likely won’t stay that way, he said.

Instead, he thinks that detection-and-evasion games will give way to a combination of old-school evaluation methods — think written and oral, in-person exams — and a general acceptance that AI tools will have a role in at-home assignments. 

Ultimately, that will likely renew interest in creative collaboration across the university.

“Machine-learning models are increasingly powerful but by their very nature tend to reflect, rather than challenge, habitual narratives and modes of knowledge-making,” Christian said. “They operate largely in a vacuum. I think large-language models will likely shift undergraduate instruction at the university to emphasize the interactive and social dimensions of original research.”

Democratizing legal aid, a new way to teach

Professor Chris Hoofnagle has witnessed that interactive dimension. 

Hoofnagle, a Berkeley Law professor and an expert on the intersection of technology and law, is among a growing number of instructors encouraging students to use tools like ChatGPT. The way he sees it, the current generational divide is reminiscent of one that split law firm partners long ago: whether to use email. Some attorneys will be successful enough without it, Hoofnagle said. But the best ones will use the technology to their advantage.

Berkeley Law Professor Chris Hoofnagle is encouraging his students to use AI tools. “The question will be, can they beat ChatGPT?” (UC Berkeley photo)

Hoofnagle said he has posed legal questions to ChatGPT, then instructed his class to assess the strengths and weaknesses of the answers it cranks out. 

“The question will be, can they beat ChatGPT?” Hoofnagle said. 

In fields like law that require a significant amount of by-the-book, pre-formatted writing, programs like ChatGPT could expedite some of the most tedious yet essential parts of the profession. Drafting initial client response letters or reciting case law during a property dispute could suddenly be done in mere minutes, allowing for deeper thought about other, more vexing cases.

Hoofnagle said it could even democratize legal aid — anyone who needs to hand a landlord a stern, legalese-filled letter about a rat infestation could draft it for free. It won’t be perfect, and it won’t eliminate the need for a good attorney in other circumstances. That’s not the point, he said. It just needs to be “good enough.”

“As lawyers,” Hoofnagle said, “their job is going to be to do the hard questions that these statistical models won’t be able to answer.” 

As for cheating, Hoofnagle said some students might abuse the system, “but those people will probably never have learned the skills they need to be a good lawyer.”

ChatGPT’s strengths go beyond writing a 500-word essay for students. In a few seconds and with some detailed direction, it can generate writing prompts and take a first pass at a course syllabus. It might even help expedite complex grant applications.

“In some ways, it could help to alleviate some of the more tedious administrative writing,” Crittenden said. “There’s just a lot of tedious writing that has to be done. And why not use something like that?”

A 24/7 teaching assistant

The implications of tools like ChatGPT go far beyond ethical concerns about cheating on essays or taking shortcuts to a degree. Higher education officials across the country are looking at how such tools might revolutionize how universities operate, from using AI-integrated chatbots that can immediately answer enrollment questions to harnessing virtual teaching-assistant bots with 24/7 office hours.

In 2020, UC President Michael Drake tasked a panel of 32 faculty and staff experts from across all 10 campuses with developing recommendations for how to oversee AI in university operations. Nonnecke and another Berkeley AI expert, Stuart Russell, co-chaired the group. A year later, in October 2021, the group finalized its 77-page report, and Drake vowed to implement it.

The report made four primary recommendations: institutionalize the UC’s responsible AI principles in procurement and oversight; establish campus-level AI councils; develop a strategy to evaluate AI-enabled technology; and document such tools in a public database.

“We need to put in place appropriate guardrails for the university and transparency on how the university is using any of these tools in its services,” said Nonnecke, who now is co-chairing a council aimed at implementing the panel’s plan across the UC system. 

It remains to be seen exactly how new and existing academic vendors will incorporate chatbots like ChatGPT. It’s also unclear how legislation might factor into all of these discussions.

As with the unforeseen consequences — from misinformation to deepened distrust — that social media giants unleashed over a decade ago, it’s challenging to know what we don’t yet know, Crittenden said. “There are going to be tradeoffs that we’re not going to be able to fully appreciate in advance,” she said.

Transparency, mitigating risks and ensuring users have the option to raise questions and push back are paramount, Nonnecke said. As it stands, these tools are solely in the hands of private, for-profit companies with lackluster regulation and oversight. That raises another question in Nonnecke’s mind. 

“How do we strengthen public-private partnerships with these companies,” she said, “to better ensure that our public institutions are able to reap the benefits of this transformative technology so that it’s not just in the hands of industry?”

Carson, the historian, has ethical concerns, particularly with how companies have used exploited labor to filter toxic content and build AI platforms. Carson hopes there’s an opportunity for choice and critique — especially as technology giants and higher education collide in the fast-evolving space of machine learning and AI. 

“We’re in the midst of an explosion of this technology and an explosion of marketing that has an established track record of getting ahead of, and sometimes sidelining, careful thoughts about downstream consequences,” Carson said. “Whenever you think about technology, think about how power is shifting. This is part of our responsibility as a university.”