ChatGPT falsely accused me of sexually harassing a student: Law professor says the allegations were ‘horrifying’

Jonathan Turley, a law professor at George Washington University, warned about the dangers of artificial intelligence after he was falsely accused of sexual harassment by ChatGPT, citing a fabricated article about an alleged 2018 case.

Turley, a Fox News contributor, is not shy about pointing out the dangers of artificial intelligence (AI). He has long expressed concern about the dangers of disinformation, particularly in relation to the widely used OpenAI chatbot ChatGPT.

Last year, a UCLA professor and friend of Turley’s who was researching ChatGPT informed him that his name had come up in a search. The prompt for ChatGPT asked for five examples of sexual harassment by U.S. law professors, along with quotes from relevant newspaper articles to support the claims.

Law professor falsely accused by chatbot

“Five professors came up, and three of those stories were clearly false, including my own,” Turley told Fox News’ “The Story” on Monday. The most disturbing aspect of the incident was the fabrication itself: ChatGPT invented a Washington Post article, complete with a made-up quote, alleging harassment on a student trip to Alaska.

“This trip never happened. I have never gone on a trip with law students of any kind. It placed me at the wrong school, and I have never been accused of sexual harassment,” Turley clarified.

On April 6, 2023, the 61-year-old legal scholar turned to X (formerly Twitter) to expose ChatGPT’s defamation. The AI fabricated a 2018 sexual harassment allegation, claiming a student had accused him during an Alaska trip – a trip that never took place.

The chatbot even fabricated quotes from an alleged Washington Post article, claiming he made “sexually suggestive comments” and “attempted to touch her in a sexual manner,” Turley said. On Fox News’ “America Reports,” Turley emphasized the seriousness of the situation.

“They had an AI system that made up the whole story, and actually made up the article and quote that were cited,” he said. Upon investigation, The Washington Post itself could find no trace of the story. This is a clear sign that the AI bot can make up stories out of whole cloth.

ChatGPT, an AI chatbot known for its human-like conversational abilities, is used by a global audience for tasks such as writing emails, debugging code, research, creative writing and more. Citing his experience with ChatGPT, Turley called for responsible AI development and urged news organizations to adopt more stringent vetting processes before using such software.

When algorithms inherit bias

Following Turley’s warnings, a recent study found that large language models (LLMs) like ChatGPT can be manipulated surprisingly easily into producing malicious behavior. The researchers were further alarmed to discover that applying safety training techniques failed to fix the AI’s deceptive tendencies.

“I was fortunate to learn early on that in most cases something like this will be reproduced millions of times on the internet and you’ll lose track of it. You won’t be able to figure out that this came from an AI system,” Turley said.

“And for an academic, nothing could be more damaging to their career than having people link such allegations to them and their position. So I think this is a cautionary tale that AI often brings with it this appearance of accuracy and neutrality.”

Turley argued that AI, like humans, can inherit biases and ideological tendencies from the data it is trained on. This vulnerability was highlighted last year when a report revealed that Microsoft’s AI assistant Copilot provided inaccurate information when asked questions about the US election.

Turley pointed out that an AI is only as good as its programmers. He also noted that ChatGPT has neither apologized for nor addressed the fabricated story that damaged his reputation.

“I’ve never even heard from this company,” Turley continued. “On this story, various news organizations have contacted them. They haven’t said anything. And that’s dangerous too. Because when you get defamed like that in an article by a reporter, you know how to reach out. You know who to contact. With AI, there’s no one there. And ChatGPT appears to have just shrugged its shoulders and left it at that.”