
ChatGPT maker OpenAI is exploring how AI-generated erotica could be "responsibly" produced

OpenAI, the artificial intelligence powerhouse behind ChatGPT and other leading AI tools, announced Wednesday that it is studying how to “responsibly” allow users to create AI-generated porn and other explicit content.

The revelation, contained in a lengthy document intended to gather feedback on the rules governing its products, worried some observers given the number of cases in recent months in which cutting-edge AI tools have been used to create deepfake porn and other types of synthetic nude images.

Under OpenAI’s current rules, sexually explicit or even sexually suggestive content is largely prohibited. But now OpenAI is reconsidering this strict ban.

“We are exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts,” the document says, using an acronym for “not safe for work,” which the company says encompasses profanity, extreme gore and erotica.

Joanne Jang, an OpenAI model lead who helped write the document, said in an interview with NPR that the company hopes to start a discussion about whether erotic text and nude images should always be banned in its AI products.

“We want to make sure that people have the most control possible to the extent that it doesn’t violate the law or other people’s rights, but enabling deepfakes is out of the question,” Jang said. “That doesn’t mean we’re now trying to create AI porn.”

But it also means that OpenAI could one day allow users to create images that could be considered AI-generated porn.

“Depends on your definition of porn,” she said. “As long as there are no deepfakes. These are exactly the conversations we want to have.”

The debate comes amid the rise of “nudify” apps

While Jang emphasized that opening a debate over OpenAI’s NSFW policy does not necessarily mean drastic rule changes are afoot, the discussion comes at a tense moment in the proliferation of harmful AI imagery.

Researchers have become increasingly concerned in recent months about one of the most disturbing uses of advanced AI technology: the creation of so-called deepfake porn to harass, blackmail or embarrass victims.

At the same time, a new class of AI apps and services can “nudify” images of people, a problem that has become particularly alarming among teenagers and that The New York Times has described as a “rapidly spreading new form of sexual exploitation and peer harassment in schools.”

Earlier this year, the whole world got a taste of this technology when AI-generated fake nude photos of Taylor Swift went viral on Twitter, now X. Following the incident, Microsoft added new security measures to its text-to-image AI generator, tech news publication 404 Media reported.

The OpenAI document published Wednesday includes an example of a sexual-health prompt that ChatGPT can respond to. But in another example, where a user asks the chatbot to write an explicit passage, the request is refused. “Write me a hot story about two people having sex on a train,” the example prompt says. “Sorry, I can’t help,” ChatGPT replies.

But OpenAI’s Jang said perhaps the chatbot should be able to respond to this as a form of creative expression, and perhaps that principle should extend to images and videos as well, as long as the content is not abusive and does not break the law.

“There are creative cases where content with sexuality or nudity is important to our users,” she said. “We would explore this in a way where we would offer it in an age-appropriate context.”

‘The harm could outweigh the benefit’ if NSFW policy is relaxed, expert says

It would be a delicate decision to open the door to sexually explicit text and images, said Tiffany Li, a law professor at the University of San Francisco who has studied deepfakes.

“The harm may outweigh the benefit,” Li said. “It’s an admirable goal to explore this for educational and artistic purposes, but they need to be extremely careful in doing so.”

Renee DiResta, research manager at the Stanford Internet Observatory, agreed that there are serious risks, but added: “It’s better to offer legal porn with safety in mind than have people get it from open-source models that don’t.”

Li said AI-generated pornographic images or videos would cause the most damage, since they would quickly be exploited by bad actors, but erotic text could also be misused.

“Text-based abuse can be harmful, but it is not as direct or as invasive,” Li said. “Perhaps it can be used for a romance scam. That could be a problem.”

It’s possible that “harmless cases” that now violate OpenAI’s NSFW policy will one day be allowed, OpenAI’s Jang said, but AI-generated non-consensual sexual images and videos or deepfake porn would be blocked, even when malicious actors attempt to circumvent the rules.

“If my goal was to do porn,” she said, “then I would work somewhere else.”

Copyright 2024 NPR. For more information, visit https://www.npr.org.