
Biden administration urges companies to crack down on sexually abusive deepfakes

The United States is reportedly looking to put an end to sexually abusive artificial intelligence-based deepfakes by asking companies to cooperate voluntarily in stopping this misuse of the technology.

In the absence of any government legislation on the matter, the White House on Thursday called on companies to cooperate voluntarily.

Officials believe that by committing to a set of specific steps, the private sector can curb the production, distribution, and monetization of such non-consensual AI-generated images, including explicit images of children.



In addition, the White House recommended that technology companies impose restrictions on web services and apps that allegedly allow their users to create and modify sexually explicit photos without the consent of the person concerned.

Similarly, cloud service providers can prohibit explicit deepfake websites and applications from using their services.

With the help of new generative AI techniques, it is now easy to transform a person’s likeness into a sexually explicit AI deepfake and spread these lifelike images on social media or chat rooms. Whether it’s teenagers or celebrities, the victims have few options to put a stop to it.

The White House reportedly describes image-based sexual abuse as one of the fastest-growing harmful applications of AI.

Furthermore, malicious actors can use AI to manipulate and distribute real photos without consent at alarming speed and scale.


Officials behind the call

The call was issued by White House officials Arati Prabhakar, director of the Office of Science and Technology Policy, and Jennifer Klein, chair of the Gender Policy Council.

In a phone interview, Prabhakar said the White House hopes companies linked to the rise in image-based sexual abuse will act quickly and decisively to put an end to it.

AI deepfake cases

Concerns about AI-generated sexual images continue to be raised around the world. In April last year, students in London called for school lessons on artificial intelligence to address the creation of fake nude images, not just cheating on exams.

London police launched an investigation after fake nude photos of students were created and circulated. One student who said she was a victim of the AI-generated images does not know who the perpetrators are, but said she is ashamed of what her peers did.

The teen called the situation "super frustrating": even though she wants people to understand the images are not really her, she still feels embarrassed knowing they exist. She added that while educators forbid students from using artificial intelligence to cheat on tests or assignments, they hardly ever address the more sophisticated and harmful ways it could be used.

In Tasmania, Australia, a man recently became the first person in the jurisdiction to be charged over child abuse material created with artificial intelligence.

The 48-year-old man from Gravelly Beach was found to have downloaded, uploaded, and possessed hundreds of illegal AI-generated videos.

He pleaded guilty on March 26, 2024, to accessing and maintaining child abuse material following his arrest and prosecution.



ⓒ 2024 TECHTIMES.com All rights reserved. No reproduction without permission.