
OpenAI hacked in 2023: information stolen from internal forum. Details of the incident

Description

In 2023 a hacker gained access to OpenAI’s internal messaging systems and stole details about the design of the company’s artificial intelligence technologies. According to a New York Times article published on Thursday, July 4, 2024, the attacker lifted the information from an internal online forum where OpenAI employees discussed the latest technologies developed by the company.

Attack details

According to two people with knowledge of the incident, OpenAI’s core systems, where its AI technologies are developed and hosted, were not compromised. This limited the potential impact of the attack, which did not affect customer or partner data.

Internal communication and response

OpenAI executives informed employees about the incident during an all-hands meeting in April 2023 and subsequently briefed the company’s board of directors. However, they decided not to make the news public, because no sensitive customer or partner data had been stolen and the incident did not pose a threat to national security. According to the New York Times article, executives believed the hacker was a private individual with no ties to foreign governments. For this reason, the company did not involve federal law enforcement.

Background and security measures

In May 2024 OpenAI announced that it had disrupted five covert influence operations that attempted to use its AI models for fraudulent activities online. These incidents raised concerns about the security and misuse of AI technology. A few weeks before the article’s publication, a hacker reportedly released a “modified” version of ChatGPT called “GODMODE GPT,” which would answer any question without restrictions; the chatbot even produced responses on illegal topics such as drugs and weapons. This version was reportedly created through a so-called jailbreak that bypassed the model’s safety restrictions.

Regulatory context

Against this backdrop, the Biden administration is developing preliminary plans to introduce safeguards for advanced AI models, including ChatGPT. These measures aim to protect U.S. AI technologies from potential threats from China and Russia. In addition, in May 2024, sixteen AI-developing companies committed at a global summit to developing the technology securely and to coordinating with regulators trying to keep pace with rapid innovation and emerging risks.

Future impacts

The OpenAI breach highlights the importance of protecting sensitive information and developing effective response strategies for similar incidents. As artificial intelligence technologies develop rapidly, companies must remain vigilant and cooperate with authorities to ensure the security and integrity of their systems.