A hacker has stolen OpenAI secrets, raising fears that China could do the same

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s AI technologies.

The hacker lifted details from discussions on an online forum where employees talked about OpenAI’s latest technologies, two people familiar with the incident said, but did not gain access to the systems where the company hosts and develops its artificial intelligence.

OpenAI executives disclosed the incident to employees during a general meeting at the company’s San Francisco office in April 2023 and informed the board, according to the two people, who discussed confidential information about the company on condition of anonymity.

But executives decided not to make the news public because no information about customers or partners was stolen, the two people said. Executives did not view the incident as a national security threat because they believed the hacker was a private citizen with no known ties to a foreign government. The company did not notify the FBI or other law enforcement agencies.

The news raised fears among some OpenAI employees that foreign adversaries like China could steal AI technology that — while currently used primarily as a work and research tool — could ultimately threaten U.S. national security. It also raised questions about how seriously OpenAI takes security and revealed internal disagreements about the risks of artificial intelligence.

Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager whose job was to ensure that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Leopold Aschenbrenner, a former OpenAI researcher, alluded to the breach in a podcast last month and reiterated his concerns. Credit: via YouTube

Mr. Aschenbrenner said OpenAI fired him in the spring for leaking other information to the outside world, and argued his dismissal was politically motivated. He alluded to the data theft in a recent podcast, but details of the incident have not been disclosed. He said OpenAI’s security measures are not strong enough to protect against the theft of important secrets if foreign actors infiltrate the company.

“We appreciate the concerns Leopold raised during his time at OpenAI, and this did not lead to his termination,” said OpenAI spokeswoman Liz Bourgeois. Referring to the company’s efforts to develop artificial general intelligence, a machine that can do anything the human brain can, she added, “While we share his commitment to building secure AGI, we disagree with many of the claims he has made since then about our work. This includes his descriptions of our security, particularly this incident, which we addressed and shared with our board before he joined the company.”

Fears that the hack of an American technology company may have ties to China are not unfounded. Last month, Microsoft President Brad Smith testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a large-scale attack on federal government networks.

However, under federal and California law, OpenAI cannot exclude people from working at the company based on their nationality. Political scientists point out that excluding foreign talent from U.S. projects could significantly hinder AI progress in the United States.

“We need the best and brightest minds working on this technology,” said Matt Knight, OpenAI’s security chief, in an interview with the New York Times. “It brings some risks, and we need to address those.”

(The Times has sued OpenAI and its partner Microsoft, claiming copyright infringement of news content related to AI systems.)

OpenAI isn’t the only company building increasingly powerful systems using rapidly improving AI technology. Some of them – most notably Meta, the owner of Facebook and Instagram – are making their designs available to the rest of the world for free as open source software. They believe that the dangers posed by today’s AI technologies are small and that by sharing code, engineers and researchers across the industry can identify and fix problems.

Today’s AI systems can help spread misinformation online, including text, still images, and increasingly video. They are also starting to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google are adding guardrails to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.

But there is little evidence that today’s AI technologies pose a significant risk to national security. Studies by OpenAI, Anthropic and others last year showed that AI is not significantly more dangerous than search engines. Daniela Amodei, co-founder and president of Anthropic, said the company’s latest AI technology would not pose a major risk if its designs were stolen or freely shared with others.

“If it belonged to someone else, could that be enormously damaging to a large part of society? Our answer is no, probably not,” she told the Times last month. “Could it accelerate the development of a bad actor? Maybe. That’s really speculative.”

Yet researchers and technology executives have long feared that artificial intelligence could one day help develop new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already tightening controls on their technical operations. OpenAI recently set up a safety and security committee to study how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to OpenAI’s board of directors.

“We started investing in security years before ChatGPT,” Knight said. “We are not only in the process of understanding and getting ahead of the risks, but also building our resilience.”

Federal and state lawmakers are also pushing for regulations that would bar companies from releasing certain AI technologies and impose fines in the millions of dollars if their technologies cause harm. But experts say those dangers are still years or even decades away.

Chinese companies are building their own systems that are nearly as powerful as the leading U.S. systems. By some measures, China has overtaken the United States as the largest producer of AI talent, with nearly half of the world’s leading AI researchers coming from the country.

“It’s not crazy to think that China will soon be ahead of the United States,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open-source AI projects.

Some researchers and national security leaders argue that the mathematical algorithms at the core of current AI systems, while safe today, could become dangerous, and are calling for tighter controls on AI labs.

“Even if the probability of the worst-case scenarios is relatively low, when they have serious consequences, it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to claim.”