A hacker stole OpenAI’s secrets, raising fears that China could too

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the creator of ChatGPT, and stole details about the design of the company’s AI technologies.

The hacker lifted details from discussions on an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.

OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed its board of directors, according to two people who discussed sensitive information about the company on condition of anonymity.

But executives decided not to share the news publicly because no customer or partner information had been stolen, the two people said. Executives did not consider the incident a national security threat because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the FBI or anyone else in law enforcement.

For some OpenAI employees, the news raised fears that foreign adversaries like China could steal AI technology that — though now primarily a work and research tool — could ultimately endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security and exposed rifts within the company regarding the risks of artificial intelligence.

After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.

Leopold Aschenbrenner, a former OpenAI researcher, alluded to the security breach in a podcast last month and reiterated his concerns. Credit: via YouTube

Mr. Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company, and he argued that his dismissal had been politically motivated. He alluded to the breach in a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security was not strong enough to protect against the theft of its most important secrets if foreign actors were to infiltrate the company.

“We appreciate the concerns that Leopold raised while at OpenAI, and these did not lead to his separation,” said an OpenAI spokeswoman, Liz Bourgeois. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

Fears that a hack of a US tech company could be linked to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a broad attack on federal government networks.

However, under federal and California law, OpenAI cannot bar people from working at the company because of their nationality, and policy researchers have said that excluding foreign talent from US projects could significantly hinder AI progress in the United States.

“We need the best and brightest minds working on this technology,” said Matt Knight, OpenAI’s head of security, in an interview with The New York Times. “It comes with some risks and we have to understand them.”

(The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems.)

OpenAI isn’t the only company building increasingly powerful systems using rapidly improving AI technology. Some of them – most notably Meta, the owner of Facebook and Instagram – are freely sharing their designs with the rest of the world as open source software. They believe that the risks posed by today’s AI technologies are few and that shared code allows engineers and researchers across the industry to identify and fix problems.

Today’s AI systems can help spread misinformation online, including text, still images and, increasingly, video. They are also beginning to take away some jobs.

Companies like OpenAI and its competitors Anthropic and Google add safeguards to their AI apps before offering them to individuals and businesses, hoping to prevent people from using the apps to spread misinformation or cause other problems.

But there’s not much evidence that today’s AI technologies pose a significant risk to national security. Studies by OpenAI, Anthropic and others over the past year showed that AI was not significantly more dangerous than search engines. Daniela Amodei, an Anthropic co-founder and the company’s president, said its latest AI technology wouldn’t be a big risk if its designs were stolen or freely shared with others.

“If it was owned by someone else, could it be extremely harmful to a lot of society? Our answer is ‘No, probably not,’” she told The Times last month. “Could it speed things up for a bad actor down the road? Maybe. It’s really speculative.”

However, researchers and technology executives have long worried that AI could one day fuel the creation of new bioweapons or help penetrate government computer systems. Some even believe it could destroy humanity.

A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to OpenAI’s board of directors.

“We started investing in security years before ChatGPT,” said Mr. Knight. “We are on a journey not only to understand risks and stay ahead of them, but also to deepen our resilience.”

Federal officials and state lawmakers are also pushing for government regulations that would bar companies from releasing certain AI technologies and fine them millions if their technologies cause harm. But experts say those risks are still years or even decades away.

Chinese companies are building their own systems that are nearly as powerful as the leading American systems. By some metrics, China has eclipsed the United States as the largest producer of AI talent, generating almost half of the world’s top AI researchers.

“It’s not crazy to think that China will soon be ahead of the US,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open source AI projects.

Some researchers and national security executives argue that the mathematical algorithms at the heart of current AI systems, while not dangerous today, could become dangerous and are calling for stronger controls in AI labs.

“Even if the worst-case scenarios are relatively low-probability, if they have a big impact, then it is our responsibility to take them seriously,” said Susan Rice, former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, during an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to pretend.”
