
Microsoft, OpenAI and NVIDIA work with US federal agencies to prepare for attacks on critical AI systems

More than 50 AI experts from the U.S. government, international government agencies, and private companies attended the four-hour session, including representatives from Amazon Web Services, Microsoft, Nvidia, OpenAI, and Palantir. Image credit: Reuters

U.S. federal government officials met with several technology companies, including major AI model operators, cybersecurity companies, and AI hardware manufacturers, to launch the first joint simulation of a cyberattack on critical AI systems.

This exercise was important because responding to cyber threats targeting AI technologies requires a different approach than responding to typical hacks.

Both Washington and Silicon Valley are working to get ahead of the unique cyber threats AI companies face. As Axios reports, historically, security measures often lag behind when new technologies become mainstream, leaving many companies vulnerable to evolving cyber threats.

Clayton Romans, deputy director of the Cybersecurity and Infrastructure Security Agency's Joint Cyber Defense Collaborative (JCDC), noted that as AI tools become more widely available, hackers could use them to accelerate and scale their attacks.

The JCDC hosted the simulation exercise at Microsoft’s office in Reston, Virginia. As is customary with such simulations, CISA did not publicly disclose the specific incident that participants simulated.

Romans explained that the exercise focused on the current threats they see and how government and the private sector can share information about those threats.

The four-hour session was attended by more than 50 AI experts from the U.S. government, international government agencies, and private companies, including representatives from Amazon Web Services, Microsoft, Nvidia, OpenAI, and Palantir.

Kyle Wilhoit, head of threat research at Palo Alto Networks’ Unit 42, was one of the participants. He said the exercise provided an opportunity to talk about the current threats they see and speculate on what new attack methods leveraging AI might look like in the future.

The exercise also helped CISA identify key contacts in the private sector for AI-related incidents and vice versa, according to Romans. This mutual understanding is critical for effective communication and response during an AI-related cyber incident.

The simulation exercise allowed participants to explore potential new threats on the horizon. Lessons learned from the exercise will feed into the development of the upcoming CISA AI Security Incident Playbook, which is expected to be released before the end of the year.

Romans mentioned that the JCDC plans to conduct another simulation exercise on AI before releasing the playbook.

Looking ahead, the lessons learned from these exercises will be critical in developing strategies and protocols for dealing with AI-related cyber threats.

By proactively addressing these issues, federal officials and industry leaders aim to strengthen the security and resilience of AI systems and ensure they can withstand and quickly recover from potential cyberattacks.
