close
close

RSAC: Researchers share lessons from world’s first AISIRT

As the use of AI explodes in sensitive sectors like infrastructure and national security, a team at Carnegie Mellon University is pioneering the field of AI security response.

In the summer of 2023, researchers at the university’s Software Engineering Institute, birthplace of the first Computer Emergency and Response Team (CERT), felt there was an urgent need to create a new unit to lead research and development efforts to define tactics Incident response leads AI and machine learning (ML) systems techniques and procedures and coordinates community response efforts.

Just over six months later, Lauren McIlvenny and Gregory Touhill shared their insights from leading the world’s first AI Security Incident Response Team (AISIRT) during the 2024 RSA Conference.

Explain the need for an AISIRT

The AISIRT was created because McIlvenny and Touhill’s research data showed a continued increase in AI-powered attacks and attacks on AI systems.

“We continue to see a lot of activity related to AI-related systems and technologies now being targeted,” Touhill said.

The pair mentioned the numerous threats to generative AI tools such as AI chatbots and large language model (LLM) systems, as well as attacks targeting the engines that power AI models and graphics processing unit (GPU) kernels, their implementations can be vulnerable to memory leaks and can be used to access sensitive information.

The AISIRT was developed in collaboration between Carnegie Mellon University and the CERT Division partner network.

It became partially operational after its initial launch in August 2023 and has been fully operational since October 2023.

The focus is on identifying, understanding and remediating “vulnerabilities” for AI systems of interest to and used by defense and national security organizations.

In this context, McIlvenny explained that “vulnerabilities” include traditional software vulnerabilities, adversarial machine learning vulnerabilities, and flaws that lead to common cyber AI attacks.

How the AISIRT works

The AISIRT uses existing rules of engagement from responding to cyber incidents and is structured based on a traditional Computer Security Incident and Response Team (CSIRT).

It consists of four main components: an AI incident response element, an AI vulnerability detection toolset, an AI vulnerability management framework, and an AI situational awareness service.

A variety of stakeholders are involved in AISIRT, including:

  • A team leader who can explain the technical aspects to those affected in an understandable way
  • System/database administrators
  • Network engineers
  • AI/ML practitioner
  • Threat intelligence researcher
  • Specialists from Carnegie Mellon University and other trusted industry/academic partners as needed

McIlvenny and Touhill said they see AISIRT in the future as a hub for updating and sharing best practices, standards and guidelines around AI for defense and national security organizations.

They plan to build an AI community of practice across academia, industry, defense and national security organizations, and legislative bodies.

“At least 20% of what we show here in the AISIRT structure will need further development in the future,” McIlvenny estimated.

Lessons learned from six months of AISIRT

McIlvenny and Touhill shared some of the lessons learned after more than six months leading AISIRT.

These are:

  • AI vulnerabilities are cyber vulnerabilities
  • AI vulnerabilities appear throughout the system
  • Cybersecurity processes are mature and should evolve to support AI
  • AI systems differ from today’s traditional IT in several interesting ways
  • The complexity of AI systems makes triage, diagnosis and troubleshooting difficult
  • There are no tools for identifying vulnerabilities yet
  • There is a need for secure development training (e.g. DevSecOps) tailored to AI developers
  • Through penetration testing of AI systems by the Red Team throughout the entire development cycle, significant vulnerabilities can be identified early in the development cycle

However, they insisted that AISIRT – and AI security as a whole – is still in its infancy, and that organizations using AI and stakeholders seeking to protect themselves from AI threats still have countless unanswered questions, including the following :

  • New regulatory systems: What is the standard of care when using AI systems and what is the standard of care when developing AI systems?
  • Evolving Privacy Implications: How will AI systems impact citizens’ privacy? How will AI systems weaken existing privacy protocols?
  • Intellectual Property Threats: What do I do if our valuable intellectual property ends up in a generative AI system? What do I do if our valuable intellectual property is discovered in an AI system?
  • Governance and supervision: What are the best practices for AI governance and monitoring? Do I need to set up separate governance models for European and North American business units due to different regulatory frameworks?

“We are at a stage where the questions surrounding AI safety still far outnumber the answers. So please get in touch and share your experiences with using and securing AI,” concluded Touhill.