
British think tank calls for government reporting system for AI incidents

A UK think tank has stressed the need for an incident reporting system to log misuse or malfunctions of artificial intelligence (AI). Without this system, the Department for Science, Innovation and Technology (DSIT) could miss important insights into AI failures.

The Centre for Long-Term Resilience (CLTR) said that while incident reports are collected and investigated in other safety-critical industries such as medicine and aviation, there is a “worrying gap” in UK regulatory plans for AI.

The CLTR said its mission is to “transform global resilience to extreme risks” by working with governments and other institutions to improve governance, processes and decision-making.

“AI has a history of making unexpected mistakes. Since 2014, over 10,000 security incidents involving deployed AI systems have been reported in the news. As AI becomes more integrated into society, the number and scale of incidents are likely to increase,” the CLTR said in the report.

Critical gap highlighted by CLTR

The CLTR warned that without a robust incident reporting framework, DSIT would miss incidents involving highly capable foundation models, such as bias and discrimination, or misuse by malicious actors that could cause harm.

The CLTR also said that DSIT lacked visibility into incidents arising from the government’s own use of AI in public services that could directly harm the public, such as the unlawful suspension of social benefits and the resulting miscarriages of justice.

Without incident reporting, the CLTR added, DSIT would be less able to detect disinformation campaigns or the development of biological weapons, cases in which urgent action may be required to protect UK citizens.

Finally, the government may also overlook cases of harm caused by AI companions, tutors and therapists, where deep trust combined with extensive personal data could lead to abuse, manipulation, radicalisation or dangerous advice.

“DSIT lacks a central, up-to-date picture of these types of incidents when they occur. While some regulators collect some incident reports, we note that this is unlikely to capture the novel harms caused by disruptive AI,” the CLTR said.

The benefits of AI incident reporting

According to CLTR, incident reporting is a proven safety mechanism that supports the UK government’s context-based approach to AI regulation.

“DSIT’s priority should be to ensure that the UK Government learns of such novel harms not through the news but through established incident reporting processes,” CLTR said.

Incident reporting enables monitoring of AI-related security risks in real-world contexts and provides a feedback loop for regulatory adjustments. It also enables coordinated responses to major incidents followed by root cause investigations for cross-industry insights.

In addition, the data can help identify early warnings of potential large-scale harm, which can then feed into risk assessments by the AI Safety Institute and the Central AI Risk Function.

The CLTR’s recommended next steps

The CLTR made three key recommendations for immediate action by the UK government.

First, they proposed establishing a streamlined system for reporting AI incidents in public services. This could be achieved by extending the Algorithmic Transparency Recording Standard (ATRS) to include a framework specifically designed for reporting incidents involving AI systems used in public sector decision-making.

The ATRS aims to create transparency by allowing public bodies to disclose details of the algorithmic tools they use.

According to CLTR, these incident reports should be forwarded to a government agency and possibly made available to the public to increase transparency and accountability.

Second, CLTR advised the government to work with UK regulators and experts to identify critical gaps in AI oversight. This step is crucial to ensure comprehensive coverage of priority incidents and to understand the necessary stakeholders and incentives that are essential to creating an effective regulatory framework.

Finally, the CLTR proposed improving the DSIT’s capacity to monitor, investigate and respond to AI incidents. This could include establishing a pilot AI incident database under the DSIT’s central function, aimed at developing the necessary policy and technical infrastructure for collecting and processing AI incident reports.

The database would initially focus on the most pressing gaps identified by stakeholders and could eventually include reports from all UK regulators.
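To make the proposal more concrete, below is a minimal sketch of what a structured record in such a pilot incident database might look like. The field names and categories are illustrative assumptions only; neither DSIT nor the CLTR has published a schema of this kind.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIIncidentReport:
    """One entry in a hypothetical pilot AI incident database (illustrative only)."""
    incident_id: str                 # identifier assigned on intake
    reported_at: datetime            # when the report was filed
    reporting_body: str              # e.g. a regulator or public-sector body
    system_name: str                 # the AI system involved
    deployment_context: str          # where and how the system was used
    harm_category: str               # e.g. "bias/discrimination", "misuse", "malfunction"
    severity: str                    # e.g. "low", "moderate", "severe"
    description: str                 # free-text account of what happened
    affected_parties: Optional[str] = None  # who was harmed, if known
    publicly_disclosed: bool = False        # whether the entry is published for transparency

def new_report(system_name: str, harm_category: str, description: str,
               reporting_body: str, severity: str = "unknown") -> AIIncidentReport:
    """Create a report stamped with the intake time; the store would assign the ID."""
    return AIIncidentReport(
        incident_id="",
        reported_at=datetime.now(timezone.utc),
        reporting_body=reporting_body,
        system_name=system_name,
        deployment_context="",
        harm_category=harm_category,
        severity=severity,
        description=description,
    )
```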

UK government interventions in AI safety

In May, the British government announced that the UK-based AI Safety Institute’s evaluation platform had been made available to the global AI community to pave the way for safe innovation of AI models.

By making the Inspect evaluation platform available to the global community, the Institute aims to help accelerate global work on AI safety assessments, thereby contributing to better safety testing and the development of safer models.

In February, the British government announced that the UK would provide grants to researchers to investigate how society can be protected from the risks of AI.

The funding will also be used to exploit the benefits of AI, such as increased productivity. The most promising proposals will be developed into longer-term projects and receive further funding.

The news arrived in the same week that the UK and South Korea reached a landmark agreement to create a global network of AI safety institutes. At an AI summit in South Korea, ten countries and the European Union committed to working together on a network to improve the science of AI safety.
