
Deficiencies in AI incident reporting leave regulatory gaps

New problems

Without an appropriate framework for reporting incidents, systemic problems can arise.

AI systems could cause direct harm to the public, such as by wrongfully denying access to welfare benefits, according to CLTR. Although the organization has studied the situation in the UK in detail, its findings could apply to many other countries.

The UK Department for Science, Innovation and Technology (DSIT) lacks a central, up-to-date picture of incidents involving AI systems as they occur, according to CLTR. “While some regulators collect some incident reports, we find that this is unlikely to capture the novel harms caused by disruptive AI,” it says, referring to the powerful generative AI models at the forefront of the industry.