
How to investigate when a robot causes an accident – and why it’s important that we do so

<figure><figcaption>A close-up of a robot’s hand holding a magnifying glass. <a href="https://www.shutterstock.com/image-photo/close-robots-hand-holding-magnifying-glass-720412816" rel="nofollow noopener">Andrey_Popov/Shutterstock</a></figcaption></figure>
<p>Robots are becoming more and more common in our daily lives.  They can be incredibly useful (bionic limbs, robotic lawnmowers, or robots that deliver meals to people in quarantine) or just plain entertaining (robotic dogs, dancing toys, and acrobatic drones).  Imagination may be the only limit to what robots can do in the future.</p>
<p>But what happens when robots don’t do what we want them to do – or do it in a way that causes harm?  For example, what happens if a bionic arm is involved in a car accident?</p>
<p>Robot accidents are becoming increasingly concerning for two reasons.  First, the increasing number of robots naturally leads to an increase in the number of accidents in which they are involved.  Second, we are getting better at building more complex robots.  When a robot is more complex, it is harder to understand why something went wrong.</p>
<p>Most robots are based on various forms of artificial intelligence (AI). AIs are capable of making human-like decisions (though those decisions may be objectively good or bad ones). These decisions cover anything from identifying an object to interpreting speech.</p>
<p>AIs are trained to make these decisions for the robot based on information from massive data sets.  The AIs are then tested for their accuracy (how well they do what we want them to do) before being given the task.</p>
<p>AIs can be designed in different ways. As an example, consider the robot vacuum cleaner. It could be designed to redirect itself in a random direction whenever it bumps into something. Alternatively, it could be designed to map its surroundings to find obstacles, cover every surface area and return to its charging station. While the first vacuum simply takes input from its sensors, the second feeds that input into an internal mapping system. In both cases, the AI takes in information and makes a decision based on it.</p>
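<p>The two vacuum designs above can be sketched as simple decision functions. This is a minimal illustration of the contrast, not any real vacuum’s firmware; the sensor readings, headings and map structure are assumptions made for the example:</p>

```python
import random

def random_bounce(bumped: bool, heading: float) -> float:
    """Simple vacuum: pick a new random heading whenever the bumper triggers."""
    if bumped:
        return random.uniform(0.0, 360.0)  # redirect in a random direction
    return heading  # otherwise keep going straight

def mapping_vacuum(bumped: bool, position: tuple, obstacle_map: set) -> set:
    """Mapping vacuum: remember each collision so future routes avoid it."""
    if bumped:
        return obstacle_map | {position}  # record the obstacle's location
    return obstacle_map

# Both take in sensor information and make a decision based on it:
new_heading = random_bounce(bumped=True, heading=90.0)
known_obstacles = mapping_vacuum(True, (3, 4), set())
```

<p>The difference investigators care about is state: the first vacuum’s decision depends only on the current bump, while the second’s future behaviour depends on everything it has recorded so far.</p>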
<p>The more complex a robot is, the more types of information it has to interpret. It may also have to evaluate multiple sources of one type of data, for example, in the case of acoustic data: a live voice, a radio and the wind.</p>
<p>As robots become more complex and able to respond to a variety of information, it becomes ever more important to determine which information the robot responded to, particularly when it causes harm.</p>
<hr/>
<p><em><strong>Read more: We’re teaching robots to develop autonomously – so they can adapt to life on distant planets</strong></em></p>
<hr/>
<h2>Accidents happen</h2>
<p>As with any product, things can and do go wrong with robots. Sometimes the problem is internal, such as the robot failing to recognise a voice command. Sometimes it is external – the robot’s sensor was damaged. And sometimes it can be both, such as the robot not being designed to work on carpets and “tripping”. Robot accident investigations must look at all potential causes.</p>
<p>While it may be inconvenient if the robot is damaged when something goes wrong, we are far more concerned when the robot causes harm to a person, or fails to mitigate that harm. For example, if a bionic arm fails to grasp a hot drink and knocks it onto the owner; or if a care robot fails to register a distress call when a frail user has fallen.</p>
<p>Why is robot accident investigation different from human accident investigation? Notably, robots have no motives. We want to know why a robot made the decision it did, based on the particular set of inputs it had.</p>
<p>In the example of the bionic arm: was it a miscommunication between the user and the hand? Did the robot confuse several signals? Did it lock unexpectedly? In the example of the person falling: could the robot not “hear” the call for help over a noisy fan? Or did it have trouble interpreting the user’s speech?</p>
<figure><figcaption>A person writing with a bionic arm.</figcaption></figure>

The black box

Robot accident investigation has one key advantage over human accident investigation: the potential for a built-in witness. Commercial aircraft have a similar witness: the black box, which is built to withstand plane crashes and provide information as to why the crash happened. This information is incredibly valuable, not only for understanding incidents but for preventing them from happening again.

As part of RoboTIPS, a project focusing on responsible innovation for social robots (robots that interact with humans), we have created what we call the ethical black box: an internal record of the robot’s inputs and corresponding actions. The ethical black box is designed for each type of robot it inhabits, and is built to record all the information the robot acts on. This can be voice, visual or even brainwave activity.
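The core idea – pairing each input the robot responded to with the action it took, timestamped, in a tamper-resistant log – can be sketched as follows. This is a hypothetical illustration of the concept, not the RoboTIPS implementation; the field names and export format are assumptions:

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class EthicalBlackBox:
    """Minimal sketch of an input/action recorder for a robot.

    Hypothetical structure for illustration only.
    """
    records: list = field(default_factory=list)

    def log(self, sensor: str, reading, action: str) -> None:
        # Pair each input the robot responded to with the action it took.
        self.records.append({
            "time": time.time(),
            "sensor": sensor,     # e.g. "microphone", "camera"
            "reading": reading,   # raw or summarised input
            "action": action,     # what the robot decided to do
        })

    def export(self) -> str:
        # Serialise the full record for accident investigators.
        return json.dumps(self.records, default=str)

# Example: a care robot logging an acoustic input and its response
box = EthicalBlackBox()
box.log("microphone", "audio level 72 dB", "no response")
```

An investigator reading such a log could see not just what the robot did, but which sensor reading it was reacting to at the time – exactly the question raised by the bionic arm and care robot examples above.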

We test the ethical black box on various robots both in the laboratory and under simulated accident conditions. The goal is for the ethical black box to become the standard for robots of all brands and applications.


Read more: Medical robots: Their facial expressions help people trust them


While the data recorded by the ethical black box would still need to be interpreted in the case of an accident, having this data in the first place is crucial in allowing us to investigate.

The investigation process provides a chance to ensure that the same error does not occur twice. The ethical black box is not only a way to build better robots, but also to innovate responsibly in an exciting and dynamic field.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Keri Grieman receives funding from the EPSRC and the Alan Turing Institute.