Call for safeguards to prevent people being “haunted” by AI chatbots of deceased loved ones

Artificial intelligence that allows users to hold text and voice conversations with lost loved ones risks causing psychological harm and even digitally “haunting” those left behind, unless design safety standards are put in place, according to researchers at the University of Cambridge.

“Deadbots” or “griefbots” are AI chatbots that simulate the speech patterns and personality traits of the dead based on the digital footprints they leave behind. Some companies are already offering these services, enabling a completely new type of “post-mortem presence”.

AI ethicists from Cambridge’s Leverhulme Center for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the evolving “digital afterlife industry”, to demonstrate the potential consequences of sloppy design in an area of AI they describe as “high risk”.

The study, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to covertly advertise products to users in the manner of a deceased loved one, or to distress children by insisting that a dead parent is still “with you”.

If the living sign up to be virtually recreated after their death, the resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they offer – akin to being digitally “stalked by the dead”.

Even those who initially take comfort from a “deadbot” could become drained by daily interactions that become an “overwhelming emotional weight”, the researchers argue, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a long-term contract with a digital afterlife service.

“Rapid advances in generative AI mean that almost anyone with internet access and some basic knowledge can revive a deceased loved one,” said Dr. Katarzyna Nowaczyk-Basińska, co-author of the study and researcher at the Leverhulme Center for the Future of Intelligence in Cambridge (LCFI).

“This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and ensure that it is not encroached on by the financial motives of digital afterlife services, for example.

“At the same time, a person may leave an AI simulation as a parting gift for loved ones who are not ready to process their grief in this way. The rights of both data donors and those who interact with AI afterlife services should be equally protected.”

There are already platforms that offer to recreate dead people with AI for a small fee, such as Project December, which initially used GPT models before developing its own systems, and apps like HereAfter. Similar services have also emerged in China.

One of the potential scenarios in the new work is “MaNana”: a conversation-based AI service that allows people to create a deadbot that simulates their deceased grandmother, without the consent of the “data donor” (the dead grandparent).

In the hypothetical scenario, an adult grandchild who is initially impressed and comforted by the technology begins to receive advertisements once a “premium trial” ends – for example, the chatbot suggesting orders from food delivery services in the voice and style of the deceased.

The relative feels they have disrespected their grandmother’s memory and wants the deadbot taken down, but in a meaningful way – something the service providers failed to consider.

“People could develop strong emotional bonds with such simulations, making them particularly vulnerable to manipulation,” said co-author Dr. Tomasz Hollanek, also from the LCFI in Cambridge.

“Methods and even rituals should be considered to retire deadbots in a dignified manner. Depending on the social context, this could be, for example, a form of digital funeral or other types of ceremonies.”

“We recommend designing protocols to prevent deadbots from being used in disrespectful ways, such as advertising or an active social media presence.”

While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they die, they argue that a ban on deadbots based on non-consenting donors would be unworkable.

They suggest that design processes should include a series of prompts for those wishing to “resurrect” their loved ones – such as asking whether they ever discussed with the deceased how they would like to be remembered – so that the dignity of the departed is foregrounded in deadbot development.

Another scenario presented in the paper, an imaginary company called “Paren’t,” highlights the example of a terminally ill woman who leaves behind a deadbot to help her eight-year-old son cope with grief.

While the deadbot initially serves as a therapeutic tool, the AI begins to produce confusing responses as it adapts to the child’s needs, such as depicting an impending face-to-face encounter.

The researchers recommend age restrictions for deadbots and also call for “meaningful transparency” to ensure users always know they are interacting with an AI – for example, disclaimers similar to current warnings about content that can cause seizures.

The final scenario examined in the study – a fictional company called “Stay” – depicts an older person who secretly commits to a deadbot of themselves and pays for a twenty-year subscription, in the hope that it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage and receives a barrage of emails in the voice of their deceased parent. Another does, but ends up emotionally exhausted and wracked with guilt over the deadbot’s fate. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is critical that digital afterlife services take into account the rights and consent of not only those they recreate, but also those who must interact with the simulations,” Hollanek said.

“These services risk causing people great distress by subjecting them to unwanted digital hauntings by alarmingly accurate AI replicas of their deceased loved ones. The potential psychological impact could be devastating, particularly at an already difficult time.”

The researchers urge design teams to prioritize opt-out protocols that allow potential users to end their relationships with deadbots in a way that brings about emotional closure.

Nowaczyk-Basińska added: “We now need to think about how to mitigate the social and psychological risks of digital immortality, because the technology is already there.”