
Back from the dead online: How AI chatbots that simulate the loss of loved ones could cause “unwanted digital hauntings.”

AI ethicists at the Leverhulme Center for the Future of Intelligence in Cambridge describe the area as “high risk”.

A man uses AI technology to “resurrect” his deceased grandmother, causing controversy online. Photo: Baidu

Co-author Dr. Tomasz Hollanek from the Leverhulme Center said: “It is vital that digital afterlife services take into account the rights and consent of not only those they recreate, but also those who need to interact with the simulations.”

“These services risk causing people great distress by subjecting them to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost.”

“The potential psychological impact, particularly at an already difficult time, could be devastating.”

The study, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to advertise products to users in the manner of a deceased loved one, or to distress children by insisting that a dead parent is still “with you”.

The researchers say that if the living opt in to be virtually recreated after their death, the resulting chatbots could be used by companies to spam surviving family members and friends with unsolicited notifications, reminders and updates about the services they offer – akin to being digitally “haunted by the dead”.

Dr. Tomasz Hollanek is co-author of the study.

Even those who initially seek comfort from a deadbot can become drained by daily interactions that turn into an “overwhelming emotional burden,” the study authors argue. Yet they may also be powerless to have the AI simulation suspended if their now-deceased loved one signed a long-term contract with a digital afterlife service.

The co-author of the study, Dr. Katarzyna Nowaczyk-Basinska, said: “Rapid advances in generative AI mean that almost anyone with internet access and some basic knowledge can revive a deceased loved one.”

“This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and to ensure that it is not compromised by financial motives, for example those of digital afterlife services.

“The rights of both data donors and those who interact with AI services after death should be equally protected.”

A study from the University of Cambridge found that AI chatbots – so-called deadbots – need design safety protocols to prevent them from causing psychological harm. Photo: Getty Images
The researchers say there are already platforms that offer to recreate the dead with AI for a fee, such as Project December, which initially used GPT models before developing its own systems, and apps such as HereAfter.

According to the study, similar services are now emerging in China.

Hollanek said that people “could develop strong emotional bonds with such simulations, making them particularly vulnerable to manipulation.”

He said that options should be considered to “retire deadbots in a dignified manner,” which “could mean a form of digital burial.”

“We recommend designing protocols to prevent deadbots from being used in disrespectful ways, such as for advertising or an active social media presence,” he added.

The researchers recommend age restrictions for deadbots and call for “meaningful transparency” to ensure users always know they are interacting with an AI.

They also urged design teams to prioritize opt-out protocols that allow users to end their relationships with deadbots.

Nowaczyk-Basinska said: “We now need to think about how to mitigate the social and psychological risks of digital immortality, because the technology is already there.”