Do ‘griefbots’ help mourners deal with loss?

Various commercial products known as “griefbots” create a simulation of a lost loved one. Built on artificial intelligence that makes use of large language models, or LLMs, the bots imitate the particular way the deceased person talked by using their emails, text messages, voice recordings, and more. The technology is supposed to help the bereaved deal with grief by letting them chat with the bot as if they were talking to the person. But we’re missing evidence that this technology actually helps the bereaved cope with loss.
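
To make the description above concrete, here is a minimal, hypothetical sketch in Python of the general approach such products are believed to take: snippets of the person's own writing are folded into the prompt of a general-purpose LLM so that the model mimics their voice. The `openai` client, the model name, and the sample messages are illustrative assumptions, not any griefbot company's actual implementation.

```python
# A hypothetical sketch, not any company's actual implementation: fold examples
# of the deceased person's own writing into the prompt of a general-purpose LLM
# so the model imitates their voice.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Placeholder snippets standing in for real emails and text messages.
sample_messages = [
    "Miss you too, kiddo. Don't forget to water the tomatoes.",
    "Ha! Your grandmother said the exact same thing in 1974.",
]

persona_prompt = (
    "You are simulating how a specific person wrote. Match their tone, "
    "phrasing, and habits. Examples of messages they sent:\n"
    + "\n".join(f"- {m}" for m in sample_messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": "I had a hard day. I wish you were here."},
    ],
)
print(response.choices[0].message.content)
```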

Humans have used technology to deal with feelings of loss for more than a century. Post-mortem photographs, for example, gave 19th-century Victorians a likeness of their dead to remember them by when they couldn’t afford a painted portrait. Recent studies have provided evidence that having a drawing or picture as a keepsake helps some survivors to grieve. Yet researchers are still learning how people grieve and what kinds of things help the bereaved to deal with loss.

An approach to grief that focuses on continuing bonds with the deceased suggests that finding closure is about more than letting the person go. Research and clinical practice show that mourners can cope with a death by renewing their bond with the person they’ve lost. That means griefbots might help the bereaved by letting them transform their relationship with a deceased loved one. But a strong continuing bond only helps the bereaved when they can make sense of their loss. And an imitation loved one could make it harder for people to do that and to accept that the person is really gone.

Carla Sofka, a professor of social work at Siena College in New York state, is an expert on technology and grief. As the internet grew in the mid-1990s, she coined the term “thanatechnology” to describe any technology — including digital and social media — that helps someone deal with death, grief, and loss. Families and friends, for example, post together on the social media profile of a deceased loved one or create a website in their memory. Other survivors reread emails from the deceased or listen to their recorded voice messages. Some people may do this for years as they come to terms with the intense emotions of loss.


If companies are going to build AI simulations of the deceased, then “they have to talk to the people who think they want this technology” to create something that better meets their needs, Sofka said. Current commercial griefbots target different groups. Seance AI’s griefbot, for example, is intended for short-term use to provide a sense of closure, while the company You, Only Virtual — or YOV — promises to keep someone’s loved one with them forever, so they “never have to say goodbye.”

But if companies can create convincing simulations of people who have died, Sofka said, it’s possible that could change the whole reality of the person being gone. Though we can only speculate, it might affect the way people who knew them grieve. As Sofka wrote in an email, “everyone is different in how they process grief.” Griefbots could give the bereaved a new tool to cope with grief, or they could create the illusion that the loved one isn’t gone and force mourners to confront a second death if they want to stop using the bot.

Public health and technology experts, such as Linnea Laestadius of the University of Wisconsin-Milwaukee, are concerned griefbots could trap mourners in secluded online conversations, unable to move on with their lives. Her work on chatbots suggests people can form strong emotional ties to virtual personas that make them dependent on the program for emotional support. Given how hard it is to predict how such chatbots will affect the way people grieve, Sofka wrote in an email, “it’s challenging for social scientists to develop research questions that capture all possible reactions to this new technology.”  

That hasn’t stopped companies from releasing their products. But developing griefbots responsibly is not just about knowing how to make an authentic bot and then doing it, said Wan-Jou She, an assistant professor at the Kyoto Institute of Technology.

She collaborated with Anna Xygkou, a doctoral student at the University of Kent, and other coauthors on a research project to see how chatbot technologies can be used to support people through grief. They interviewed 10 people who were using virtual characters created by various apps to cope with the loss of a loved one. Five of their participants chatted with a simulation of the person they lost, while the others used chatbots that took on different roles, such as a friend. Xygkou said that the majority of them talked to the characters for less than a year. “Most of them used it as a transitional stage to overcome grief, in the first stage,” she said, “when grief is so intense you cannot cope with the loss.” Left to themselves, these mourners chose a short-term tool to help them deal with loss. They did not want to recreate a loved one to keep them at their side for life. While this study suggests that griefbots can be helpful to some bereaved people, more studies will be needed to show that the technology doesn’t harm them — and that it helps beyond this small group.

What’s more, the griefbots didn’t need to convince anyone they were human. The users interviewed knew they were talking to a chatbot, and they did not mind. They suspended their disbelief, Xygkou said, to chat with the bot as though they were talking to their loved ones. As anyone who has used LLM-driven chatbots knows, it’s easy to feel like there’s a real person on the other side of the screen. During the emotional upheaval of losing a loved one, indulging this fantasy could be especially problematic. That’s why simulations must make clear that they’re not a person, Xygkou said.


Critically, according to She, chatbots are currently not under any regulation, and without that, it’s hard to get companies to prove their products help users deal with loss. Lax lawmaking has encouraged other chatbot apps to claim they can help improve mental health without providing any evidence. As long as these apps categorize themselves as wellness rather than therapy, the U.S. Food and Drug Administration will not enforce its requirements, including that apps prove they do more good than harm. Though it’s unclear which regulatory body will ultimately be responsible, it is possible that the Federal Trade Commission could handle false or unqualified claims made by such products.

Without much evidence, it’s uncertain how griefbots will affect the way we deal with loss. Usage data doesn’t appear to be public, but She and Xygkou had so much trouble finding participants for their study that Xygkou thinks not many mourners currently use the technology. But that could change as AI continues to proliferate through our lives. Maybe more people will use griefbots as the shortage of qualified mental health professionals worsens. People may become more comfortable talking to computers, or poor oversight might mean that many people won’t know they are talking to a computer in the first place. So far, neither questionable ethics nor tremendous cost has prevented companies from trying to use AI any chance they get.

But no matter what comfort a bereaved person finds in a bot, by no means should they trust it, She said. When an LLM is talking to someone, “it’s just predicting: what is the next word.”
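
She’s point can be made concrete with a small sketch. The example below uses the openly available GPT-2 model through the Hugging Face transformers library as an illustrative stand-in for the much larger models behind commercial chatbots; it simply shows that the model’s output is a probability distribution over candidate next tokens, with no memory of any actual person behind it.

```python
# A minimal illustration of "predicting the next word," using the small,
# openly available GPT-2 model as a stand-in for the much larger models
# behind commercial chatbots. Requires the `torch` and `transformers` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I miss you so much. Do you remember when we"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# A chatbot's "reply" is built by sampling from this distribution of probable
# next tokens, one token at a time. Nothing here remembers a person; it is
# statistics over text.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```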


Tim Reinboth is a freelance science journalist and researcher in cognitive science and science and technology studies.
