Are We Ready for AI to Raise the Dead?

Eleven years ago, Tupac came back to life. So did Michael Jackson two years later. Okay, they still weren’t breathing. But there they were, gracing the stage, digitally reincarnated as “holograms.” They caused a commotion the moment they materialized. Some people who saw them thought they were pretty sweet—a nice tribute. Others got queasy. Did the shows’ producers have the families’ approval? Would the King of Pop even want to be resurrected as a collection of light beams? Could we really make that choice for him?

Just a decade later, you can forget about holograms. We’re approaching the possibility of digital immortality.

Here in 2023, we have witnessed the mainstreaming of artificial intelligence in the space of a few months through the text-to-image generator DALL-E and the “large language model” ChatGPT. There’s now AI in your browser with capabilities that seem to have arrived way ahead of schedule. The latest model rolled out by OpenAI, a Bay Area start-up, is called GPT-4, and it draws on a huge amount of data and “machine learning” to answer questions, have a conversation, even write whole essays with citations. Those citations aren’t always real, however, and the information isn’t always true. The model can’t really assess truth or fiction; it just learns how to assemble words in a coherent way. It can construct human speech, but it isn’t conscious or self-aware.
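
To see what “assembling words in a coherent way” means at its crudest, here’s a toy sketch in Python. It learns nothing but which word tends to follow which in a tiny made-up corpus, then chains statistically plausible words together. GPT-4 is incomparably more sophisticated, but the basic point carries over: nothing in this loop knows or cares whether the output is true.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; a real model trains on a sizable chunk of the internet.
corpus = (
    "the model can answer questions "
    "the model can write essays "
    "the citations are not always real "
    "the model can hold a conversation"
)
words = corpus.split()

# Learn which word tends to follow which (a bigram table).
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate: start with a word, then repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(10):
    options = follows.get(word)
    word = random.choice(options) if options else random.choice(words)
    output.append(word)

print(" ".join(output))  # coherent-ish, with no notion of truth behind it
```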

What does this have to do with Michael Jackson’s glimmering renaissance at the Billboard Music Awards? It was one thing for the hologrammers to go to a famous artist’s estate and ask for permission to create a projection of him for a four-minute performance. It was primitively lifelike, but it was essentially a movie in three dimensions. Here in the present, there’s already chatter that people might want to begin saving voice recordings and videos of their loved ones, compiling their writings and texts, so that after they pass on, these data points can be put into a machine-learning model to create a digital facsimile of the person who has died. An MIT project on “Augmented Eternity and Swappable Identities” is already working on it. At minimum, it could be what Michael Sandel, the influential ethicist and political philosopher at Harvard University, calls a “virtual-immortality chatbot,” with which—with whom?—you could text back and forth. Call it UndeadGPT.
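
And nobody needs a research lab for a first pass. Here is a minimal sketch of what such a chatbot could look like, assuming the openai Python package (its pre-1.0 interface); the message file, the persona framing, and the model choice are all assumptions for illustration, not anyone’s actual product.

```python
# A minimal "virtual-immortality chatbot" sketch. Assumes the openai
# package's pre-1.0 interface and an API key in OPENAI_API_KEY.
# "moms_text_messages.txt" is hypothetical: years of saved texts.
import openai

with open("moms_text_messages.txt") as f:
    corpus = f.read()[:6000]  # crude trim to fit the model's context window

# Frame the persona with real examples of how she wrote.
system_prompt = (
    "You are imitating a specific person. Below are text messages she "
    "actually wrote. Match her tone, phrasing, and habits of mind.\n\n"
    + corpus
)

history = [{"role": "system", "content": system_prompt}]

while True:
    history.append({"role": "user", "content": input("you> ")})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("mom>", reply)
```

A serious version would add fine-tuning, cloned audio, maybe video. The unsettling part is how little of Sandel’s thought experiment requires anything beyond this.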

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split no matter which of two ways he frames it. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks whether they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching an ethical quandary more fundamental than the ones we usually hear about in discussions of AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.

We typed “death and resurrection as conceived by Andy Warhol pop artist” into Shutterstock’s AI image generator and got this.
Shutterstock

Leaving it to the inventors to decide whether they should do something simply because they can is certainly not an acceptable path forward. We can’t repeat the mistakes of the social-media era, when we became the product that companies were selling. Right now, the firms working on this are essentially self-regulating, but the townspeople ought to have a say in what goes down in Dr. Frankenstein’s basement.

That will mean a role for the dreaded Federal Government, and our aging fleet of senators will need to get up to speed quick. “We have had a bipartisan agreement since the Clinton and Gore administration to make a regulatory oasis around big tech and start-up companies,” says Rob Reich, a political scientist and the associate director of Stanford’s Institute for Human-Centered Artificial Intelligence. That’s going to have to change. We need industry and social standards that shape how technology develops, he says, “not gold stars for some companies and no stars for others.”

This is not remotely the first time we’ve faced a we-can-but-should-we moment as a society. Take gene editing, which Reich feels is a useful contrast. In medicine, we have the Hippocratic Oath, institutional ethics committees, and other long-standing, mostly unquestioned norms around biomedical research. “AI researchers are more like late-stage teenagers,” he says. “They’re newly aware of their power in the world, but their frontal cortex is massively underdeveloped, and their [sense of] social responsibility as a consequence is not so great.”

Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”

The stakes really are existential, and not just in the scenario where a superpowerful intelligence breaks free and threatens our species. When I raised the prospect of inaccurate virtual-immortality models, Sandel suggested that the real problems might arise as the models get more and more accurate. “The more the anomalies disappear,” he says, “the harder it will be to distinguish the virtual experience from an actual human encounter.”

Sandel calls this a problem of “authenticity,” not accuracy. If you cloned your dog, would they both be your dog? Or would you feel a creep up the back of your neck every time you looked at Sparky II? As these models approach carbon-copy perfection, we’ll be faced with the question of what separates us from our digital clones. Is it self-awareness? A capacity for empathy? Is it creativity—genuine inspiration—or are we all just, like ChatGPT, assembling words and concepts in different formations? Is it possible to imbue a chunk of code with any of the above, and what will it cost us to find out?

We need ethical guardrails for the coming gold rush, and not just because the profit motive here brings danger. Congress should think seriously about banning targeted advertising with AI, even with the First Amendment battle such a ban would unleash, but that’s just the start. We need far more comprehensive criteria that focus on human flourishing and the prevention of suffering, perhaps enforced by an entirely new department of the federal government. We are now delving into questions of what it means to be human, and it can’t be left to Silicon Valley to decide. If a digital facsimile of your lost loved one is talking to you, are they truly present? Is it closer to a home movie, or is it constantly becoming in a way we’d recognize as human? Is being human separable from the ephemeral nature of our existence? This debate may even spawn a new concept: the right to remain dead.

It’s all enough to make the holograms look like kid stuff. We’re talking now about a God Machine, and we can’t afford to sleepwalk through Genesis. After all, there was an attempt to stop Michael Jackson’s resurrection nine years ago, but it had nothing to do with any of these moral questions. One hologram company was suing another for patent infringement.


We typed “death and resurrection as conceived by Andy Warhol pop artist” into Shutterstock’s AI image generator and got the image at top.
