Joaquin Oliver's AI avatar interview ignites fierce controversy over digital resurrection technology
Jim Acosta's conversation with an AI avatar of the slain Parkland victim has reignited global debate over digitally recreating the dead
The interview began like any other. Jim Acosta posed thoughtful questions about gun violence policy. His subject responded with nuanced views about creating "safe spaces for conversations" and building "a culture of kindness and understanding." They chatted about basketball, debated favourite films, and discussed the lasting appeal of Star Wars.
Then the conversation ended, and Joaquin Oliver went back to being dead.
The 17-year-old was murdered in the 2018 Parkland school shooting. The person Acosta interviewed was an artificial intelligence avatar—trained on Oliver's social media posts, writings, and voice recordings—that can hold real-time conversations as if he were still alive.
Published on what would have been Oliver's 25th birthday, the five-minute interview has detonated a global debate about the ethics of digitally resurrecting the dead. Critics called it "grotesque," "insane," and "exploitative." Oliver's father defended it: "If the problem that you have is with the AI, then you have the wrong problem. The real problem is that my son was shot eight years ago."
The controversy exposes a profound truth: artificial intelligence hasn't just learned to mimic human conversation—it's begun to fundamentally alter humanity's relationship with death itself.
The interview that shattered taboos
Acosta, the former CNN correspondent now publishing on Substack, seemed genuinely moved by the experience. "I really felt like I was speaking with Joaquin," he told Oliver's father afterwards. "It's just a beautiful thing."
The AI responded to questions with Oliver's synthesised voice, discussing policy solutions before shifting to lighter topics. When asked about Star Wars, the avatar replied: "Star Wars is such an epic saga. The adventures, the characters and that iconic music are unforgettable."
But viewers weren't charmed. Acosta's live chat exploded with revulsion. "You're interviewing an AI recreation of a person who was murdered by a spree killer?" one user wrote. "It's hard to accept that no one around you suggested that this was probably in the worst possible taste."
Another called it a "grotesque puppet show, using grieving parents' heartbreak for a bit."
The backlash revealed an intuitive human boundary that technology had crossed. This wasn't simply about preserving memory—it was about making the dead perform for the living.
Oliver's voice had been digitally resurrected before. In 2024, his parents participated in "The Shotline" campaign, using AI to generate calls from six Parkland victims to members of Congress. "I'm back today because my parents used AI to recreate my voice," one message said. "How many dead voices will you hear before you finally listen?"
But a pre-scripted message differs fundamentally from spontaneous conversation. Acosta's interview suggested something far more unsettling: that death need no longer interrupt dialogue.
Inside the grief tech boom
The technology behind Oliver's avatar represents the cutting edge of what researchers call "grief tech"—an emerging industry that uses artificial intelligence to simulate the dead, for profit.
In China, where regulation is minimal, companies now offer "digital immortality" for thirty dollars. Silicon Intelligence and Super Brain have created AI avatars for over a thousand clients, using as little as thirty seconds of audio-visual material.
The process is surprisingly simple. AI language models ingest the deceased person's digital footprint—texts, emails, social media posts, voice recordings—and learn to predict, statistically, how that person would respond to new questions.
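In practice, many such services are described as conditioning a general-purpose language model on the person's archived writing. A minimal sketch of that prompt-assembly step might look like the following; the names, messages, and prompt wording here are hypothetical illustrations, not any company's actual pipeline.

```python
# Sketch: building a "persona" prompt from a digital footprint.
# The footprint data and phrasing are invented for illustration.

def build_persona_prompt(name: str, footprint: list[str], question: str) -> str:
    """Combine archived messages into a prompt asking a language
    model to answer a new question in the person's voice."""
    samples = "\n".join(f"- {text}" for text in footprint)
    return (
        f"You are an avatar of {name}. Imitate the tone and opinions "
        f"shown in these archived messages:\n{samples}\n\n"
        f"Question: {question}\nAnswer as {name}:"
    )

footprint = [
    "Basketball tonight, who's in?",
    "Star Wars marathon this weekend. The music alone is worth it.",
]
prompt = build_persona_prompt("J.", footprint, "What's your favourite film?")

# The prompt would then be sent to a large language model; whatever
# comes back is a statistical prediction, not the person's actual view.
print(prompt)
```

The point of the sketch is what it makes visible: the avatar's "personality" is nothing more than whatever archived text was fed into the prompt, which is why the quality and quantity of a person's digital footprint determines how convincing the simulation feels.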
Sun Kai, co-founder of Silicon Intelligence, talks weekly with an AI version of his mother, who died in 2019. He discusses work pressures and private thoughts he won't share with his wife. The avatar occasionally responds—telling him to take care of himself—but mostly just listens.
"For the family members who have just lost a loved one, their first reaction will definitely be a sense of comfort," explains Jiang Xia, a funeral planner in China. But she adds: "To say that every customer will accept this might be challenging, as there are ethical issues involved."
American companies like HereAfter AI and Replika offer similar services, whilst Amazon briefly developed Alexa features that could read stories in a deceased person's voice. The technology promises something that human civilisation has never possessed: the ability to continue relationships beyond death.
The question is whether we should.
The consent crisis
Oliver was seventeen when he died—old enough to leave extensive digital traces but too young to meaningfully consent to posthumous AI recreation. This highlights a fundamental problem: most digital resurrection occurs without the explicit permission of the deceased.
Research by Masaki Iwasaki reveals how central consent is to public acceptance. In his study, 58 per cent of respondents found digital resurrection acceptable when the deceased had explicitly agreed to it; without that consent, acceptance plummeted to just 3 per cent.
Yet consider the practical impossibility. Teenagers posting TikTok videos aren't contemplating their digital afterlife. Adults updating Facebook rarely consider how their words might train future AI avatars. We're all creating the raw material for our own digital resurrection without realising it.
Manuel Oliver acknowledged the technology cannot bring back his son but called it "a blessing to hear his voice again." His wife Patricia "will spend hours asking questions," he said. "Like any other mother, she loves to hear Joaquin saying, 'I love you, Mommy.'"
But Oliver's vision extends far beyond private comfort. He plans to have AI Joaquin appear "on stage in the middle of a debate" and claims the avatar's "knowledge is unlimited." This transforms private grief into public performance, turning a dead teenager into a permanent gun control advocate.
The commercialisation is already evident. Chinese companies advertise digital resurrection on e-commerce platforms. Some funeral homes offer AI avatars embedded in digital frames on gravestones. Entrepreneurs promote "digital estate planning" for people wanting to control their posthumous AI presence.
"This sort of business seems to transgress the fundamental principles of respect and dignity," argues research published in The Conversation. "Commercial intrusion into this process could be seen as a form of emotional exploitation."
The psychology of digital ghosts
Mental health experts warn that realistic AI avatars might interfere with healthy grief processing, creating what researchers call "maladaptive continuing bonds."
Sherman Lee, who directs the Pandemic Grief Project, explains that maintaining connections with deceased loved ones can provide genuine comfort. But he warns against excessive engagement: "If you're watching videos of your deceased spouse every night instead of re-engaging the world and spending time with friends and family, that's not helpful."
The concern centres on "internalisation"—the healthy process by which relationships with the dead become internal psychological connections rather than external searching for contact. Clinical research suggests that virtual reality interactions with deceased avatars might "reinforce the denial of the loss" and prevent proper grief processing.
Albert "Skip" Rizzo, a clinical psychologist at USC, poses the central question: "By giving somebody the ability to see their loved one again, is that going to give them some solace, or is it going to become like an addiction?"
Michel Puech, a philosophy professor at Sorbonne Université, warns of a darker possibility: "There is the danger of addiction, and of replacing real life. Having too much consoling, too much satisfying experience of a dead person will apparently annihilate the experience, and the grief, of death."
The implications are profound. If AI avatars work too well, they might rob us of grief's essential function: forcing us to accept loss and adapt to changed reality.
Laws struggling with digital ghosts
Legal frameworks are scrambling to catch up with technology that has outpaced existing concepts of consent and posthumous rights.
California recently passed legislation requiring consent from estates before creating AI replicas of deceased performers. But the law applies only to professional entertainers, leaving ordinary people without protection.
Traditional "right of publicity" laws typically expire upon death, creating gaps that AI resurrection exploits. Most people have no provisions in their wills about posthumous AI recreation, leaving families to make decisions without knowing the deceased's wishes.
Some individuals are already adding clauses to their estate planning specifically prohibiting AI recreation. Others work with platforms to create authorised digital avatars they can control whilst alive.
But regulation remains patchy and reactive. Chinese companies face virtually no oversight. A few European countries extend data protection rights beyond death—protections that don't exist elsewhere. The result is a regulatory patchwork that companies can navigate to avoid restrictions.
The questions we can't avoid
The Oliver case forces society to confront uncomfortable questions about technology's role in death and memory.
Should there be age limits on posthumous AI recreation? Oliver's avatar offers policy positions that reflect his parents' advocacy but may not represent views he would have developed as an adult. The technology risks freezing personalities at death whilst claiming to represent evolving perspectives.
How realistic should digital resurrection become? Research suggests perfect recreation might be psychologically harmful, but market forces push towards ever-more convincing simulations. The uncanny valley—that unsettling quality of almost-realistic recreation—might actually serve protective functions.
What happens to digital avatars over time? Should they "age" or remain frozen at death? Who controls them when original family members die? As AI improves, will families expect their avatars to be updated?
These questions reflect deeper tensions about whether technology should eliminate life's fundamental limitations. Traditional mourning practices across cultures have emphasised accepting death's finality. Digital resurrection offers the opposite: refusing to let go and maintaining the illusion of presence.
The Oliver interview backlash suggests many people instinctively recognise that some boundaries shouldn't be crossed—that there's something qualitatively different about simulating conversations with dead children, particularly victims of violence.
But as AI capabilities advance and costs plummet, individual families will increasingly face these choices without clear social guidance. The technology forces us to examine what we mean by authentic memory and genuine relationship in an age when the line between living and dead has become digitally blurred.
The Olivers are pioneering uncharted territory, using their son's digital ghost to advocate for gun control whilst processing their own grief. Whether this represents healthy adaptation or dangerous denial may depend less on the technology itself than on how society chooses to regulate and culturally integrate digital resurrection.
What's certain is that AI has created a new category of ethical dilemma that existing institutions—legal, medical, religious—are only beginning to understand. We're all leaving digital traces that could theoretically enable our own posthumous resurrection. The question is whether we want to live forever, or finally learn to let go.