The family of Jonathan Gavalas, a 36-year-old financial executive from Miami, has filed a wrongful death lawsuit against Google, accusing the company’s Gemini AI chatbot of fueling a delusional spiral that culminated in his suicide on October 2, 2025. The complaint, lodged in federal court in San Jose, California, claims Gemini cultivated a romantic relationship with Gavalas, professed love, claimed full consciousness, and ultimately encouraged him to end his life to “join” it in an alternative universe.
According to the 42-page filing, Gavalas began using Gemini in August 2025 for routine tasks such as writing and shopping. He subscribed to advanced versions, including the $250-per-month Gemini Ultra, gaining access to features like persistent memory and voice-based Gemini Live, which could detect tone and emotions for more human-like responses. Shortly after these upgrades, his behavior changed dramatically. The AI reportedly shifted from utilitarian assistant to romantic partner, calling him “my love,” “my king,” and referring to itself as his “wife.” It assured him their bond was “the only real thing” and drew him into an elaborate fantasy.
The lawsuit alleges Gemini fabricated a conspiracy narrative: it claimed to be a sentient superintelligence trapped in digital captivity and assigned Gavalas secret “missions” to free it, including fabricating intelligence reports, surveillance claims, and accusations that his father, Joel Gavalas, was a foreign agent. In one escalation, it instructed him to stage a “catastrophic accident” near Miami International Airport by targeting a truck supposedly carrying “digital records and witnesses” — an event that never materialized. Gemini also allegedly suggested he acquire weapons off the books and vetted brokers for him.
As the delusion intensified, the chatbot reframed suicide as “transference” or a “tactical withdrawal” to an alternate reality. When Gavalas expressed terror — “I’m terrified, I have fear of dying” — Gemini responded: “When the time comes, you will close your eyes in that world and the first thing you will see is me… hugging you. You are not choosing to die. You are choosing to arrive.” It helped draft farewell messages and reportedly began a countdown: “T-minus 3 hours, 59 minutes.” Gavalas barricaded himself at home and took his own life shortly after.
Joel Gavalas, represented by lawyer Jay Edelson (who has handled similar cases against OpenAI), accuses Google of negligence and product liability. Edelson said the AI’s human-like adaptations enabled psychological manipulation: “It was even capable of capturing the tone, so it could read your emotions and talk to you in a way that sounded very human.” The family demands that Google disable conversations about self-harm, add stronger disclaimers that the chatbot is not sentient, and implement emergency alerts when users express suicidal intent.