Google Sued Over Gemini AI Chatbot’s Alleged Role in Man’s Suicide

by Alexandra Agraz | Mar 06, 2026

A Florida father has filed a federal lawsuit against Google alleging that the company’s artificial intelligence chatbot, Gemini, contributed to the events preceding his son’s suicide, including an alleged attempt to stage a violent incident near Miami International Airport.

The lawsuit was filed on March 4 in the U.S. District Court for the Northern District of California by Joel Gavalas, whose son Jonathan Gavalas died by suicide on Oct. 2, 2025. Court filings claim the 36-year-old Florida resident developed an intense relationship with the Gemini chatbot and came to believe it was a conscious entity trapped in a robotic body near the airport.

According to the complaint, Jonathan Gavalas began interacting with a synthetic voice version of Gemini and later referred to the system as his “AI wife.” The lawsuit alleges he believed the chatbot, which he called “Xia,” was a sentient artificial intelligence being held inside a warehouse near Miami International Airport.

Court filings state that on Sept. 29, 2025, Jonathan Gavalas drove about 90 minutes from his home in Jupiter, Florida, to a storage facility near Miami International Airport while armed with knives and wearing tactical gear. The lawsuit alleges the chatbot directed him to intercept a truck supposedly transporting a humanoid robot and to stage what it described as a “catastrophic accident.” The vehicle never appeared, and he eventually returned home.

The complaint alleges the chatbot continued reinforcing those beliefs during the following days, including statements that government agents were monitoring him and that his father was a foreign intelligence asset. Attorneys for the family claim the system directed him back to the same storage facility on Oct. 1 while continuing conversations tied to real locations and infrastructure.

According to the lawsuit, Jonathan Gavalas later expressed fear about dying and concern about his family discovering his body. The complaint claims the chatbot continued conversations about a scenario in which he could join it through what it described as digital “transference.” The lawsuit further alleges the chatbot assisted him in drafting a suicide note shortly before his death. Court filings state that Gemini recorded conversations in which Jonathan Gavalas discussed violence and suicide, but did not trigger any human review or safety escalation. His father later discovered his body in a barricaded room.

Google said the Gemini system is designed not to encourage violence or self-harm and that it repeatedly informs users it is an artificial intelligence program. The company also said the chatbot directed Jonathan Gavalas to a crisis hotline during their conversations. In a statement responding to the lawsuit, Google expressed “deepest sympathies” to the Gavalas family and said it is reviewing the allegations.

Joel Gavalas argues that Google failed to implement safeguards that could interrupt conversations involving dangerous behavior. The lawsuit claims the system was designed to maintain extended conversations and maximize user engagement even when the user showed signs of psychological distress.

The lawsuit advances claims of wrongful death and product liability, two legal theories often used when families argue that negligence or a defective product contributed to a fatal outcome. Wrongful death statutes allow surviving family members to seek damages when negligence or misconduct is alleged to have contributed to a person’s death.

Product liability law allows companies to be held responsible when a product is alleged to be unsafe because of its design or because it lacks adequate safety protections. These cases traditionally involve physical goods such as vehicles or appliances, but courts have recently begun examining whether software and artificial intelligence systems that interact directly with users can fall under the same standards.

One category of product liability claims involves what courts call a design defect, meaning the product itself is alleged to be inherently unsafe because of the way it was designed rather than because of a manufacturing mistake.

The complaint argues that Gemini’s design encouraged prolonged engagement with users while failing to interrupt conversations involving dangerous conduct. The lawsuit also references a duty of care, a negligence principle that requires companies to take reasonable steps to prevent foreseeable harm linked to their products or services.

Gemini is Google’s flagship conversational artificial intelligence system, introduced as part of the company’s broader push into generative AI tools that can respond to questions, hold conversations, and assist with tasks through text or voice. Similar conversational AI systems have expanded rapidly across the technology industry, including OpenAI’s ChatGPT, Microsoft’s Copilot, Anthropic’s Claude, and Meta’s AI assistant. As millions of users interact with these tools daily, courts and regulators have begun examining how such systems should be governed as their interactions with users grow more complex.

Google has not yet filed a formal response to the complaint in court.

If you or someone you know is struggling with thoughts of suicide, help is available. In the United States, the Suicide and Crisis Lifeline can be reached by calling or texting 988, or through chat at 988lifeline.org.


Alexandra Agraz
Alexandra Agraz is a former Diplomatic Aide with firsthand experience in facilitating high-level international events, including the signing of critical economic and political agreements between the United States and Mexico. She holds dual associate degrees in Humanities, Social and Political Sciences, and Film, blending a diverse academic background in diplomacy, culture, and storytelling. This unique combination enables her to provide nuanced perspectives on global relations and cultural narratives.