Families of victims killed in one of Canada’s deadliest recent mass shootings have filed lawsuits in the United States against OpenAI and its chief executive, Sam Altman, accusing the company of failing to act on warning signs ahead of the attack.
The cases, lodged in federal court in San Francisco, stem from the February shooting in Tumbler Ridge, Canada, in which nine people, including several children, were killed.
According to the lawsuits, OpenAI’s systems flagged troubling interactions involving the attacker months before the incident.
The plaintiffs claim internal assessments identified the individual as posing a credible threat, but no alert was sent to law enforcement.
Lawyers representing the families argue that the company failed to act because alerting authorities could have exposed the scale of harmful conversations on its platform and potentially affected its business trajectory. They are seeking damages and court-ordered changes to how the company handles safety risks.
At the center of the case is an 18-year-old suspect whose online exchanges allegedly included detailed violent scenarios.
The lawsuits claim that OpenAI’s safety team recommended contacting authorities after reviewing the conversations, but that recommendation was ultimately overruled by senior leadership.
The suspect later carried out a series of attacks, first targeting family members at home before going to a former school, where multiple victims were killed and others injured. The attacker later died by suicide, according to police.
One of the plaintiffs is a young survivor who remains in critical condition after sustaining multiple gunshot wounds.
In response, OpenAI described the shooting as a tragedy and said it has a strict policy against the use of its tools to support violence.
The company noted that it had already strengthened its safeguards, including improving how it detects potential threats, responds to distress signals, and connects users to mental health resources.
The company also said it reports situations to law enforcement when there is clear evidence of an imminent and credible risk, with input from safety and mental health experts in more complex cases.
However, the lawsuits argue those safeguards were not applied effectively in this instance. They claim the suspect’s account was deactivated but that the individual was able to return to the platform and continue harmful activity.
The legal action is part of a broader wave of cases testing how far responsibility extends for artificial intelligence companies when their platforms are misused. Courts are increasingly being asked to decide whether tech firms can be held accountable for real-world harm linked to user interactions.
Legal experts say the outcome could have far-reaching implications, particularly around how companies balance user privacy, platform safety, and the obligation to report potential threats.
OpenAI has denied similar allegations in other cases, often pointing to users’ personal histories and actions outside the platform.
Still, this lawsuit is among the first to directly connect an AI chatbot to a mass-casualty event, raising difficult questions about where responsibility begins and ends in the age of artificial intelligence.