
OpenAI Staff Raised Internal Warnings Before Deadly Canada School Shooting

21 February 2026

Months before an 18-year-old suspect carried out one of the deadliest school shootings in Canadian history, employees at OpenAI were alarmed by violent content in a user’s interactions with the company’s ChatGPT chatbot and debated whether to involve law enforcement, according to internal discussions revealed in recent news reports.


In June 2025, the user, later identified as Jesse Van Rootselaar, engaged over several days in conversations with ChatGPT that included descriptions of gun-related scenarios and violence. The company’s automated review systems flagged those interactions for further review by human moderators, sparking concern among staff about possible real-world risk. Some employees urged leadership to notify Canadian police about the troubling material, but senior OpenAI officials ultimately decided against doing so at the time, concluding that the content did not meet the company’s threshold of “credible and imminent risk” that would justify outside intervention.


OpenAI’s internal debate and decision not to report to law enforcement occurred against a backdrop of broader industry and societal questions about how artificial-intelligence platforms should balance user privacy with public-safety responsibilities. The company banned Van Rootselaar’s account in June 2025 for violating usage policies related to violent content, but it did not notify the Royal Canadian Mounted Police until after the events of February 10, 2026. On that day, authorities say, Van Rootselaar, an 18-year-old trans woman, killed eight people in Tumbler Ridge, British Columbia, including students and school staff at a local secondary school, before dying from an apparent self-inflicted injury at the scene.


The incident has prompted intense scrutiny of OpenAI’s safety protocols and decision-making processes, as well as renewed debate about the responsibilities of tech companies whose AI products are used by millions of people worldwide. The June 2025 interactions were automatically flagged by ChatGPT’s moderation systems as involving violent content, and internal reviewers interpreted aspects of the conversations as potentially concerning. A group of about a dozen employees debated whether to escalate to law enforcement, with at least some arguing that the behaviour might signal a threat beyond the digital environment. Leadership ultimately judged that the conversations did not rise to the level that would require a report under the company’s internal standards, which are designed to weigh risks against user privacy considerations and the potential harms of unnecessary police involvement.


After the shootings, OpenAI contacted investigators at the RCMP and said it would cooperate with their inquiries into the tragedy and the user’s online footprint. Police in Canada have confirmed that they are reviewing digital evidence, including social-media activity and online interactions, as part of their broader investigation into the shooting. The case has drawn attention not only in Canada but around the world because it touches on how emerging technologies detect and respond to signals of potential violence, what role human moderators should play in escalating threats, and how platforms should navigate legal and ethical frameworks that differ across jurisdictions.


Critics of OpenAI’s handling of the situation argue that early notification to authorities might have offered additional context for law enforcement, possibly contributing to preventative efforts. Supporters of the company’s approach emphasise the challenges of interpreting digital text for intent and the dangers of over-reporting, which could erode user trust and chill legitimate private conversation. The debate has prompted broader calls for clearer standards and potential regulatory guidance to govern how AI companies should respond when their systems detect hostile, violent or otherwise concerning intent in user interactions.


As investigations continue into the Tumbler Ridge shooting and the role that digital footprints may have played in understanding the suspect’s behaviour, OpenAI’s internal deliberations are likely to remain a focal point of discussions about AI safety, user privacy and corporate accountability in an age when powerful technologies increasingly intersect with social and public-safety concerns.
