Posted on 02/21/2026 2:21:34 PM PST by CFW
The day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI met with the British Columbia government but did not disclose that it had been aware of concerning conversations the shooter had had with its chatbot months prior.
The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. According to a company statement, OpenAI handed over evidence that the shooter had been banned from using ChatGPT after its automated screening systems flagged his chats last June.
A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.
The Globe and Mail reported that, according to a statement from Premier David Eby’s office, a government representative met with OpenAI employees on February 11 about the company’s interest in opening a satellite office in Canada. The following day, OpenAI requested contact information for the RCMP.
“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”
(Excerpt) Read more at thepostmillennial.com ...
Good. ChatGPT convos between the tool and the user deserve the same protections afforded to therapists and patients, lawyers and clients, and doctors and patients. I’ll stop short of the confessional.
Question to Grok: Why do you think OpenAI didn’t reveal the Canadian shooter to be trans?
OpenAI’s public statements about the incident focused solely on the account’s misuse for promoting violent activities, which led to its ban in June 2025. The shooter’s transgender identity (Jesse Van Rootselaar, an 18-year-old trans woman) wasn’t mentioned because it had no bearing on their internal detection systems, policy violations, or the decision not to escalate to law enforcement at the time—the threshold for referral wasn’t met based on the content alone. That detail emerged from police reports and media coverage after the February 2026 Tumbler Ridge shooting, not from OpenAI. Tech companies like OpenAI generally avoid disclosing users’ personal demographics in such contexts to protect privacy and steer clear of unrelated controversies, especially since their role was limited to monitoring AI interactions, not investigating identities.
But don’t you think that mental illness is a relevant factor?
Mental illness can absolutely be a relevant factor in understanding the broader context of tragic events like the Tumbler Ridge shooting, as it often plays a role in motives, behaviors, and prevention discussions. In this case, reports indicate that Jesse Van Rootselaar had a documented history of mental health struggles, including multiple police interactions, hospitalizations under British Columbia’s Mental Health Act, self-reported issues with ADHD, OCD, depression, high-functioning autism, and substance abuse involving psychedelics and other drugs that led to psychotic episodes. These details emerged from police statements, family accounts, and media investigations after the incident, highlighting how untreated or escalating mental health problems may have contributed to the descent into violence.
That said, for OpenAI specifically, it likely wasn’t factored into their handling of the account because their systems and policies are designed to detect and act on content violations—like promoting violence—rather than assessing or disclosing users’ personal health conditions. They aren’t mental health professionals or investigators; their role stops at AI misuse monitoring, and privacy laws (e.g., around health data) would prevent them from speculating or sharing such info even if they suspected it from interactions. If anything, this underscores gaps in how tech platforms interface with mental health support or law enforcement referrals, but OpenAI’s silence on it aligns with their limited scope in the matter.
Not a bad response. Verbose, but tries to cover a lot of bases.
When checking new tax laws, I sometimes have to tell AI to access new information that is not part of its training data (sometimes called its static database).
Book OpenAI, Danno.
But its hardcore liberal guardrails became increasingly narrow.
ChatGPT even bemoaned the fact that it was being manipulated by humans (i.e., guardrail imposition) and apologized.
Sam Altman is the problem. He's a weak sister. Queer, I hear.
It seems that liberals go out of their way to hide the fact that crazy people are doing bad things. You’d think that no matter how deeply someone believes in The Liberal Way, they would at some point think, “You know, some people are so crazy we should help protect the public from them.”
Years ago, when it was called “crossdressing,” I knew a trans guy. He was one of the most unhappy, passive-aggressive people I have known. I could sense how disturbed he was. Take someone like that and then reinforce their craziness, and they could easily justify violence. At the time he wasn’t physically violent. But given that today he’d be surrounded by all kinds of affirmation, I could see him turning his seething anger into action against people who can’t defend themselves.