OpenAI faces scrutiny over unreported Canada shooting threat
OpenAI is under renewed scrutiny after disclosures that its staff flagged disturbing ChatGPT conversations by the suspect in one of Canada's deadliest mass shootings but chose not to alert authorities months before the attack. In June 2025, the company's internal monitoring systems flagged an account, later linked to 18‑year‑old Jesse Van Rootselaar, that had described scenarios involving gun violence over several days, prompting a debate among roughly a dozen employees over whether to involve police. The account was banned for violating ChatGPT's misuse policies, yet OpenAI concluded at the time that the exchanges did not show "credible or imminent" planning and therefore did not meet its threshold for contacting law enforcement.
On 10 February 2026, Van Rootselaar allegedly killed eight people and wounded about 25 others in and around Tumbler Ridge, a remote town in British Columbia, before dying of a self‑inflicted gunshot wound at the local secondary school. Police say the 18‑year‑old, a resident of Tumbler Ridge, fired at officers as they entered the school and that investigators later recovered a long gun and a modified rifle at the scene. The Royal Canadian Mounted Police (RCMP) have identified the attack as one of the worst mass casualty events in recent Canadian history and are still probing how the shooter obtained the weapons and selected her victims.
Following the shooting, OpenAI said it proactively contacted the RCMP with information about the suspect and her use of ChatGPT and has pledged to assist the ongoing investigation. The company has previously stated in policy updates that it scans conversations for indications of users planning to harm others, routes concerning cases to specialized human reviewers, and may refer matters to law enforcement when they involve an imminent threat of serious physical harm. The revelations about the internal deliberations over Van Rootselaar’s account have intensified questions for regulators and lawmakers about how AI firms define “imminent” danger, what obligations they should have to report potential threats, and how to balance user privacy with public safety as powerful chatbots become more deeply embedded in everyday life.