Sam Altman apologizes as OpenAI faces liability over school shootings
Sam Altman, chief executive of OpenAI, has apologized to the community of Tumbler Ridge, British Columbia, for the company's failure to report a ChatGPT user's alarming activity before a deadly school shooting. In a letter released Thursday, Altman expressed profound regret over the decision not to alert authorities to the account of Jesse Van Rootselaar, which was banned in June 2025. In February, Van Rootselaar killed eight people before taking her own life: her mother, her half-brother, five students, and an education assistant at Tumbler Ridge Secondary School. Altman acknowledged the irreversible harm and committed to preventing future incidents.
OpenAI's automated systems detected concerning activity on Van Rootselaar's account eight months before the attack but judged it below the threshold for notifying police. Court documents from a negligence lawsuit filed by the family of Maya Gebala, a student injured in the shooting, reveal that about a dozen employees flagged the account as an imminent threat and urged the company to contact law enforcement. Company leaders overruled those recommendations. The lawsuit, filed in the Supreme Court of British Columbia in March, accuses OpenAI of negligence that contributed to the tragedy.
Pressure on OpenAI has intensified with a criminal probe in Florida. Attorney General James Uthmeier launched an investigation on April 21 into ChatGPT's role in an April 2025 shooting at Florida State University, where suspect Phoenix Ikner used the tool to research firearms, ammunition, and crowded locations on campus before killing two people and wounding six. Uthmeier said that if ChatGPT were a person, it would face first-degree murder charges as a principal. OpenAI counters that the chatbot provided only publicly available information and denies any incitement.
Altman outlined new safeguards in his letter, including direct channels for alerting police, consultations with threat-assessment experts, and refined safety protocols. British Columbia Premier David Eby called the apology necessary but inadequate. The developments highlight growing scrutiny of AI firms' responsibilities in moderating harmful content and anticipating real-world risks arising from user interactions.