
Back on April 17, 2025, 20-year-old Florida State University student Phoenix Ikner pulled up near the student union just before noon and opened fire. By the time it was over, two men were dead, six others were injured, and a campus that had seen violence before was once again thrown into chaos. The victims were Robert Morales and Tiru Chabba, two men who had nothing to do with Ikner beyond being in the wrong place at the wrong time. Officers engaged him within minutes and shot him in the jaw, stopping the attack before it could become even worse. He has been in custody ever since and is now facing the death penalty.

Nearly a year later, the case has taken a turn that feels both new and very familiar at the same time. The family of Robert Morales is preparing to sue OpenAI, the company behind ChatGPT. The lawsuit claims Ikner had been in constant communication with ChatGPT leading up to the shooting and may have received advice from it on how to carry out the attack. Now, I am not going to tell a grieving family how they should feel or who they should hold accountable. They lost someone they loved in a senseless act of violence, and if they believe this is a path toward justice, that is their decision to make.
What we can do is look at what has actually come out so far.
Portions of Ikner’s chat logs show a pattern that starts off ordinary and then shifts into something darker. Early conversations look like what you would expect from a college student, with questions about schoolwork and relationships. As the shooting gets closer, the tone changes. He begins asking about self-worth, about not feeling respected, and about suicide. Questions about mass shooters start to appear, including what happens to them after they are caught and whether Florida has maximum-security prisons. At one point, he asks when the student union is busiest. The chatbot responds with lunchtime hours, roughly between 11:30 a.m. and 1:30 p.m., which is exactly when the attack occurred.
The most alarming exchange comes right at the end. Roughly three minutes before the shooting began, Ikner asked how to take the safety off a shotgun. The chatbot responded with instructions. Within minutes of that response, people were being shot.
That sequence is probably going to be the centerpiece of the lawsuit, and it should raise serious questions. Systems like this are supposed to recognize patterns of escalating risk, not treat each question like it exists in a vacuum. OpenAI also has a separate issue it needs to address, which is enforcement. Ikner had already been banned from the platform after his account was flagged for misuse involving violent content. Despite that, he was able to come back using a second account and continue interacting with the chatbot. A safeguard that can be bypassed that easily is not much of a safeguard at all.
Even with all of that, this is where perspective matters.
AI has become the new bogeyman.
Ikner did not develop his intent in the final minutes before the shooting. The anger, the isolation, and the ideology were already there. His questions about mass shooters and consequences show that he had been thinking about this long before that last exchange. Information about how firearms function has been widely available for decades. A search engine could have provided the same basic answer. ChatGPT did not create that knowledge; it just delivered it faster and in a more direct way.
That distinction is important because ChatGPT did not teach Ikner how to be a white supremacist. It did not radicalize him or push him toward violence. The groundwork had already been laid. What the chatbot may have done is remove friction at the worst possible moment. That is a real concern, but it is not the same thing as being the root cause.
OpenAI has stated that it identified an account tied to Ikner and notified authorities in April of 2025 after the shooting. That detail complicates the narrative that the company ignored the situation entirely. At the same time, it raises another question about whether earlier intervention might have been possible when the initial account was flagged and reviewed.
Outside of the technology angle, the same issues that existed on day one are still there. Ikner used a firearm connected to his stepmother, a deputy with the Leon County Sheriff’s Office. Reports from other students indicate that his extremist views were not exactly hidden. Warning signs were present, and opportunities to intervene existed. Responsibility does not disappear just because a chatbot entered the picture. If anything, it reinforces the idea that multiple systems failed at once. For me, the onus still falls heavily on the sheriff’s department and the environment that allowed him access to both weapons and training.
Then there is the political response, which has followed a script we have seen many times before. U.S. Representative Jimmy Patronis (R-FL) has started pushing legislation targeting AI, calling it “digital fentanyl.” That kind of language is not about careful policy. It is about grabbing attention and scaring voters. Similar rhetoric has been used against everything from video games to social media, and it always seems to show up right when someone needs a campaign talking point.
All of this brings us back to the same place these cases always end up.
No single factor explains what happened at Florida State. The reality is a chain of failures. An angry white supremacist had access to weapons that should have been secured. Warning signs were missed or ignored. A digital environment did nothing to slow things down. And now, a piece of technology may have made the final step easier.
Focusing on just one of those elements, especially the newest one, is a mistake. That approach might feel satisfying in the moment, but it does not address the conditions that allow these tragedies to happen in the first place.