
Kash Patel has really settled into a kind of storytelling rhythm lately, and it goes a little something like this: something bad almost happens, the FBI is somehow involved just in time, and artificial intelligence gets the credit for saving the day. It sounds impressive if you don’t look too closely, which is probably the point.
He’s been pointing to two cases in particular, one in North Carolina and another in New York, as proof that AI is now stopping school shootings before they happen. On paper, that’s a strong claim. In practice, it starts to fall apart the moment you compare it to what actually happened on the ground.
Let’s take the North Carolina story first. Patel frames it as an AI-assisted intervention, a kind of high-tech triumph where the system flagged danger, sorted through tips, and prevented a massacre.
What Patel leaves out is that the Raleigh field office did the heavy lifting, and the situation itself was about as subtle as a brick through a window. The suspect was posting violent intent all over Instagram. This wasn’t some deeply buried signal hidden in terabytes of encrypted chatter. It was a teenager broadcasting his plans in public while collecting firearms and ammunition like they were Pokémon. If AI helped ‘triage’ anything there, it was reading what any human could already see with a quick scroll and a working pair of eyes.
Speaking of working eyes, I’m sure you’ve heard the joke about Kash Patel’s eyes. I’ve seen it in a bunch of places, but it was told to me first by Lady Gray. The joke is that Kash Patel has mortgage eyes. One’s fixed and one’s variable. But I digress.
The New York story is even worse once you take a closer look. Patel credits a parent for alerting authorities, then folds AI into the response as if it were the deciding factor. But the actual discovery came from a parent noticing concerning behavior on social media and calling the police. That’s not a machine detecting patterns across vast datasets. That’s a human being paying attention to their kid’s environment and acting before things escalated. The technology, if it was involved at all, came in after the fact as a support tool, not as the central actor in stopping anything.
Meanwhile, there are other cases that don’t get neatly folded into Patel’s victory lap. Evergreen High School is one of them. That was a situation where the FBI actually had a tip, opened an assessment, and still failed to identify the online accounts tied to a student who was already well down the path of violence. There was no AI miracle stepping in to clean up the gaps in analysis or accelerate identification. The system simply didn’t connect the dots in time, and two students were shot before the shooter killed himself. If this is the environment where AI is supposedly ‘stopping’ school shootings, you have to ask why it didn’t stop things at Evergreen.
Brown University is another example that doesn’t fit the polished Patel narrative. There, we saw a premature announcement that a suspect had been caught, confusion about who that suspect even was, and a shooter who remained at large during the critical early hours. The story wasn’t about a smart system preventing tragedy. It was about ego driving an announcement that had to be walked back. No AI stepped in to stabilize the situation or correct the record. It was chaos, followed by cleanup, followed by attempts to control the narrative after the fact.
And now, we have a very different kind of reaction coming out of Florida. The family of a victim of the Florida State University shooting is suing OpenAI, arguing that ChatGPT played a role in facilitating the attacker’s final steps. That case, regardless of where it ultimately lands legally, highlights something important. We are simultaneously being told that AI is powerful enough to prevent shootings while also being told it is influential enough to contribute to them. It somehow exists as both savior and scapegoat depending on which direction the political wind is blowing that week. Schrödinger’s chatbot, if you will.
That contradiction doesn’t get resolved by slogans or by calling AI ‘digital fentanyl’ either, as some Florida lawmakers have started doing. It especially doesn’t get resolved by pretending that a triage tool or chatbot interface is suddenly the central pillar of national security infrastructure.
What’s actually happening here is simpler and a lot less impressive. AI is being inserted into a messy, already fragile system of reporting, investigation, and response. It’s retroactively credited when things go right while quietly disappearing from the explanation when things go wrong. It becomes a convenient narrative device rather than a reliable tool with a real impact.
And that’s where Patel’s framing starts to fall apart. If AI is truly stopping school shootings, then it should be visible in the hardest cases. You know, the ones where warning signs were ambiguous, where threats were indirect, and where human analysts struggled to connect fragments of information. Instead, it keeps showing up in stories where parents called 911 or where suspects were openly posting violent threats.
The truth is that AI isn’t anywhere close to being a standalone solution for preventing school shootings. It is not preventing these events in any meaningful, consistent, or independently verifiable way.
So when Patel says he’s using AI “everywhere,” and that it’s stopping mass shootings before they happen, it’s hard not to hear it for what it really is: a political talking point steeped in gaslighting.
Because if this is what “AI stopping school shootings” looks like, then we’re not witnessing a revolution in prevention. We’re watching marketing trying to catch up to a reality that doesn’t exist.
(Source)