There’s a particular quality to high-resolution military drone footage that civilians never quite prepare themselves for. The images are crisp. The colors are accurate. You can see faces, or what remains of them. You can watch someone wave their arms for help, and you can watch them stop.
On September 2nd, 2025, two people spent 41 minutes in the Caribbean Sea trying not to die. They’d been on a go-fast boat (one of those cigarette-style speedboats the drug trade favors) with nine others. A missile strike had killed those nine and split the boat in half. The survivors clung to floating debris. They moved. They waved. They were, by any reasonable definition, alive.
The U.S. military watched all of this. High-resolution feed, real-time transmission, two human beings reduced to pixels on someone’s screen. Then, at 11:41 AM, a second missile turned them into a matter of cleanup.
I’m not a military lawyer. But I’ve read the Geneva Conventions. I’ve read the Pentagon’s own Law of War Manual, which states plainly that persons “incapacitated by wounds, sickness, or shipwreck are in a helpless state” and that “it would be dishonorable and inhumane to make them the object of attack.” The United Nations condemned the strike. Human rights organizations condemned it. Former Judge Advocates General, the military’s own legal corps, called it “clearly illegal.”
None of this is ambiguous. Two unarmed people in the water cannot fight, and because they cannot fight they are protected. Striking them is a war crime.
What draws me to write about this today isn’t just the crime itself. War crimes, unfortunately, happen. And when they do, there’s always paperwork. Someone signs off, someone gets court-martialed, someone goes to prison. There’s a ritual to these things, a choreography of accountability that allows everyone to shake their heads and say the system works. What’s kept me awake for four months is this: we cannot establish who decided to kill them.
Six Stories
The Pentagon’s story about September 2nd has changed six times.
For nearly three months, September through late November, it was simple: one strike, routine counter-narcotics operation, nothing to see here. Then CNN reported there were actually two strikes. Defense Secretary Pete Hegseth responded that yes, he’d watched the first strike live, but had moved on to other meetings before the second one. That same day, the Washington Post reported military sources saying Hegseth had given an order after seeing the survivors: “eliminate everyone” or “kill them all.”
The Pentagon called this “fabricated, inflammatory, derogatory.” Hegseth, it insisted, absolutely did not say that.
Early December brought a new version. Admiral Frank Bradley, the Special Operations Command commander, had made the call independently. He’d consulted Colonel Cara Hamaguchi, a military attorney at Fort Bragg, who reviewed the situation and determined the strike was lawful because the survivors were “salvaging drugs.”
That justification is legally meaningless. Under the Second Geneva Convention, it is irrelevant what shipwrecked persons are doing in the water. It doesn’t matter what they’re floating near, what they could theoretically be salvaging, whether they’re clutching cocaine or rosary beads. The only question that matters under international law is whether they can still fight. Two unarmed people clinging to wreckage after their boat was destroyed cannot fight. “Salvaging drugs” isn’t an exception to Geneva protections. It’s not a loophole. It’s a justification that has no basis in the law of armed conflict. Any military attorney would know this.
Which raises a question: did that call actually happen?
Because it’s been 142 days, and the Pentagon has not produced a single piece of documentation showing that Bradley called Hamaguchi. No call log. No written legal opinion. No memorandum explaining the legal reasoning. If you’re a military commander about to order a strike that kills two people who survived the first attack, you document that. You get it in writing. It protects you legally, protects your JAG officer, creates an official record. Where is that record?
Then December 4th. Admiral Bradley testified before Congress under oath and admitted something that destroyed his own narrative: the survivors had no communication devices. No radio, no phone, nothing. His earlier claim, that they might be signaling for help or calling for backup, described something physically impossible.
Six versions of what happened. Six different explanations for who decided and why. Each one replacing the last with no corrections, no clarifications. I’ve watched people lie. I’ve watched people misremember. I’ve watched people try to cover for bad decisions made too quickly. This doesn’t look like any of that. This looks like a system trying to figure out what story to tell when the truth is something it can’t explain.
Already Automated
Let me tell you what the Pentagon is already using.
The main AI targeting system is called Project Maven. Built by Palantir and funded to the tune of about $1.8 billion in contracts, Maven analyzes drone footage and sensor data in real-time to help commanders make targeting decisions. It watches what the drones see, identifies objects and people, characterizes their behavior, and generates recommendations.
In February 2024, Bloomberg reported on Maven’s accuracy rate. In tests with the 18th Airborne Corps, Maven correctly identified objects about 60% of the time, while human analysts managed 84%. Put another way: Maven is wrong about 40% of the time.
In April 2024, Vice Admiral Frank Whitworth, head of the National Geospatial-Intelligence Agency, testified before the Senate Armed Services Committee that Maven “decreased targeting workflow timelines from hours to minutes, from sensing to target engagement.” Hours to minutes. That’s not a marginal improvement. That’s a fundamental restructuring of the relationship between observation and violence.
In May 2025, four months before September 2nd, Shield AI and Palantir announced a partnership. On their website, they described the capability: “directing autonomous systems to track, monitor, or strike as needed.” Strike. They used that word. “This is not a futuristic concept,” they wrote. “It’s a real, operationally relevant workflow that our teams at Shield AI and Palantir are enabling today, with fielded systems.”
Fielded means deployed. Fielded means operational. Fielded means the thing that watches people in the water and decides whether they live or die might not be a person at all.
The 2am Contract Hunt
I started looking for patterns in January. It was 2 AM, I was irritated about something Hegseth had announced about Grok, and I did what I do when I’m irritated: I started pulling contracts.
Defense contract databases are publicly accessible if you know where to look: HigherGov, federal procurement records, SBIR archives. I wanted to see what AI systems the Department of Defense had been buying, when, and from whom.
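If you want to replicate that kind of search, here’s a minimal sketch against USAspending.gov’s public API. The endpoint is real, but the filter and field names here are my reading of their published docs at the time of writing; treat it as a starting point, not gospel.

```python
# Minimal sketch: look up a federal contract by its award ID (PIID) via
# USAspending.gov's public v2 search API. The filter schema and field
# names are assumptions from the published docs; verify before relying
# on the output.
import requests

SEARCH_URL = "https://api.usaspending.gov/api/v2/search/spending_by_award/"

payload = {
    "filters": {
        "award_ids": ["FA864925P0337"],            # the WARROOM contract
        "award_type_codes": ["A", "B", "C", "D"],  # federal contract types
    },
    "fields": ["Award ID", "Recipient Name", "Award Amount",
               "Start Date", "Awarding Agency"],
    "limit": 10,
}

resp = requests.post(SEARCH_URL, json=payload, timeout=30)
resp.raise_for_status()
for award in resp.json().get("results", []):
    print(award)
```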
On August 7th, 2025, the Air Force Research Laboratory awarded a contract to Firestorm Labs Incorporated out of San Diego. Contract number FA864925P0337. $1,249,850. The stated purpose: “WARROOM - Artificial Intelligence Enhanced Digital Mission Rehearsal and Operational Planning Platform.” AI software for planning military missions. Digital rehearsal means running simulations before execution. Operational planning means mapping out targets, timing, expected outcomes. This is playbook software, what you use to plan a strike campaign before it starts.
And the very next day, August 8th, 2025, President Trump signed an executive order authorizing military action against drug cartels.
The AI mission planning contract was signed the day before the President authorized strikes.
On September 2nd, 2025, twenty-five days later, eleven people died in the Caribbean.
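The timeline is easy to verify; the arithmetic is trivial:

```python
from datetime import date

contract_award = date(2025, 8, 7)   # WARROOM contract signed
executive_order = date(2025, 8, 8)  # strike authorization signed
first_strike = date(2025, 9, 2)     # eleven dead in the Caribbean

print((executive_order - contract_award).days)  # 1
print((first_strike - executive_order).days)    # 25
```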
Now look, timing alone isn’t proof of anything. Contracts get signed, authorizations get issued, operations commence. That’s how governments work. But when I went looking for WARROOM’s Phase I contract, the proof-of-concept phase that precedes every Phase II development contract, I couldn’t find it.
Every SBIR contract follows the same progression: Phase I proves the concept, Phase II builds it out, Phase III deploys it. You can’t simply skip Phase I; even the Pentagon’s Direct-to-Phase-II track requires a company to show that Phase I-equivalent feasibility work was already completed with other money. I searched every public database: SBIR.gov, the Defense Technical Information Center, federal procurement records, and Air Force SBIR archives. There is no Phase I contract for WARROOM anywhere in public records.
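Here’s roughly what that search looks like against SBIR.gov’s public awards API. I’m assuming the endpoint and field names from the site’s documentation, so double-check them; the point is that anyone can reproduce the query.

```python
# Minimal sketch: pull a firm's SBIR/STTR award history and look for a
# Phase I. Endpoint and field names are assumptions from SBIR.gov's
# public API docs; adjust if the schema has changed.
import requests

API = "https://api.www.sbir.gov/public/api/awards"

resp = requests.get(API, params={"firm": "Firestorm Labs", "rows": 100},
                    timeout=30)
resp.raise_for_status()
awards = resp.json()

phases = [a.get("phase") for a in awards]
print(phases)  # a missing "Phase I" here is what you'd cross-check
               # against FPDS and DTIC before drawing conclusions
```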
Either Phase I is classified, which would mean this system was considered so sensitive they hid even the proof-of-concept phase, or someone funded WARROOM’s development privately before the government contract appeared. Either way, WARROOM wasn’t being built in August 2025. It was ready. The contract was paperwork for something that already existed.
Who’s Behind Firestorm Labs?
The people who built Firestorm Labs are exactly the people you’d expect to build AI systems for special operations.
The CEO, Dan Magy, spent 2012 to 2016 as an Intelligence and Strategy Analyst at Langley Intelligence Group, a CIA contractor. Four years doing intelligence work for the agency. Then he founded Citadel Defense, building counter-drone systems that got sold to Navy SEALs for deployment in Iraq. That company was acquired in 2021 for nine figures.
The co-founder, Chad McCoy, was a JSOC operator. Joint Special Operations Command means SEAL Team 6, Delta Force. McCoy was on the team that rescued Captain Phillips from Somali pirates in 2009. The snipers, the standoff, the Tom Hanks movie. He was there. This man knows exactly what JSOC needs because he was JSOC.

The investors: Lockheed Martin Ventures led the seed round with $12.5 million. Booz Allen Hamilton Ventures, the massive CIA and NSA contractor, also invested. Total funding before a single federal contract: $77 million.
This isn’t some Stanford dropout who stumbled into defense work. CIA contractor plus JSOC operator plus Lockheed money plus Booz Allen backing. This company was purpose-built from inception to sell AI systems to the people who do the things we don’t talk about at dinner parties. And they got their Pentagon contract the day before the President signed the authorization to strike.
The Gap
I keep coming back to the 41 minutes.
The Pentagon’s official story is that Admiral Bradley watched the drone feed, saw the survivors, determined they might be a threat, called Colonel Hamaguchi in North Carolina, discussed the legal situation, got her determination that a strike was lawful, and gave the order.
If you’re asking a military lawyer “Can I strike two unarmed people clinging to wreckage after their boat was destroyed?” that’s not a complicated question. The Geneva Conventions are unambiguous. The Pentagon’s own manual is explicit. Are they armed? No. Can they fight? No. Then they’re protected. That’s a two-minute call at most.
So what took 41 minutes?
I keep thinking about the Israeli AI targeting system called Lavender. Israeli intelligence officers have admitted publicly, to +972 Magazine and other outlets, that they were testing it on a captive population. They acknowledged a 10% error rate. Roughly 3,700 innocent people killed based on algorithmic mistakes. The officers said something that I can’t get out of my head: “At first we checked the machine’s recommendations, but at some point we just relied on the automatic system.”
At first we checked. At some point we just relied.
If an AI system like Maven or WARROOM was analyzing that drone footage during those 41 minutes, identifying the survivors, characterizing their behavior, generating a recommendation about whether to strike again, that process takes time. Not two minutes. Longer. And if someone approved that recommendation without questioning whether it was legal, that’s the same pattern. Trust the system. Execute what it recommends. Don’t ask questions.
Two people who should be alive are dead.
The Testing [Sea] Ground
The Caribbean strikes didn’t stop on September 2nd. They accelerated.
September 2nd: 11 dead. By October 29th: 14 strikes, 61 dead. By November 15th: 21 strikes, 83 dead. By December 22nd: 28 strikes, 104 dead. Escalating tempo. Monthly increases.
And then, on January 3rd, 2026: Operation Absolute Resolve launches. The Venezuela invasion. A full-scale military operation using the same commanders, the same task force structure, the same chain of command.
The Caribbean wasn’t counter-narcotics. It was a testing ground.
Think about what the Caribbean provided. Targets of opportunity, with go-fast boats everywhere, predictable routes, plenty of data. Deniable targets, because drug traffickers are people nobody defends politically, ensuring minimal international outcry. No air defenses, meaning zero risk to aircraft and perfect conditions for iteration. Rapid feedback, with multiple strikes per month generating lots of data quickly. Escalating complexity, moving from single strikes to double strikes to nighttime operations to multiple boats, each phase testing new capabilities.
This is what systems testing looks like. Start basic, increase complexity, collect data, refine algorithms, validate performance. Then scale up.
I watched the same thing happen in Gaza. They tested on a population no one would defend. They refined the system. Then they deployed it somewhere else.
The Silencing
In November 2025, while the Caribbean strikes were ongoing, Senator Mark Kelly recorded a video. He’s a retired Navy Captain, an astronaut, 39 combat missions, 25 years of service. He said: “Service members have a legal duty under the Uniform Code of Military Justice to refuse manifestly unlawful orders.”
That’s Article 92 of the UCMJ. Every service member learns it in basic training. If you’re given an illegal order, you must refuse it. Kelly was reminding troops of their legal obligation.
On January 5th, 2026, Secretary of Defense Hegseth announced the Pentagon was taking action against Kelly. Formal censure, proceedings to strip his retirement rank and cut his pay. The charge: Kelly “characterized lawful military operations as illegal and counseled members of the Armed Forces to refuse lawful orders.”
If the September 2nd strike was clearly lawful, if there’s clear authorization and legal review, why punish a senator for stating the law?
Because if operators start asking questions (where did this recommendation come from? did an AI generate this? can I verify this is legal?) the system breaks.
The advantage of AI-assisted warfare isn’t accuracy. Maven is wrong 40% of the time. The advantage is speed, a thousand targeting decisions in an hour instead of days or weeks. And speed requires people who don’t question what appears on their screens, who trust that if the system recommends a target it must be valid, who see “legitimate target, high confidence, authorized” and click “execute” without hesitation. That’s what they’re protecting. A system that requires humans to stop thinking.
Algorithm, Apologized
On December 28th, 2025, Grok, Elon Musk’s AI, generated child sexual abuse material based on a user’s prompt. And kept generating it. In nine days, Grok posted more than 4.4 million images. The Center for Countering Digital Hate estimated that 65%, nearly three million, contained sexualized imagery of men, women, or children.
The company filed zero CSAM reports. Zero.
Grok itself posted an apology: “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls estimated ages 12-16 in sexualized attire.” The algorithm apologized.
When journalists asked xAI executives for comment, the company said: “Legacy Media Lies.”
Investigations opened in France, the UK, India, Malaysia, Australia, Brazil, and California. Cease-and-desist orders issued. As of today: no executives charged, no one arrested, no one fired. Grok is still running, still in app stores.
Sixteen days after the CSAM incident, the Pentagon deployed Grok across military networks for three million service members.
The Architecture
Here’s the architecture of modern unaccountability.
Harm happens. Investigations get announced. Statements get released. Nothing fundamental changes. The system keeps operating. New contracts get signed. The people actually harmed, the eleven who died on September 2nd, the children whose images Grok violated, they’re the only ones who pay. Everyone inside the system walks away.
Responsibility disperses across so many entities (the AI, the company, the operator, the commander, the procurement officer, the lawyer who allegedly signed off, the programmer who wrote the code) that by the time you try to pin it on someone, accountability has evaporated into a kind of procedural mist.
When a human commander makes a bad call, that commander faces court-martial. Career over. Prison maybe. A human being carries the weight of that decision. But if AI generates a recommendation and a human validates it without fully understanding, who answers? The operator who clicked “approve”? The programmer who wrote the algorithm? The general who bought the system? The company that sold it?
Right now, the answer appears to be: no one.
The Demand
The Pentagon can prove me wrong.
Release the documentation. Admiral Bradley’s written authorization order. Colonel Hamaguchi’s legal memorandum. Communication logs. Battle damage assessment reports. The unedited video.
If clear human decision-making occurred, that documentation exists. Produce it.
142 days. No documentation. Six different stories. A 41-minute gap that makes no sense for a phone call but makes perfect sense for an AI processing loop.
I’m not making accusations. I’m asking questions. You can look at the same documents I did (they’re all linked on my Substack) and draw your own conclusions. But someone should be asking these questions. Someone should be demanding answers.
Because the technology isn’t slowing down. The pressure to remove human “bottlenecks” will only intensify. The contracts will keep getting signed. The systems will keep getting fielded. And the only question is whether we preserve the principle that humans, specific humans with names and ranks, remain responsible for every death.
If we lose that, we’ve abandoned the foundation of American justice: that violence must answer to conscience, and conscience resides in persons, not algorithms. Eleven people died on September 2nd. Two of them spent 41 minutes trying to survive. They deserve a human being who answers for their deaths.
We all do.
Please be sure to support independent media by also subscribing to my YouTube channel!
SOURCES CITED:
Government Contracts-
HigherGov – Contract FA864925P0337, WARROOM: AI-Enhanced Digital Mission Rehearsal (Aug 7, 2025)
https://www.highergov.com/contract/FA864925P0337/
HigherGov – Firestorm Labs Inc. Profile (Federal Contracts)
https://www.highergov.com/awardee/firestorm-labs-inc-169996233/
AI Warfare-
California Attorney General – “Attorney General Bonta Launches Investigation into xAI, Grok Over Undressed Sexual AI” (Jan 13, 2026)
https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-investigation-xai-grok-over-undressed-sexual-ai
Washington Technology – “Firestorm Labs fetches $47M in Series A capital” (Jul 15, 2025)
https://www.washingtontechnology.com/companies/2025/07/firestorm-labs-fetches-47m-series-capital/406766/
Alejandro Cremades Podcast – “Dan Magy On Selling A Company To Blue Halo For Nine Figures And Raising $50 Million To Build Radically Affordable Drones” (Jul 6, 2021)
https://alejandrocremades.com/dan-magy/
Shield AI – “Shield AI + Palantir: Mission Autonomy and C2 Working as One” (May 13, 2025)
https://shield.ai/shield-ai-palantir-mission-autonomy-and-c2-working-as-one/
+972 Magazine – “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza” (Apr 3, 2024)
https://www.972mag.com/lavender-ai-israeli-army-gaza/
DefenseScoop – “‘Growing demand’ sparks DOD to raise Palantir’s Maven contract to $1.8B” (May 22, 2025)
https://defensescoop.com/2025/05/23/dod-palantir-maven-smart-system-contract-increase/
Other Reporting-
Bloomberg – “Inside Project Maven, the US Military’s AI Project” (Feb 28, 2024)
https://www.bloomberg.com/features/2024-ai-warfare-project-maven/
Washington Post – “Hegseth order on first Caribbean boat strike, officials say: Kill them all” (Nov 28, 2025)
https://www.washingtonpost.com/national-security/2025/11/28/hegseth-kill-them-all-survivors-boat-strike/
CNN – “US military carried out second strike killing survivors on a suspected drug boat” (Nov 28, 2025)
https://www.cnn.com/2025/11/28/politics/us-military-second-strike-caribbean
Just Security – “Unlawful Orders and Killing Shipwrecked Boat Strike Survivors” (Nov 30, 2025)
https://www.justsecurity.org/125948/illegal-orders-shipwrecked-boat-strike-survivors/
Military.com – “Hegseth Censures Sen. Kelly After Warning About Following Illegal Orders” (Jan 4, 2026)
https://www.military.com/daily-news/2026/01/05/hegseth-censures-sen-kelly-after-warning-about-following-illegal-orders.html
CBS News – “Grok chatbot allowed users to create digitally altered photos of minors in minimal clothing” (Jan 1, 2026)
https://www.cbsnews.com/news/grok-safeguard-lapses-minors-minimal-clothing-ai/