No Human in the Loop - 17th May 2025
Dispatches from the Algorithmic Front
Week of 10–16 May 2025
1. Introduction
This week made something abundantly clear: machines don’t need consciousness to create chaos—they just need access.
From NATO’s late-stage epiphany on data warfare, to China’s accelerating push toward autonomous campaign simulation, and Coinbase’s reminder that humans remain the weakest link in digital security, the algorithmic front is expanding—quietly, and without sentiment.
No firewalls were raised in protest. No drones waved flags. But the lines moved.
2. This Week’s Algorithmic Flashpoints
China Trains AI to Simulate War, Unsupervised
A research team at Xi’an Technological University led by Professor Fu Yanfang unveiled an AI system capable of autonomously generating military combat scenarios at scale. Designed for PLA command training, it can iterate through thousands of possible engagements—no human prompting required.
Why it matters: This marks a step beyond war-gaming. It’s AI beginning to model intent, not just reaction. Useful for planning. Dangerous if misread.
Germany’s Helsing Targets the Seabed
Defence darling Helsing announced plans for long-duration autonomous underwater drones equipped with its LURA AI system. These stealth bots will patrol Europe’s seabed for months at a time, hunting for subsurface threats to infrastructure—think Nord Stream, but pre-emptive.
Why it matters: The seabed is becoming militarised by machine. Unlike space, there’s no Geneva Convention for cables and pipelines.
Coinbase Hack: The Human API
A cyberattack on Coinbase saw attackers bribe third-party contractors to exfiltrate sensitive user data. The breach, revealed this week, will cost the firm upwards of $180 million. Crucially, Coinbase’s AI monitoring system flagged the breach—but no human acted in time.
Why it matters: AI did its job. Humans didn’t. This is the paradox of the “loop” we’re supposedly kept in.
NATO Discovers Data
On 14 May, NATO released its first official Alliance Data Strategy, declaring data “a strategic asset” on par with air or sea power. The goal: interoperable analytics, real-time situational awareness, and—eventually—AI-driven C2. Implementation, of course, will be another story.
Why it matters: The allies have finally realised that data isn’t admin—it’s ammunition. Let’s see how long it takes to stop storing it in Excel.
Poland’s Election Targeted
Just days before the 18 May election, Prime Minister Donald Tusk confirmed a cyberattack on ruling party infrastructure—linked to Russian threat actors. The method? Social engineering and network infiltration. Moscow denied everything, as is tradition.
Why it matters: Election interference is no longer novel—it’s a habit. The innovation is in the quietness: fewer fireworks, more behavioural microtargeting.
UN Talks on Autonomy Go Nowhere, Again
The UN reconvened its Convention on Certain Conventional Weapons (CCW) to discuss autonomous weapon regulation. After a decade of talks, still no binding agreement. Key powers want “national frameworks” (read: freedom to act). The UK suggested “responsible autonomy”, which means precisely nothing.
Why it matters: While Geneva prevaricates, code advances. We now regulate nuclear weapons faster than we do facial-recognition rifles.
3. Signals in the Noise
Across these developments, a pattern emerges:
- Machines are being trusted with initiative. China’s war sim AI is not just reacting—it’s modelling. Helsing’s drones are not phoning home—they’re hunting.
- Institutions are struggling to catch up. NATO has finally drafted a data doctrine. The UN has still not drafted a line in the sand for autonomy.
- Human error is still the breach point. Coinbase’s loss wasn’t due to bad code, but to a bad contractor and a worse process.
It’s not that the loop has been broken. It’s that no one’s quite sure who’s in it anymore.
4. Prediction Protocol
- AI-Assisted C2 Will Be Trialled in the Baltics
Expect NATO field commands to begin testing limited AI support systems for operational logistics in Q3, under the guise of “interoperability stress tests”.
- PLA to Release New “Doctrine-Lite” Based on AI Output
Watch for informal Chinese doctrinal updates filtered through DeepSeek simulations—blurring the line between AI-generated models and military planning.
- More Subsea Incidents Framed as “Technical Faults”
With Helsing’s model now public, other states (Scandinavia, the UK, Japan) will quietly escalate autonomous seabed monitoring. Incidents will increase. Attribution will not.
5. Black Box: The Story Behind the Story
In a patent filing buried in the European Patent Office this week, Rheinmetall described a system that allows a battlefield drone to detect “emotional volatility” in humans and switch engagement modes accordingly. The technical term: “Adaptive Human-State Cueing”.
Translation: The drone won’t just know where you are. It’ll know if you’re scared—and decide what to do with that.
We used to worry whether AI could tell a combatant from a civilian. We’re now training it to tell the confident from the uncertain.
Dispatch Ends