With NATO experiment, Reality Defender exposes military’s deepfake weakness

New content from deepfake detection firm Reality Defender looks at the company’s role in supporting NATO’s cognitive warfare experimentation.
“In late 2025, Reality Defender had the privilege of supporting NATO Allied Command Transformation (ACT) and NATO Communications and Information Agency (NCIA) in the Innovation Continuum Cognitive Warfare Experimentation, an initiative exploring how AI-driven content influences operational-level decision-making,” says the company’s post.
“What unfolded underscores the importance of rapidly developing and executing a strategy to mitigate deepfakes in the cognitive dimension.”
Test asks officers to identify ‘Bad AI’
It sounds like an episode of Doctor Who, but Reality Defender says that, in conflict scenarios, the proliferation of AI technologies presents a real risk. “Cognitive overload, speed of information flow, and the sophistication of adversarial techniques have the power to shape – or destabilize – critical decisions.”
A recent AIID Incident Report shows impersonation fraud running rampant across sectors. “The most severe impacts frequently,” it says, “come from the collision of AI outputs with human institutions, such as courts and schools, where the cost of being wrong is high and correction is slow.” If costs are high in the schools and courts, in warfare, they are a matter of life and death.
Reality Defender’s role in the NATO experiment was to “introduce controlled deepfake content into a realistic warfighting scenario to assess its impact on experienced operational planners.” How easily might a military official be fooled by a synthetic face or an injection attack? How might deepfakes “affect the warfighter at the operational level?”
To find out, Reality Defender presented “seasoned operational war planners” with two forms of AI. Their task was to distinguish so-called Good AI, defined as “trusted analytical tools and data streams,” from Bad AI – “adversarial AI analytic tools and deepfake content designed by Reality Defender” – woven seamlessly into a realistic information environment alongside detailed analytic situational reports.
“Would war planners rely on verified intelligence, or would manipulated media overpower their judgment?”
‘Launch the attack on Valkenvania!’
Reality Defender’s cliffhanger question has a dangerous answer. Experienced war planners, it turns out, are no better at detecting deepfakes with the naked eye than anyone else. “Participants, despite extensive operational experience, repeatedly ignored accurate reporting in favor of a single fabricated video.”
Most alarmingly, “planners ordered military action on the fictional region based on the disinformation embedded in the deepfake news video.”
This is the state of the information landscape today: when put to the test, and faced with time pressure, seasoned military leaders will make critical decisions based on fake media. If the order to attack the fake land of Valkenvania looks real enough, and is deployed in the right context, it could easily lead to an international crisis. And Reality Defender has shown how easy it is to make deepfake content look more than real enough.
“This was not a failure of training,” it says. “This was not a failure of intelligence collection. This was the extraordinary power of deepfake manipulation in a high-pressure decision environment.” Even finely honed military wits are no match for the sophistication of deepfakes created using large language models, diffusion models and other types of generative AI.
Cognitive dimension becomes a theatre of war in AI era
The company says the dismal findings reinforce an “urgent need for automated deepfake detection across the entire spectrum of military operations.” The cognitive dimension is under attack from tools that try to manipulate perception in high-pressure environments. Modern deepfakery exploits cognitive biases and creates bogus urgency to cloud decision-making.
“The cognitive dimension is where wars are decided, because it is where decisions are made. This experiment demonstrates that deepfakes directly threaten that dimension by distorting perception faster than traditional safeguards can respond. Deepfake detection therefore must be treated as a core element of cognitive defense – integrated into workflows, trusted by operators, and available before manipulated media shapes action. Without it, decision superiority is no longer assured.”
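What “integrated into workflows” could mean in practice is a triage gate that scores incoming media before it reaches a planning cell, so a fabricated video never sits alongside verified reporting unflagged. The sketch below is purely illustrative: the `MediaItem` type, the scores, and the `triage` threshold are assumptions for the example, not Reality Defender’s actual API or NATO’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    source: str
    synthetic_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic
    # In a real deployment this score would come from an automated
    # deepfake-detection service; here it is a canned value.

def triage(items, threshold=0.7):
    """Split incoming media into cleared and quarantined queues
    before it reaches planners, based on a synthetic-score threshold."""
    cleared, quarantined = [], []
    for item in items:
        if item.synthetic_score >= threshold:
            quarantined.append(item)  # held for human review
        else:
            cleared.append(item)      # released to the planning cell
    return cleared, quarantined

# Hypothetical feed mixing a verified report with a suspect news clip
feed = [
    MediaItem("verified_sitrep.pdf", 0.05),
    MediaItem("breaking_news_clip.mp4", 0.93),
]
cleared, quarantined = triage(feed)
print([i.source for i in cleared])      # -> ['verified_sitrep.pdf']
print([i.source for i in quarantined])  # -> ['breaking_news_clip.mp4']
```

The point of the gate is the one the experiment makes: the check has to run before the media shapes action, not after, because planners under time pressure cannot be expected to catch fabrications by eye.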
Article Topics
deepfake detection | deepfakes | injection attacks | military | NATO | Reality Defender | synthetic data