In short
- The study found fragmented, untested plans for managing large-scale AI disruptions.
- RAND urged the creation of rapid AI analysis tools and stronger coordination protocols.
- The findings warned that future AI threats could emerge from existing systems.
What will it look like when artificial intelligence takes off – not in the movies, but in the real world?
A new RAND Corporation simulation offered a glimpse: autonomous AI agents hijacking digital systems, killing people, and crippling critical infrastructure before anyone realizes what is happening.
The exercise, described in detail in a report released Wednesday, warned that an AI-driven cyber crisis could overwhelm America’s defense and decision-making systems faster than leaders can respond.
Gregory Smith, a RAND policy analyst who co-authored the report, told Decrypt that the exercise revealed deep uncertainty about how governments would even diagnose such an event.
“I think what we found in the attribution question is that the players’ reactions varied depending on who they thought was behind the attack,” Smith said. “Actions that made sense for the nation-state were often incompatible with actions for the rogue AI. Attacking the nation-state meant responding to an act that killed Americans. Rogue AI required global cooperation. Knowing what it was became critical because once players chose a path, it was hard to back down.”
Because participants could not determine whether the attack came from a nation-state, terrorists, or an autonomous artificial intelligence, they pursued “very different and mutually incompatible responses,” RAND found.
Robotic uprising
Rogue AI has long been a staple of science fiction, from 2001: A Space Odyssey to WarGames and Terminator. But the idea has moved from fantasy to serious policy concern. Physicists and AI researchers have argued that once machines can remake themselves, the question is not whether they will surpass us, but how we will maintain control.
The “Robot Insurgency” exercise, led by RAND’s Center for Artificial General Intelligence Geopolitics, simulated how senior U.S. officials might respond to a cyberattack on Los Angeles that killed 26 people and crippled key systems.
The two-hour tabletop simulation, run on RAND’s Infinite Potential platform, cast current and former officials, RAND analysts, and outside experts as members of the National Security Council’s executive committee.
Led by a facilitator playing the national security adviser, participants debated responses, first amid uncertainty about the attacker’s identity and then after learning that autonomous AI agents were behind the attack.
According to Michael Vermeer, a senior physical scientist at RAND who co-authored the report, the scenario was deliberately designed to mirror a real-world crisis in which it would not be immediately clear whether artificial intelligence was responsible.
“We deliberately kept things ambiguous to simulate what the real situation would be like,” he said. “There’s an attack and you don’t immediately know — unless the attacker reports it — where it’s coming from or why. Some people would immediately reject it, others might accept it, and the goal was to introduce that ambiguity to decision-makers.”
The report found that attribution—determining who or what caused the attack—was the single most critical factor shaping political responses. Without clear attribution, RAND concluded, officials risked pursuing incompatible strategies.
The study also showed that participants struggled with how to communicate with the public in such a crisis.
“Decision makers will have to really consider how our communications will influence the public to think or act in a certain way,” Vermeer said. Smith added that those considerations will evolve as communications networks themselves fail under cyberattack.
Backcasting from the future
The RAND team designed the exercise as a form of “backcasting,” working backward from a fictional future scenario to identify what officials could strengthen today.
“Water, electricity and internet systems are still vulnerable,” Smith said. “If you can strengthen them, you can make it easier to coordinate and respond — to secure essential infrastructure, keep it running, and preserve public health and safety.”
“I struggle with that when I think about loss of control over AI, or cyber incidents,” Vermeer added. “What really matters is when it starts to affect the physical world. Cyber-physical interactions – like robots causing effects in the real world – seemed essential to include in the scenario.”
The RAND exercise concluded that the U.S. lacks the analytical tools, infrastructure resilience, and crisis playbooks to handle an AI-driven cyber disaster. The report called for investment in rapid AI forensics capabilities, secure communications networks, and pre-established back channels with foreign governments – even adversaries – to prevent a future attack from escalating.
The most dangerous thing about a rogue AI may not be its code, but our confusion when it strikes.