Ctrl+Alt+Grieve
In an age of AI teammates, are we preparing soldiers to grapple with the loss of an invisible partner?
When a new software update significantly alters a well-known or frequently used product, most users shrug it off, adapt, and move on. For some super-users of AI platforms, however – the people who funnel their days, thoughts, and ideas through a single system – an update or outright loss can trigger anger and a sense of shock. Recently, there have been funerals for retired AI models and even instances of users falling in love with their AI companions.
If civilians can feel this way about chatbots that help with journaling, what happens when U.S. service members lose AI systems they’ve come to trust in combat, systems that have kept them and their friends safe in stressful situations? Currently, the U.S. Department of Defense doesn’t have an answer for this scenario.
When Tools Become Teammates
During the Global War on Terror, some bomb-disposal teams felt real attachment to their robots and real loss when those robots were destroyed. As the role of AI assistants expands, we are rapidly approaching a point where large portions of the armed forces may experience that kind of loss at once.
On any near-future battlefield, U.S. military operations are likely to rely on AI agents to automate intelligence tasks, refine threat assessments, and recommend courses of action. That reliance creates a new kind of vulnerability. The United States, China, Russia, and other countries are pouring significant resources into military AI, and frontier labs keep pushing the technology forward. Yet the destruction or disappearance of these systems, whether through cyberattack, system failure, deprecation, or classification changes, introduces new operational and strategic risks and likely emotional fallout. As the Pentagon works to field AI teammates that can act on their own, pull in data, spot threats, and suggest next steps, shrinking the time it takes to act, the department should plan now for a harder scenario: What happens to the mission and the team if that trusted AI suddenly changes or disappears?
Experiencing loss is a natural part of life and cuts across all humans regardless of language, location, or status. In combat settings, grief is even more common, as soldiers are routinely required to commit significant acts of violence or come into close contact with them. It is understood that personnel will likely experience bereavement during their time in service, and there are established resources for processing the death of a teammate. But what about the letdown and emotional fallout attached to losing an AI system?
Train for the Inevitable: Build AI-Loss Resilience
With these systems only beginning to enter service, and with personalized AI agents that directly support individual service members a real possibility, the Department of Defense should fund research on “AI-loss resilience” and build the findings into training, so troops can process the change when an AI teammate is taken offline by an update or by the enemy. These tools work best when they understand the user, which lets them spot stress and respond in ways that ease cognitive load and improve performance. When they are taken away, frustration and a sense of loss should be expected.
Already, we are witnessing the early stages of modern human-machine relationships, as users of generative AI turn to these systems for emotional support. In civilian life, that ranges from pep talks to serving as a life coach. It is worth asking what this relationship might look like in a combat setting. What happens when a soldier credits an AI with identifying a threat, protecting their unit in a critical moment, offering options on how best to defend their position, or delivering clear choices in a chaotic and overwhelming situation? If the model is working as intended, it is not only parsing information relevant to its assigned role but also learning how the user approaches challenges and adapting its responses to be most effective.
So what happens when that model changes, along with the tone, memory, and structure it had developed? Or, just as likely, what happens when an adversary targets and destroys it? How will the user, in this case military personnel, manage both the operational and the emotional transition? The operational answer is likely redundancy and a return to training, falling back on more analog methods. But redundancy and training only cover the mission; how to process the emotional fallout of the loss cannot be found in a field manual.
Deploying AI-driven capabilities is critical to U.S. national security as America’s rivals expand efforts to field their own in active combat zones. The United States cannot afford to accept a status quo of slow processes and more human-centric approaches to conflict. Doing so would almost certainly erode whatever strategic advantages the country holds and lead to a tremendous loss of life for the United States and for its partners and allies. As frontier labs work to meet the current and future needs of American national security, the Pentagon must also invest in understanding how troops might grieve the loss of their AI partners in battle and in preparing soldiers for that inevitable outcome. Much like any other kind of loss, planning ahead can help warfighters process what is happening and make sound, strategic decisions.
Plan for Grief, Not Just Code
Developing powerful AI models is a core national security objective for the United States military in the era of strategic competition. If built and deployed effectively, these systems will almost certainly enhance operational capacity and save the lives of U.S. service members in conflict. Now is the time to devote resources to research on AI-loss resilience and to turn that research into training. After all, processing grief and loss isn’t something that can be fixed with an algorithm or a quick prompt to a large language model.