The US military has entered the second phase of its AI evolution, and this time, it’s personal. For the first time, soldiers are using generative AI to assist in their daily tasks, including surveillance and threat analysis. This marks a significant shift from earlier military AI efforts, which focused on technologies like computer vision to analyze drone footage.
Recently, two US Marines, who spent most of last year deployed in the Pacific, shared their experience using generative AI to assist in intelligence gathering. They were tasked with scouring intelligence reports using a chatbot interface that’s reminiscent of ChatGPT. The goal was to identify potential threats to the unit, making their work both more efficient and, according to proponents, potentially more accurate.
This latest push toward integrating generative AI into military operations follows an earlier phase that began in 2017, where AI was used primarily for image analysis. With the advent of tools capable of human-like conversation, the Pentagon is now working to bring AI into more critical areas of defense, sparking both excitement and concern across the globe.
Phase Two of Military AI: What Comes Next?
The shift toward generative AI marks a new phase in the US military’s technological advancement. This phase, which began under the Biden administration, is being fueled by a growing urgency to stay competitive in AI technology. Public figures like Elon Musk and Secretary of Defense Pete Hegseth have been vocal advocates for AI’s role in enhancing military efficiency.
While there’s much anticipation surrounding the potential of AI to revolutionize warfare, there are also concerns. AI safety experts warn that these advanced systems may not be fit to handle the complexities of military intelligence. The risk of AI suggesting actions based on incomplete or biased data is a key point of contention, especially when geopolitical stakes are involved.
Proponents of AI in military operations argue that the technology can increase precision, reduce human error, and, potentially, lower the risk of civilian casualties. However, critics warn that the rise of AI could lead to dangerous decisions made by machines, ones that may not fully understand the complexities of human conflict.
Three Key Questions About Military AI to Watch
As the Pentagon accelerates its use of generative AI, experts are raising critical questions about the technology’s role in military decision-making. Here are three pressing issues to keep an eye on:
1. What are the limits of “human in the loop”?
The term “human in the loop” refers to the practice of having a human oversee AI-driven decisions to ensure that mistakes are caught before they become disasters. While this concept sounds reassuring, the growing complexity of AI systems poses a challenge to its effectiveness. As AI models pull data from thousands of sources, it becomes increasingly difficult for humans to keep up and catch potential errors.
Heidy Khlaaf, Chief AI Scientist at the AI Now Institute, argues that “human in the loop” might not be as effective as we think. The sheer volume of data AI systems analyze means that humans can no longer sift through it all in a reasonable amount of time to verify the AI’s conclusions.
2. Is AI making it easier or harder to know what should be classified?
In the past, military intelligence relied heavily on human discretion to determine what information should be classified. However, as AI tools like generative models take over the analysis of intelligence data, new challenges are emerging.
One issue, known as “classification by compilation,” arises when multiple unclassified documents are combined to reveal sensitive information. AI systems are particularly adept at making these connections, which could lead to underclassifying information—or in some cases, overclassifying it. As AI systems generate new analyses from vast amounts of data, finding a consistent classification method becomes increasingly difficult.
Chris Mouton, Senior Engineer at RAND, notes, “I don’t think anyone’s come up with great answers for what the appropriate classification of all these products should be.” This raises questions about how AI’s ability to analyze data impacts national security, particularly when it comes to the classification of intelligence.
3. How high up the decision chain should AI go?
AI’s role in military decision-making is expected to grow as the technology matures. The Pentagon has already adopted AI for tasks like analyzing drone footage, but will it be used to make life-or-death decisions on the battlefield?
The idea of “agentic AI”—systems that not only analyze data but also perform actions based on that analysis—is becoming a reality. As military commanders express interest in using AI to improve decision-making, there are concerns about how much trust should be placed in machines during critical moments.
A report from Georgetown’s Center for Security and Emerging Technology highlights the increasing use of AI to assist military leaders in decision-making, particularly at the operational level of war. While this shift could improve efficiency and speed, it raises questions about accountability and the ethics of delegating life-or-death decisions to AI systems.
The Future of Military AI: Balancing Innovation and Oversight
As the Pentagon pushes forward with its adoption of generative AI, the stakes are higher than ever. The potential benefits—greater efficiency, more accurate intelligence, and reduced human error—are clear, but so are the risks. AI systems, if not properly controlled, could make decisions with dire consequences for soldiers and civilians alike.
This debate is far from settled, and the questions raised by experts, military leaders, and tech companies will shape the future of military AI for years to come. Whether AI is ultimately seen as a valuable tool for enhancing military operations or a dangerous threat to global security will depend on how these challenges are addressed.
As AI continues to advance, its role in defense will only expand. The next phase of military AI promises to transform not only how the military operates but also how we think about the intersection of technology, security, and humanity.