
On April 10, the US Marine Corps conducted a Pacific exercise using generative AI for real-time foreign intelligence analysis. A 2,500-strong expeditionary unit tested AI systems aboard three naval ships, processing thousands of foreign media reports at remarkable speed and marking a new era for generative AI military applications.
For these gen-AI military applications, the US used large language models (LLMs) trained on data from South Korea, India, the Philippines, and Indonesia. Officers aboard the ships used an AI-powered system to sort through thousands of foreign news articles, videos, and photos far faster than they could manually.
Captain Kristin Enzenauer, for example, used the system to translate and summarize local news to gauge public sentiment toward the US military presence, potentially turning the technology into what some would call a generative AI spy.
“It was definitely way more time-consuming when using the old method,” she said. The same system was used by Captain Will Lowdon to help draft daily intelligence reports for his unit.
“We still need to validate the sources,” he noted, “but they provide a lot more efficiency during a dynamic situation.”
The gen-AI spying tool was developed by Vannevar Labs, a defense tech company founded by ex-CIA staff. The Pentagon has granted the company a $99 million contract to expand its AI across military units.
The tool uses open-source data from more than 180 countries in 80 languages to detect threats and analyze political views, all delivered through a chatbot interface.
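To make the workflow concrete, here is a minimal sketch of how a single "translate, summarize, and gauge sentiment" pass over a foreign-language article could be wired up. It is purely illustrative: it is not Vannevar Labs' system, and the model choice, prompt wording, and output schema are assumptions for the sake of the example.

```python
# Illustrative sketch only: translate, summarize, and score one foreign-language
# news article with a general-purpose LLM API. Model name, prompt, and output
# schema are assumptions, not details of the system described in the article.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are an open-source intelligence assistant. Given a news article in any "
    "language, return JSON with three fields: "
    "'translation' (English translation), "
    "'summary' (a 2-3 sentence English summary), and "
    "'sentiment' (public sentiment toward US military presence: "
    "'positive', 'neutral', 'negative', or 'not_applicable')."
)

def analyze_article(article_text: str) -> dict:
    """Translate, summarize, and score one article; returns a plain dict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model choice
        response_format={"type": "json_object"},  # request machine-readable output
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = "Ang pagdating ng mga sundalong Amerikano ay pinag-uusapan sa bayan."
    print(analyze_article(sample))
```

A real deployment would batch thousands of documents and, as the officers quoted above stress, still require source validation and human review before any output reaches an intelligence report.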
Generative AI Military Applications in Modern Warfare
Battlefield information is growing uncontrollably, and AI surveillance in warfare is becoming the new interpreter that converts chaos into something that makes sense, raising issues of trust, accountability, and the human cost of machine-driven decisions.
Today’s wars generate enormous volumes of data, from video feeds, radio transmissions, social media posts, and satellite imagery to local news, far more than human analysts can track in time.
This is where generative AI in the defense sector steps in, digitally translating the modern battlefield and identifying patterns and signals hidden within the noise.
Vannevar Labs’ offerings, for example, don’t just gather information; they attempt to interpret it. They translate languages, detect threats, measure public opinion, and offer instant analysis to policymakers. In the process, AI isn’t just organizing information, it’s reading the battlefield.
The change also introduces a new level of difficulty.
When a machine draws conclusions, such as whether a foreign news report signals aggression or peace, it assumes a role long reserved for human judgment. That shift has significant implications: if a model misreads intention or tone, it could lead to faulty military decisions, escalated tensions, or misidentified threats.
And unlike humans, generative AI military applications do not justify themselves. Their “judgments” rest on millions of variables and data points that cannot be simply explained. This creates the central dilemma: can a machine’s definition of clarity be trusted amid the chaos of war? And when mistakes happen, who is responsible for them?
In the rush to process data faster, there is a risk of outsourcing critical thinking to algorithms, trading the speed of AI in defense systems for understanding, and efficiency for empathy.