On March 17, in Washington, Senator Elissa Slotkin introduced the AI Guardrails Act, a bill that would legally restrict the Pentagon’s use of artificial intelligence, ensuring that humans maintain control over lethal force, domestic surveillance, and the nation’s nuclear arsenal.
The US military has expanded its adoption of AI to identify targets and take operations to the next level. Meanwhile, lawmakers are drawing firm legal and ethical lines to ensure that, despite technological advances, the final decision to use force remains a human responsibility.
By insisting that only a human Commander-in-Chief can authorize a strategic strike, the bill aims to prevent escalation into global conflict, since AI without safeguards could trigger unintended military escalation.
Strict Limits on Military AI
The proposed legislation focuses on three specific areas where AI could pose the greatest risk if left unchecked.
First, it would ban any AI system from autonomously deciding to kill a target. Second, it prohibits the military from using AI to conduct mass surveillance on Americans.
To ensure these systems remain under democratic control, the bill establishes agentic AI guardrails for the most advanced systems, including a ban on using AI to launch or discharge nuclear weapons.
“My bill ensures a human is involved when deadly autonomous weapons are fired, AI cannot be used to spy on the American people, and that a human is on the switch to launch nuclear weapons,” said Senator Slotkin, emphasizing that these rules are about basic accountability in a high-tech world.
Slotkin noted that while the US must win the AI race against competitors such as China, it must do so by implementing AI safety controls designed to guard against AI exploits that could compromise national security.
Between Silicon Valley and the Pentagon
The AI Guardrails Act follows a high-profile falling-out between the Department of Defense (DoD), currently the Department of War, and the AI company Anthropic.
Anthropic, the maker of the Claude models, had previously expressed concern that existing AI guidelines were not strong enough to prevent future administrations from crossing those boundaries.
The disagreement eventually led President Trump to order the Pentagon to stop using the Claude AI model, despite the security safeguards Anthropic already had in place.
The dispute highlights the need for AI guardrails that are permanent law rather than temporary policy. Slotkin argued that without agentic AI guardrails, the government faces constant uncertainty.
“The Pentagon was able to target Anthropic in this case and is going to spend the next year and God knows how many millions of dollars ripping out Anthropic from all the classified systems,” she told NBC News.
For her, this was a waste of resources caused by the absence of AI guardrails.
In parallel, other lawmakers, such as Senator Mark Kelly, are seeking to set new standards that guard against AI exploits while maintaining a competitive edge.
Slotkin argues that by establishing agentic AI guardrails, the US can lead the world in responsible innovation, and she remains committed to passing the AI Guardrails Act. Her point is that AI agents with no boundaries are a risk taxpayers shouldn’t have to fund.
Ultimately, the bill could serve as a model for safeguards on systems ranging from AI chatbots to large military platforms, and its underlying message is that AI safety controls need not be overly complex. Agentic AI guardrails, as proposed, would ensure that the adoption of this intelligent technology serves the mission without compromising human values. For the bill’s supporters, that is simply the responsible way forward.