Meta Facilitates Online Misinformation with Defense Llama Chatbot

Meta announced, "Defense Llama," an AI tool for military applications that has since raised concerns over ethical dangers of AI in military.

Meta announced its shift into defense on November 4, revealing “Defense Llama,” an AI tool for military applications that has since raised concerns over the ethics, misinformation risks, and reliability of AI in the military.

Marketed as a “responsible” AI model trained on multiple datasets, including military doctrine and international humanitarian law, the Defense Llama chatbot will support government users in working through complex scenarios, including planning airstrikes.

Experts have raised alarms about the tool’s effectiveness and ethics.

The Issue of AI and Misinformation

An example on Scale AI’s website showed Defense Llama suggesting munitions for destroying a reinforced concrete building while minimizing collateral damage. Instead of showcasing precision, the chatbot offered flawed and generic advice.

A retired US Air Force targeting officer, Wes J. Bryant, called the tool “completely useless” and said that no trained military unit would ever use such outputs for critical decision-making.

“If anyone brought the idea up, they’d be promptly laughed out of the room,” Bryant told The Intercept.

Former US Army explosive ordnance disposal technician Trevor Ball also raised serious concerns, pointing to errors in the tool’s technical data and its lack of the context needed in real-world military applications.

Meta’s foray into military AI comes amid broader concerns about its role in spreading misinformation. Just three weeks ago, another Meta chatbot claimed to have access to government systems, only to spread false information, a continuing trend for a company whose founding purpose was to connect the world.

Jessica Dorsey, a legal scholar specializing in automated warfare, criticized the approach Defense Llama takes.

Dorsey called that approach “simplistic and dangerous,” explaining that an activity like airstrike planning involves not just picking the right munitions but also meeting strict legal and ethical standards for minimizing civilian harm, a standard the chatbot does not fulfill.

Meta’s entry into military AI reflects a wider trend of technology giants competing for defense funding. That push for innovation cuts both ways: the potential risks of tools like Defense Llama cannot be dismissed. Misinformation and flawed output make for a very dangerous model.

At the end of the day, AI should add value by helping solve human problems, not by compounding them. Meta’s transformation from a social media platform into a player in military AI raises serious ethical questions about the role of technology in modern society. How does a company founded on the promise of connecting people justify contributing to tools designed to kill?

Technological Warfare

The emergence of technologies such as Defense Llama points to new dimensions of military operations and the increasingly disturbing role of AI in decision-making. Whatever efficiency it offers may come at the cost of excluding human judgment, creating a world where a mistake in a military algorithm could have catastrophic results.

With every step AI takes, the need grows for governments, companies, and civil society to establish strict guidelines on its use, especially when the dangers of AI in the military are escalating this fast. Without strong ethical regulations, the technology that was meant to save us will become our biggest nightmare.

