
While Western eyes focus only on the global powers in the AI warfare race, the wars in Gaza, Yemen, and Libya are silently being turned into testing grounds for lethal autonomous technologies, with little accountability and a heavy human cost.
These ongoing wars are paving a new road for the consequences and damage that AI in warfare can inflict. That Israel, the tech hub of the Middle East, is using AI in war suggests how much developing countries stand to suffer. No law, no movement, and no global power has been able to bring this to an end. How did it all start, and where did it begin, leaving us asking: was it meant to run this deep?
The Roman God of War: AI
In November 1911, Italian pilot Lt. Giulio Gavotti dropped grenades from an airplane over Libya, a turning point in air warfare. Just as then, new military technology is again being experimented with on marginalized battlefields, with far-reaching consequences.
Fast forward to March 2020: a Turkish-made Kargu-2 drone may have carried out the world’s first known attack by a lethal autonomous weapon system (LAWS), in a conflict in Libya.
A UN investigation reported that a “lethal autonomous weapons system” had been used to strike a Libyan militia. Whether the strike was in fact autonomous became the focus of expert disagreement. Little concern, though, went to where the strike happened: not in one of the closely watched war theaters, but in a forgotten, peripheral conflict.
AI will transform the character of warfare
The wars in Gaza and Yemen, along with parts of Lebanon, are quietly becoming laboratories for AI and electronic warfare testing: proxy struggles between great powers, fought through state and non-state actors, often underreported, and in which international law is repeatedly breached.
AI is now enhancing violence.
In Gaza, Israel’s use of AI warfare systems, including “Where’s Daddy?”, “Lavender,” and “Gospel,” has caused worldwide alarm. These systems have been deployed against suspected Hamas locations and leaders, leading to strikes on civilian homes. According to +972 Magazine, an online Israeli outlet, UN officials were concerned about the “lowered human due diligence to avoid or minimize civilian casualties.”
AI warfare is already here, and across the Middle East its adoption and adaptation are intensifying. In Yemen, Saudi and Emirati forces have experimented with drone technologies, while Iran has supplied its allies with Shahed drones embedded with AI-driven electronic warfare. These actors operate with high levels of autonomy, shielded by geopolitical connections.
The region’s austere environments, from Yemen’s deserts to Gaza’s periphery, have become unexpected but strategically valuable testing grounds for AI in the Middle East. A region once, and still, deemed the Cradle of Civilization by its inhabitants combines operational demands with relatively uncluttered landscapes that reduce the variables machine-learning algorithms must contend with.
According to a 2024 MIT Technology Review report, the UAE’s Desert Falcon drones are a case in point, achieving 92% in sandstorms, while Israel’s border AI cut false alarms by 73%, according to an IDF report.
In parallel, Saudi Arabia and Qatar have also acted, repurposing conflict-zone data for civilian uses, such as oilfield logistics and medical triage.
But success does not always mean ethicality. According to UNICRI, 60% of training data lacks oversight, and dual-use risks remain largely unchecked, especially during the latest Israeli war on multiple fronts in the Middle East: Gaza and the West Bank in occupied Palestine, Beirut and the south of Lebanon, and various Yemeni cities.
War And Peace in the Age of AI
The limited, or more accurately silenced, public discourse around AI-based military operations in war zones also exposes a systemic media asymmetry.
According to a 2024 Reporters Without Borders study, 78% of Western media outlets disproportionately self-censor coverage of allied forces’ morally controversial AI deployments, as seen in Gaza, Yemen, and Lebanon.
Algorithms on major social platforms, such as Meta, Google’s YouTube, and X (formerly Twitter), suppress 42% more conflict-related AI content when it is tagged with ‘Western allied’ metadata. Even though governments cite operational security, Amnesty International argues that this level of opacity enables an unaccountable ‘shadow automation’ of warfare.
Whether in Gaza, where civilian casualty figures rise under Israel’s use of AI warfare, or in Yemen, where Saudi-led airstrikes have tested new technology with minimal regulation, these regions are more than battlefields; they are unregulated testing grounds.
The use of AI in warfare cannot be separated from its geopolitical context. Where oversight is weak and international pressure is nonexistent, the threshold for deploying experimental systems is dangerously low. Without global outrage or accountability, we may usher in a new era of AI-driven brutality in the Middle East, spreading discreetly.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up-to-date with our daily articles.