Google Drops Pledge Against AI Use in Weapons and Surveillance

Google revised its AI ethics guidelines, dropping its pledge to avoid weaponized or surveillance-focused AI.

On Tuesday, Google revised its AI ethics guidelines, dropping its pledge to avoid weaponized or surveillance-focused AI, while asserting that democracies should lead AI development as global competition intensifies calls for human rights-aligned innovation.

Google’s updated guidelines no longer include a section titled “Applications we will not pursue,” which had ruled out AI for weapons, surveillance technologies, and applications that “cause or are likely to cause overall harm,” along with uses that breach international law and human rights, according to a version of the page preserved by the Internet Archive.

A Google spokesperson did not answer questions beyond pointing to a blog post, “Responsible AI: Our 2024 report and ongoing work,” published Tuesday by Demis Hassabis, Google’s head of AI, and James Manyika, its senior vice president for technology and society.

In the blog post, the executives said Google updated its AI principles because the technology has become far more widespread and because companies based in democratic countries need to be able to serve government and national security clients.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” explained Hassabis and Manyika.

“And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Google’s 2025 AI Ambition

Beyond the revised policy on AI in military weapons, Google is going all in on AI innovation, with CEO Sundar Pichai calling 2025 a “landmark year” for the company’s AI-driven transformation.

Pichai spoke about the future of AI and Search, including big plans for multimodal capabilities through Project Astra, which can analyze streaming live video in real time, as Google embeds AI ever deeper into its search engine, keeping the focus on search rather than on AI in security and surveillance or government-funded AI weapons development.

“As AI continues to expand the universe of queries that people can ask, 2025 is going to be one of the biggest years for search innovation yet,” said Pichai on the company’s earnings call.

Another initiative, Gemini Deep Research, focuses on automating long-form research and could fundamentally alter how users interact with search engines. Project Mariner could even enable AI to browse websites on users’ behalf, reducing the need to interact with traditional search results directly.

Early missteps with AI-powered search features aside, like the awkward “hallucinated” answers that told users to eat rocks, Google is firmly all in on its AI future. Pichai emphasized ambitions for a far more interactive, chatbot-like search experience that could drive deeper user engagement.

Google’s AI Hides Behind a Mask of Righteousness

Google’s lifting of its restrictions on AI in military weapons further deepens the entwinement between Silicon Valley and national security interests. For some, this is a practical response to geopolitical realities; for others, it raises concerns about Google’s commitment to ethical AI development.

Let’s not forget Google’s reported assistance to the Israeli military with AI tools, a key moment when the world saw firsthand how AI is being used in warfare and security.

Google’s acceleration of its AI ambitions is not smoke and mirrors, as some might perceive it. The truth is that Google, like other Big Tech giants, has played a fundamental role in defense, surveillance, and global competition, even if the full extent of the search giant’s role in the Middle East conflict has yet to be established.

It remains to be seen whether this turn strengthens US AI leadership or erodes public trust in the company’s ethical commitments.

The presidential transition from the Biden administration to Trump’s has reshaped the AI landscape for Big Tech companies such as Google, which is adapting to shifting priorities.

During the Biden administration, the focus was on funding technology that expanded the use of AI in weapons, with AI directed toward warfare rather than MedTech, cybersecurity, or improvements in telecom. Let’s hope that 2025 marks the end of embedding AI in weapons.
