
Germany’s AI contradiction is materializing right before our eyes: two-thirds of the country’s citizens use AI daily, yet only a third of them trust its decisions – a chasm wider than anywhere else on Earth, and potentially the beginning of a zero trust AI mentality, according to KPMG’s global study.
KPMG’s study, “Trust, attitudes and use of artificial intelligence: A global study 2025”, conducted in collaboration with the University of Melbourne, exposes Germany’s crisis of confidence, revealing a country racing to adopt tools it doesn’t understand.
66% of Germans already use AI at home, at work, or at university, but only 32% have confidence in the output of these tools, placing Germany second to last globally in AI trust, safety confidence, and literacy.
The study, based on responses from more than 1,000 Germans and 48,000 people worldwide, shows that only 20% of Germans have received formal AI training – far below the global average of 39%. Less than half feel confident judging or using AI tools correctly.
The uncertainty around AI stems from a lack of both internal company policies and public regulation. Though 62% of respondents say their companies use AI, only 46% report having a formal strategy.
AI Trust, Risk, and Security Management Market
The KPMG data shows Germany demanding an AI reckoning: zero trust in AI is fueling calls for more regulation, with three in four insisting on binding international standards and 71% supporting co-regulation by governments and tech firms.
Only a third of Germans believe current law is sufficient – a clear rejection of tech self-governance – sending an unmistakable message to officials under newly elected Chancellor Friedrich Merz. For these users, Germany’s zero trust AI stance won’t soften without radical transparency from the companies developing the technology.
If left unaddressed, such concerns could undermine Europe’s push to lead in responsible AI development and data autonomy. As the technology develops, the legal and ethical standards that regulate it must keep pace, or zero trust AI will become the norm, an idea firmly lodged in users’ minds.
Should We Trust AI Built by Tech Giants?
Germany has become the latest – and most forceful – European regulator to push back against Big Tech, with Meta more often than not taking center stage. The German consumer watchdog, Verbraucherzentrale North Rhine-Westphalia (NRW), is resisting Meta’s aggressive AI rollout, issuing a cease-and-desist order over its newly launched AI assistant, Meta AI.
Meta’s release of its new AI assistant, dubbed a ChatGPT rival, has been criticized across Europe. Unlike other AI chatbots, Meta AI pulls information directly from users’ social media histories on Facebook, Instagram, and WhatsApp, fueling distrust in AI on Meta’s platforms.
The order came just weeks after privacy watchdogs in France, Italy, and Spain flagged concerns over the Meta AI chatbot’s controversial data mining of the company’s social platforms, scraping users’ personal social media histories without explicit consent. The backlash highlighted an AI trust gap between European governments and the American social networking giant.
According to RMIT Professor Kok-Leong Ong, “Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements.” He warned the app could even contribute to misinformation and mental health issues.
“Meta should simply ask the affected people for their consent. But if Meta ignores EU law, there will be consequences for the whole of Europe,” said Max Schrems, chairman of the privacy group Noyb, accusing Meta of ignoring EU data protection rules.
The zero trust AI governance surrounding Meta’s AI applications mirrors Germany’s conundrum: eager embrace met with profound suspicion. Absent proper training, coherent guidelines, and rigorous data laws, both corporate and public trust in AI can be expected to keep deteriorating.
“We’re using our decades of work personalizing people’s experiences on our platforms to make Meta AI more personal,” Meta stated. “It can pick up important details based on context… and deliver more relevant answers.”
“It’s urgent, because all the data that has been incorporated into the AI is difficult to retrieve,” NRW data protection specialist Christine Steffen said. She warned that sensitive user data may already have been processed without proper consent in violation of the EU’s General Data Protection Regulation (GDPR).
I Don’t Trust AI
There is a deepening rift between Silicon Valley’s “launch first, adjust later” ethos and Europe’s strict data sovereignty principles, further widening the gap in building trust in AI. On one side of the room are Meta, OpenAI, Microsoft, and Google, racing to embed AI across their platforms; on the other, EU regulators are drawing hard battle lines, with Germany now leading the charge.
The tug-of-war between building trust in AI and fearing it will be turned against us is growing louder in Germany. People increasingly use AI at work, in schools, even at home, but still do not feel secure doing so. Meta’s AI system, built on years of personal social media use, has done nothing but deepen public skepticism.
Most people worry about how their information is being handled, especially without their explicit permission. As privacy concerns mount and legislation lags, Germans are asking difficult questions: who controls this technology, and whom is it ultimately serving? Until there are more definitive answers, trust in AI will remain elusive.