Big Banks Seek Anthropic AI Model Mythos for Cyber Defense

On April 12, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with bank executives in Washington to discuss security, urging leaders to use the Anthropic AI model ‘Mythos’ to identify and fix digital security flaws before hackers can exploit them. 

The financial sector sees this push as important as regulators and tech firms try to find common ground. The arrival of new Anthropic models, so powerful that their own creators are hesitant to release them to the public, has created a rare sense of shared urgency.  

Even as the administration navigates a complex legal battle with the company, its priority remains the nation’s financial stability. The technology is a double-edged sword, and the government wants to ensure the nation’s large vaults are reinforced with the most advanced tools available to prevent systemic collapse. 

A Digital Shield for the Financial Core 

The meeting brought together the heads of the country’s most important banks, including leaders from Goldman Sachs, Citigroup, and Bank of America. While JPMorgan Chase was the first official partner to test the Anthropic AI model, several other major institutions are now reportedly putting the tech through its paces.

The goal is to use the AI company’s agents to find hidden holes in the banks’ defenses. Anthropic describes the new model as its most capable so far, designed to push the boundaries of reasoning. 

Interestingly, it wasn’t built specifically for cybersecurity, yet it has shown a striking ability to sniff out vulnerabilities in operating systems. This Anthropic AI model is being shared only with select partners to strengthen critical software without accidentally handing bad actors a master key. 

Anthropic’s applied AI is being used to solve real-world problems in a high-stakes environment.  

“The White House has been leading an ongoing core interagency taskforce, which includes the Treasury,” said a Treasury spokesperson.  

“That has been proactively engaging across the government and industry to execute the first phases of a plan to ensure the United States and Americans are protected.” 

Success here depends on the Anthropic Claude agent’s skill in navigating complex, multi-step security workflows. The model must work within massive corporate databases to flag risks that human analysts might overlook. 

Anthropic AI Model Stuck Between a Rock and National Security 

The push to adopt the Anthropic AI model comes at a confusing time for the Anthropic–White House relationship. Just months ago, the Pentagon labeled the company a ‘supply-chain risk’ after a major contract hit a deadlock. 

The disagreement involved how the military could use AI, leading to a breakdown in the Anthropic–government AI partnership. The company insisted on strict safety guardrails, while defense officials pushed for more flexibility, citing national security.  

Despite this legal friction, financial regulators view Anthropic’s applied AI as too vital for the banking sector to ignore. For Secretary Bessent and Chair Powell, cyberattack risks outweigh current political tensions, especially when it comes to Anthropic Claude for government use. 

But the rollout did come with its own set of warnings, as some officials worry about the speed of adoption. Some experts cautioned that letting a new Anthropic model dig through internal bank systems could risk exposing sensitive customer information. 

Banks acknowledge that Anthropic’s applied AI requires a healthy balance between security and privacy. For now, cautious distribution to select institutions remains a core part of the Anthropic business model as the company prepares its AI model for a wider, safer release in the future. 


Inside Telecom provides you with an extensive list of content covering all aspects of the tech industry. Keep an eye on our Cybersecurity sections to stay informed and up-to-date with our daily articles.