Generative AI Fuels New Wave of Cyber Threats

Gartner analyst Peter Firstbrook warned at the Security and Risk Management Summit that generative AI-powered phishing attacks are advancing.

On Monday, June 3, Gartner analyst Peter Firstbrook warned at the Security and Risk Management Summit that while generative AI is making phishing attacks more effective, it has not yet created entirely new attack techniques; that threat, however, is looming.

AI is accelerating cybercriminal activity, enabling faster and more efficient attacks through automation and social engineering. However, fears that AI is inventing entirely new hacking techniques remain largely unproven, according to Firstbrook.

“Generative AI is being used to improve social engineering and attack automation, but it’s not really introduced novel attack techniques,” said Firstbrook, distinguished VP analyst at Gartner, during the company’s summit.

“There is no question that AI code assistants are a killer app for Gen AI,” Firstbrook said, acknowledging that the same tools boosting developers’ output are also boosting attackers’ productivity. “We see huge productivity gains.”

These tools help attackers develop malware capable of stealing data, logging activity, or wiping systems, Firstbrook added.

Referring to the “HP Wolf Security Threat Insights Report: September 2024” from HP researchers, Firstbrook noted that attackers have already begun using generative AI to create remote access Trojans.

“It would be difficult to believe that the attackers are not going to take advantage of using Gen AI to create new malware,” he added. “We are starting to see that.”

Weaponizing Open-Source Tools Amid Deepfakes’ Rise

Beyond malware creation, hackers are now using AI to flood open-source platforms like GitHub with fake utilities. These malicious tools can be inadvertently integrated into legitimate software by unsuspecting developers.

“If a developer is not careful and they download the wrong open-source utility, [their] code could be backdoored before it even hits production,” Firstbrook warned.
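One familiar safeguard against this kind of supply-chain risk is verifying that a downloaded utility matches a checksum published by its maintainers before it enters a build. The sketch below is a minimal, hypothetical Python illustration of that check; the expected digest is a placeholder, not a value tied to any real project.

```python
import hashlib
import sys

# Placeholder digest: in practice this would come from the project's signed
# release notes or manifest, not from the same place the file was downloaded.
EXPECTED_SHA256 = "0" * 64


def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    path = sys.argv[1]
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        print(f"Checksum mismatch for {path}: got {actual}")
        sys.exit(1)
    print(f"{path} matches the expected checksum.")
```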

The pace at which generative AI lets attackers churn out these fake utilities makes it harder for code repositories to respond.

“It’s a cat-and-mouse game, and the Gen AI enables them to be faster at getting these utilities out there,” he said.

Meanwhile, deepfake attacks remain relatively rare but are growing in visibility. According to Gartner, 28% of organizations reported deepfake audio attacks, 21% experienced video-based incidents, and 19% encountered media designed to bypass biometric checks.

Despite these figures, only 5% of organizations suffered actual losses of money or intellectual property.
