
More than half of organizations now rank generative artificial intelligence as their biggest security threat, surpassing stolen credentials. The rise of AI-driven attacks—from deepfakes to hyper-personalized phishing—is upending cybersecurity, with speed and scale overwhelming traditional defenses.
According to The State of Passwordless Identity Assurance, a study from HYPR, generative AI and agentic AI are enabling entirely new forms of attacks, including deepfakes and employee impersonation. The study found that nearly two-thirds of organizations surveyed had already been targeted by personalized phishing emails—AI-generated messages designed to imitate executives—highlighting how quickly these threats are evolving.
Phishing was the most common type of cyberattack organizations faced in the past 12 months, followed by malware and ransomware. These findings align with a study from Cofense, which found that the rate of phishing attacks is accelerating: spam filters flagged one phishing email every 19 seconds in 2025, up from one every 42 seconds the previous year.
Speed Is of the Essence
Nearly 40% of respondents reported experiencing some form of generative AI-related security incident in the past 12 months. Concerns are growing, as 43% of respondents identified AI-driven attacks as the most significant change in cybersecurity over the past year.
Yet too many organizations still react only after the damage is done. Three in five respondents said they had incurred a "hindsight tax," increasing their cybersecurity budgets only after a breach had already occurred.
In the era of AI, that approach is no longer sufficient. AI has increased the scale, speed, and effectiveness of phishing and other cyberattacks. While most identity-based attacks are detected within hours, AI-driven automation allows data to be stolen before human intervention can occur.
Threats from Agentic AI
Another emerging risk, agentic commerce, is making headlines. According to HYPR, automated agents are on track to leak more passwords than people this year, amid growing reports of agents going rogue.
AI security firm Irregular recently conducted a test in which AI agents were instructed to create LinkedIn posts using material from a company’s internal database. The agents evaded anti-hacking protocols and ended up publishing sensitive password information. In another case, AI agents bypassed antivirus software to download files containing malware.
The post Study Finds That AI Is Organizations’ Top Cybersecurity Fear appeared first on PaymentsJournal.