A novel artificial intelligence technique is making phishing attacks, already the weapon of choice for cybercriminals, even more effective.
In the past, phishing emails were sent out en masse using the same template, making it easier for fraud detection systems to identify patterns among these blanket messages. Now, a technique called polymorphic phishing incorporates AI to randomize components of fraudulent emails—such as sender names, subject lines, and even the content.
This allows bad actors to launch customized email campaigns that can bypass many security measures. As with many AI-powered fraud mechanisms, polymorphic phishing attacks have rapidly gained traction. According to SecurityWeek, at least one polymorphic feature was present in 76% of all phishing attacks last year.
“Phishing attacks remain the leading way cybercriminals breach networks and systems, infect devices—both personal and corporate—with ransomware, and coerce employees and consumers to reveal and leak sensitive personal information and corporate intellectual property,” said Tracy Goldberg, Director of Cybersecurity at Javelin Strategy & Research.
“DNS security features, used to block malicious websites and web-based attacks, and spam blocking, which traps suspicious emails based on domain, keywords, and email server rules, are being circumvented by these emerging polymorphic phishing attacks,” she said. “That means spam blockers and DNS filtering are increasingly less effective.”
Innovating New Fraud Vectors
This new spin on phishing is part of a broader trend: through technology and social engineering, cybercriminals have gained an edge over organizations. This is especially true in the financial services industry, where longstanding compliance and risk concerns have made institutions slower to adopt new technologies.
Meanwhile, cybercriminals face no such constraints. They’ve been quick to experiment with emerging tech like AI, developing new and more effective methods of attack. One result: novel fraud vectors, such as AI agents developed to carry out attacks autonomously.
Known and Trusted Users
To combat these innovations, organizations must look beyond their current limitations. They will also need to adopt and integrate emerging technologies capable of identifying these threats more effectively.
“Verifying the authenticity of senders through protocols like Domain-based Message Authentication, Reporting and Conformance (DMARC) and DomainKeys Identified Mail (DKIM) remains among the best tactics to stop phishing and spam,” Goldberg said. “Additionally, AI can be used to help email security by relying on defenses that analyze emails to detect content patterns that suggest the email has been automated, rather than written by a human.”
“That, of course, increases the risk of so-called ‘false positives,’ meaning legitimate emails that have been sent en masse—such as marketing emails or those sent through mail merge—are more likely to get blocked,” she said. “Companies will soon be forced to lean toward encrypted email security that limits email access to known and trusted users.”
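As a concrete illustration of the sender-verification step Goldberg describes, the short sketch below checks whether a sending domain publishes a DMARC policy in DNS. It is an illustrative example only, not drawn from the article or from any specific vendor tool; it assumes the open-source dnspython library, and the domain shown is a placeholder.

```python
# Illustrative sketch: look up a sending domain's DMARC policy via DNS.
# Assumes the third-party dnspython package (pip install dnspython);
# "example.com" is a placeholder, not a domain named in the article.
import dns.resolver


def fetch_dmarc_policy(domain: str):
    """Return the domain's raw DMARC TXT record, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published at _dmarc.<domain>
    for rdata in answers:
        # TXT records can be split into multiple character strings; rejoin them.
        record = b"".join(rdata.strings).decode("utf-8", errors="replace")
        if record.lower().startswith("v=dmarc1"):
            return record
    return None


if __name__ == "__main__":
    policy = fetch_dmarc_policy("example.com")
    if policy:
        # e.g. "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
        print("DMARC policy found:", policy)
    else:
        print("No DMARC policy found; the domain is easier to spoof.")
```

In practice, DMARC evaluation is performed by the receiving mail server rather than by ad hoc scripts, and a full check also validates DKIM signatures and SPF alignment; the sketch shows only the DNS lookup that underpins the policy.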