Novel social engineering attacks surge by 135 percent, driven by generative AI
New research from cybersecurity AI company Darktrace shows a 135 percent increase in novel social engineering attacks: emails that use sophisticated linguistic techniques, such as increased text volume, punctuation, and sentence length, while containing no links or attachments.
This trend suggests that generative AI tools, such as ChatGPT, are enabling threat actors to craft sophisticated and targeted attacks at speed and at scale.
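Darktrace has not published the exact features behind these figures, but as a rough illustration of how the linguistic signals it describes (text volume, punctuation, sentence length, absence of links) could be quantified, a hypothetical Python sketch might look like this:

```python
import re

def linguistic_features(body: str) -> dict:
    # Hypothetical feature extractor, for illustration only; this is
    # not Darktrace's actual method. It quantifies the signals the
    # report mentions: text volume, punctuation, sentence length,
    # and the presence of links.
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "text_volume": len(words),  # longer bodies score higher
        "punctuation_count": sum(ch in ",;:!?.()'\"" for ch in body),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "contains_link": bool(re.search(r"https?://", body)),
    }

# A terse classic scam vs. a longer, fluent AI-style lure
print(linguistic_features("Click here now: http://evil.example"))
print(linguistic_features(
    "Dear colleague, following our discussion last quarter, I have "
    "prepared the revised projections; could you review the figures "
    "and confirm the totals before Friday's board meeting?"
))
```

On the report's framing, the second message would register as more suspicious under the new pattern (high volume, heavy punctuation, long sentences, no link), even though it is exactly the kind of email traditional training teaches employees to trust.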
The study of over 6,700 employees across the UK, US, France, Germany, Australia, and the Netherlands finds 82 percent are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
It also finds that 30 percent of global employees have fallen for a fraudulent email or text in the past. The top three characteristics that make employees think an email is a phishing attack are: being invited to click a link or open an attachment (68 percent), an unknown sender or unexpected content (61 percent), and poor use of spelling and grammar (61 percent).
70 percent of employees have noticed an increase in the frequency of scam emails and texts in the last six months, and 79 percent say their company’s spam filters incorrectly stop important legitimate emails from reaching their inbox.
In addition, 87 percent are concerned about the amount of personal information available about them online that could be used in phishing and other email scams.
Max Heinemeyer, chief product officer at Darktrace, says:
Email security has challenged cyber defenders for almost three decades. Since its introduction, many additional communication tools have been added to our working days but for most industries and employees, email remains a staple part of everyone’s job. As such, it remains one of the most useful tools for attackers looking to lure victims into divulging confidential information through communication that exploits trust, blackmails, or promises reward so that threat actors can get to the heart of critical systems, every single day.
The email threat landscape is evolving. For 30 years security teams have trained employees to spot spelling mistakes, suspicious links, and attachments. While we always want to maintain a defense-in-depth strategy, there are increasingly diminishing returns in entrusting employees with spotting malicious emails. At a time when readily available technology allows the rapid creation of believable, personalized, novel, and linguistically complex phishing emails, we find humans even more ill-equipped than ever to verify the legitimacy of ‘bad’ emails. Defensive technology needs to keep pace with the changes in the email threat landscape; we have to arm organizations with AI that can do that.
You can read more on the Darktrace site. The company is also launching an upgrade to its email protection product, Darktrace/Email, to help guard against the risks posed by generative AI created attacks. You can find out about that here.
Image credit: tashatuvango/depositphotos.com