
Email Scams Reimagined: The Rise of AI-Driven Cyber Deceptions

Almost nobody who has spent time on the internet has escaped it. Since the early 1980s, the Nigerian Prince email scam (10) has asked recipients to remit a small sum of money, or to click a link and reveal personal information including banking details, in return for a huge windfall. It has not just kept doing the rounds; incredibly, it has kept tricking users into complying.

Given the success of this ‘primitive-by-modern-standards’ modus operandi, one would be forgiven for thinking scammers had little reason to change their methods. After all, users were still being tricked into complying, even as most became familiar with the language used in these emails.

But change has taken place, thanks in no small measure to recent developments like Generative AI, which offer scammers powerful tools for drafting convincing emails with remarkable ease.

The scene is set

It is not hard to see why email is by far the attack vector of choice for scammers. Almost 4.3 billion of the world’s 8.1 billion people use email, with Apple and Gmail heading the email client market, accounting for approximately 28-29% of the traffic. Career firm Zippia (1) estimates that around 347 billion emails are generated each day by individuals and companies, a figure tipped to rise to 376 billion by 2025. A study suggests that of those, roughly 3.4 billion a day, about 1%, are phishing emails (2). The first six months of 2023 alone witnessed a whopping 135% spurt in scam emails and texts (6).

Such humongous numbers alone make email a fertile hunting ground for hackers preying on unsuspecting recipients with carefully crafted emails containing misleading and malicious links. Add to that the human tendency to be lulled into a false sense of security or to fall for fictitious scams, plus a lack of awareness and training, and you have a tsunami in the making on your hands.

On top of this, AI’s highly versatile capabilities, such as natural language processing (NLP), voice-to-text and speech recognition, and the possibility of repeating attacks in ever-new guises, allow scammers to amplify their attacks on unsuspecting victims and coax them into revealing passwords and email credentials.

Research by the FBI shows that ransomware, once the hunting tool of choice for bad actors, has been overtaken by Business Email Compromise (BEC), which is technically easier to implement and does not require traditional coding skills. It is estimated that enterprises in aggregate now lose 51 times more money to BEC attacks than to ransomware (5).

Organizations have the most to fear

Business Email Compromise (BEC) has the potential to devastate organizations, and it is causing sleepless nights for CEOs and CISOs globally. Almost all (98%) of the 300 senior cyber security stakeholders surveyed by Abnormal Security (3), a US-based email security firm, for its report ‘State of Email Security in an AI-Powered World’ expressed concern about the risks posed by ChatGPT, Google Bard, WormGPT, and similar generative AI tools, which can produce highly convincing email attacks from publicly available information.

Between April 2022 and April 2023, Microsoft Threat Intelligence (4) detected and investigated 35 million BEC attempts, an adjusted average of 156,000 attempts daily. Microsoft’s Digital Crimes Unit has observed a 38% increase in BEC between 2019 and 2022.

Successful BEC attacks have cost organizations hundreds of millions of dollars annually, with a high percentage of BEC victims not recovering their losses.

The history of email hacking

Email has a long history of deception and deviation. It has always been a lucrative attack vector, and scammers have adapted to defense mechanisms over the years as cyber security technology evolved. When antivirus solutions blocked earlier attacks, phishing emails with malicious links were deployed instead. When secure email gateways blocked those, along came text-only attacks with seemingly legitimate content, designed to trick unsuspecting victims into actions that reveal sensitive information.

Though email hacks were fairly common in preceding decades, OpenAI’s GPT-3 in 2020 marked the beginning of the use of AI to generate emails. Its ChatGPT bot, described by Forbes (7) as the fastest-growing consumer application in history and released in late November 2022, took email scams to another level altogether in terms of how credible the content is and how easily it can be generated. Email hacking has also evolved in its presentation. Darktrace, a British cyber security firm, says its research (9) shows that malicious emails are now more often crafted to look as if they came from an organization’s IT department, a shift away from the earlier spurious emails that impersonated senior-level executives. The dark web offers sophisticated BEC attack kits that hackers can easily procure and deploy.

How they work

Generative AI is driven by Large Language Models (LLMs) (8) that source real-time information from news outlets, corporate websites, and other sources. Widely available and constantly evolving, LLMs can generate credible content instantly, which makes them highly potent for bad actors engaged in social engineering. The result is content that is fluent, grammatically correct, and highly believable, leaving recipients less likely to be suspicious. Working quickly, chatbots can also spread malicious campaigns at an incredibly fast rate. Where video and audio are needed, AI’s potential is just as striking: as little as three seconds of recorded audio is enough to generate a convincing voice impersonation. Hackers have also mastered the art of cleverly phrasing their prompts to generate exploit and malicious code.

Tackling the menace

Experts are of the opinion that the best way to fight generative AI-driven BEC is to harness the defensive capabilities of AI, using self-learning tools that can detect malicious emails in real time without human intervention or pre-set rules. Behavioral analytics is fast becoming a buzzword in BEC circles: defensive AI built around good knowledge of the organization and its employees can automatically detect malicious mail as it arrives.

It has also been proposed that such defensive AI be used to train security tools to raise red flags automatically when suspicious emails are received.
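To make the idea concrete, the sketch below shows, in Python, how a behavioral baseline might flag anomalous mail. It is purely illustrative and assumes a hypothetical feed of parsed email metadata; the field names, the signals chosen (new sender, unusual send time, unfamiliar display name, payment request), and the scoring weights are all assumptions made for this example, not any vendor's implementation.

# Illustrative behavioral-baseline scorer for incoming email (all field names assumed).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EmailMeta:
    sender: str                # envelope "from" address
    display_name: str          # human-readable display name
    hour_sent: int             # 0-23, hour the message was received
    has_payment_request: bool  # True if the body asks for payment or a wire transfer

class SenderBaseline:
    """Learns what 'normal' looks like for each sender the organization already knows."""

    def __init__(self):
        self.seen_hours = defaultdict(set)   # sender -> hours they usually mail at
        self.known_names = defaultdict(set)  # sender -> display names seen before

    def learn(self, email: EmailMeta) -> None:
        self.seen_hours[email.sender].add(email.hour_sent)
        self.known_names[email.sender].add(email.display_name)

    def score(self, email: EmailMeta) -> int:
        """Higher score = more anomalous; a real system would tune these weights."""
        score = 0
        if email.sender not in self.seen_hours:
            score += 2   # never seen this sender before
        elif email.hour_sent not in self.seen_hours[email.sender]:
            score += 1   # unusual send time for this sender
        if email.display_name not in self.known_names.get(email.sender, set()):
            score += 2   # display name does not match history (spoofing signal)
        if email.has_payment_request:
            score += 2   # classic BEC lure
        return score

# Usage: learn from historical mail, then score new arrivals.
baseline = SenderBaseline()
baseline.learn(EmailMeta("cfo@example.com", "Jane Doe", 9, False))
lookalike = EmailMeta("cfo@examp1e.com", "Jane Doe", 3, True)  # look-alike domain
print(baseline.score(lookalike))  # prints 6 -> worth flagging for review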

The Microsoft Security Blog (4) advises that deploying a secure email solution, building AI-focused cyber security teams, adopting secure payment solutions and gateways, and training employees to identify and respond correctly to anomalous email activity can all serve to stem the menace.

Finally, the last line of defense rests with the employees these emails target: a general awareness of the three main characteristics of scam email. Such messages are almost always received from an unknown sender, include an invitation to click a link or open an attachment, and typically use poor grammar and spelling. Employees should ask themselves three questions before acting on an email they receive (a minimal sketch of how these checks might be automated follows the list):

  • Is the email expected?
  • Is the email from someone not known to the recipient?
  • Is the email asking the recipient to do something unusual or in a hurry?
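For illustration only, those three questions can be encoded as a simple triage helper in Python. The urgency word list, parameter names, and known-senders set below are assumptions made for this sketch rather than part of any product; in practice such checks would live in the mail gateway rather than with the end user.

# Hypothetical triage helper encoding the three questions above (assumptions noted in text).
URGENCY_WORDS = {"urgent", "immediately", "asap", "wire transfer", "gift card"}

def triage_email(sender: str, known_senders: set, was_expected: bool,
                 body: str, has_link_or_attachment: bool) -> list:
    """Return the red flags raised by the three checks; an empty list means none fired."""
    flags = []
    if not was_expected:
        flags.append("email was not expected")
    if sender.lower() not in known_senders:
        flags.append("sender is not known to the recipient")
    if has_link_or_attachment or any(word in body.lower() for word in URGENCY_WORDS):
        flags.append("asks for unusual or hurried action")
    return flags

# Usage: a spoofed, unexpected payment request trips all three checks.
print(triage_email(
    sender="ceo@examp1e.com",
    known_senders={"ceo@example.com"},
    was_expected=False,
    body="Please arrange the wire transfer immediately.",
    has_link_or_attachment=True,
))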

Concluding words

Generative AI is unlikely to go away in the coming years, either as a tool for content generation or as a weapon for malicious purposes. If anything, its popularity and usage will only grow. IBM predicts that 2024 will witness an age of unprecedented deception and identity theft, as attackers further hone both their skills and their attack methods. But there is good news in a Darktrace survey that puts global employees’ concern about the devastating power of generative AI at 82% (9).

And that awareness, combined with properly deployed defensive AI, may well prove to be a silver bullet for the woes that BEC inflicts.

Discover the unstoppable power of DEFEND and PlurilockAI, the ultimate AI-driven tools that crush security threats.

Get in touch with sales@aurorait.com or call (888) 282-0696 to experience the unmatched protection that Aurora, a proud member of the Plurilock family, delivers through these groundbreaking solutions.

References

(1) 75 Incredible Email Statistics [2023]: How Many Emails Are Sent Per Day? – Zippia

(2) Three billion phishing emails are sent every day. But one change could make life much harder for scammers | ZDNET

(3) Generative AI phishing fears realized as model develops “highly convincing” emails in 5 minutes | CSO Online

(4) Cyber Signals: Shifting tactics show surge in business email compromise | Microsoft Security Blog

(5) 10 Business Email Compromise Statistics to Know in 2023 (mailmodo.com)

(6) Gen AI Propelling Surge of Sophisticated Email Attacks (technewsworld.com)

(7) Generative AI: The Latest Weapon In The Cybercriminal Arsenal (forbes.com)

(8) Managing the Risks of Generative AI and LLMs Through Technological Advances | Aurora (aurorait.com)

(9) Generative AI Impact on Email Cyber-Attacks.pdf (website-files.com)

(10) Social Engineering at Work | Aurora (aurorait.com)