Introduction
In 2019, when the head of a UK energy firm (1) received an urgent call from the chief executive of the company's German parent, instructing him to immediately transfer EUR 220,000 to a Hungarian supplier, he had no reason to doubt the authenticity of the call. After all, such interactions at his level were common, and there was no mistaking his superior's distinctive German accent and melodious tones. The head of the UK branch was, however, badly mistaken: it was not the German CEO speaking, but an AI-generated voice deepfake used to mount an impersonation phishing attack.
One year later, in what is considered the largest deepfake attack to date, an impersonation attack (11) using 'deep voice' technology led a Hong Kong bank manager to transfer a staggering USD 35 million.
Deriving their name from 'deep learning', a branch of AI, deepfakes are taking the world by storm and are literally being used to 'put words into the mouths' of others.
Understanding Deepfake Technology
Gartner (4) defines deepfakes as audio, images, and videos that appear real but are actually AI-generated. Often created with a view to cashing in on unsuspecting victims, the technology has its genesis in a culture of disinformation and 'truth decay'. Forbes (1) calls it the latest development in the ongoing war between business and counterfeiting.
Cyber Magazine (6) defines a deepfake as synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Deepfakes leverage deep learning, a subset of AI, to manipulate visual and audio content. At work are generative neural network architectures such as autoencoders and generative adversarial networks (GANs).
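To make the GAN idea concrete, here is a minimal, self-contained sketch. The choice of PyTorch is an assumption (the cited sources do not prescribe a framework), and real deepfake generators are vastly larger and operate on faces and voices rather than the toy vectors used here. A generator learns to produce synthetic samples while a discriminator learns to tell them apart from real ones, and the two are trained against each other until the fakes become hard to distinguish.

```python
# Minimal GAN skeleton (illustrative only; real deepfake models are far larger
# and operate on faces or voices rather than toy vectors).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a synthetic sample ("the fake")
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# Discriminator: scores how likely a sample is to be real
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)          # stand-in for real training media
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # 1) Train the discriminator to separate real from fake
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

It is this adversarial pressure that makes GAN output so convincing, and it is also what makes detection an arms race: any improvement in a discriminator can, in principle, be trained against.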
Based on the areas they target, threats can be categorized as:
- societal (stoking social unrest and political polarization)
- legal (falsifying electronic evidence)
- personal (harassment and bullying, non-consensual pornography, and online child exploitation)
- traditional cybersecurity (extortion, fraud, and manipulation of financial markets).
Alarming trends
The technology has come a long way since the early attacks of 2019. Advanced, easily accessible tools have made it a scamster's paradise. Corporate executives are particularly vulnerable, given the sensitive positions they hold. The figures for attacks perpetrated and material generated speak for themselves.
The World Economic Forum (7) reports that the number of deepfake videos online is growing at an annual rate of 900%, and VMware finds that two out of three defenders have seen malicious deepfakes used as part of an attack, a 13% increase over the previous year.
Cyber Magazine (6) estimates that attacks have increased by almost 43% since 2019. And Gartner (12), in its predictions for 2023, says that 20% of successful account-takeover attacks will use deepfakes to socially engineer users into handing over sensitive data or moving money into criminal accounts.
With the potential to disrupt life as we know it, particularly where impersonation leads to identity theft and misuse, sectors such as travel, finance, and healthcare are on the cusp of a crisis.
Tackling deepfakes
Though the levels of mimicry achieved are often high, experts believe that someone close to the person being impersonated can see through the attempt by closely observing deviations from known mannerisms, delivery, and intonation, as well as lip-sync discrepancies in fabricated videos. With the scope for deepfakes likely to grow further, and with online meetings now the new normal, a new breed of identity professionals skilled in the art of 'sniffing out' impersonations is starting to be in demand.
Forbes (2) lists some measures that can be taken to counter the threat, including paying attention to social channels, conducting self-assessments, registering organizational trademarks, and keeping legal action open as an option.
VentureBeat (7) reiterates the importance of security awareness training for employees, in the same way that employees are 'taught to avoid clicking on web links'.
The best answer to tackling deepfakes, however, may lie with the very technology that helped create them in the first place. Cyber experts are convinced that the way forward is to pit defensive AI against the adversarial AI used to generate deepfakes. Alongside AI and ML, which arguably hold the greatest promise in tackling this menace, blockchain technology (9), commonly used in the cryptocurrency world, is also being mooted as a plausible answer to the problem.
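As a rough illustration of what 'defensive AI' can look like in practice, the sketch below trains a small convolutional classifier to label individual video frames as real or fake. This is a generic supervised-detection approach, not the method of any tool named in this article; the network size, input shape, and toy data are assumptions made purely for illustration.

```python
# Illustrative deepfake-frame detector: a small CNN that classifies frames as
# real (0) or fake (1). Production detectors use much deeper networks and large
# labelled datasets of real and manipulated media (assumption, not from the article).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                      # logit: > 0 means "likely fake"
)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, H, W) pixel tensors; labels: (batch, 1), 0 = real, 1 = fake."""
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Toy usage: random tensors standing in for a batch of labelled frames
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()
print(training_step(frames, labels))
```

The catch, as the outlook below notes, is that any such classifier can itself become the adversary a new generation of fakes is trained to defeat.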
Outlook
The proliferation of deepfakes has been met with a measured response from tech giants such as Google, Microsoft, and Meta, which have not only condemned them openly and taken tough hosting stances against them but have also begun developing tools to recognize them. Microsoft has released its Video Authenticator, and Adobe has released a tool for validating content edited with its widely used Photoshop software. Governments, too, have reacted strongly to the threat. In the USA (14), some states have banned certain kinds of deepfakes, including pornographic ones. Copyright laws are being invoked to quell impersonation attempts. And legislation is around the corner, with the European Union leading the way with its forthcoming AI Act and the US looking set to act decisively to counter the menace.
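Provenance tooling of this kind commonly boils down to fingerprinting content when it is published and re-verifying that fingerprint before the content is trusted. The sketch below uses only Python's standard library; the side-car manifest format is hypothetical and is not Microsoft's or Adobe's scheme, and production systems additionally sign the record cryptographically or anchor it on a blockchain.

```python
# Simplified content-provenance check: hash a media file at publication time
# and verify the hash later. Real systems also sign and embed the manifest;
# that step is omitted in this sketch.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a media file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def publish(path: Path, manifest: Path) -> None:
    """Record the file's hash in a side-car manifest when the content is published."""
    manifest.write_text(json.dumps({"file": path.name, "sha256": fingerprint(path)}))

def verify(path: Path, manifest: Path) -> bool:
    """Return True only if the file still matches its recorded fingerprint."""
    record = json.loads(manifest.read_text())
    return record["sha256"] == fingerprint(path)
```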
Yet the odds are heavily stacked in favour of deepfake technology, which seems to be developing faster than detection technology. Experts opine that the next five to ten years should see the creation of even more compelling fakes that are indistinguishable from reality. The seemingly mismatched fight is not helped by the fact that a significant number of netizens have yet to grasp the precarious situation we are in: 2019 research from iProov (13) showed that 72% of people were still unaware of deepfakes.
On an organizational front, this does not augur well for CISOs, whose bag of woes continues to spill over. For industry and society at large, the threat is palpable. With the wherewithal to negate the threat completely still lacking, a slew of measures spanning awareness, policy-making, and technology will be called for if the inferno-in-the-making is to be controlled.
References
(1) Deepfakes – The Good, The Bad, And The Ugly (forbes.com)
(2) Deepfakes: 7 Ways To Guard Against This New Form Of Disinformation (forbes.com)
(3) Overview Of How To Create Deepfakes – It’s Scarily Simple (forbes.com)
(4) https://blogs.gartner.com/darin-stewart/2021/06/30/executives-need-a-deep-fake-defense-strategy/
(5) https://blogs.gartner.com/andrew_white/2021/01/12/our-top-data-and-analytics-predicts-for-2021/
(6) Deepfakes to become a growing trend in 2022 says IntSights | Cyber Magazine
(7) Why deepfake phishing is a disaster waiting to happen (venturebeat.com)
(8) Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol – SecurityWeek
(9) Deepfake Technology Is a Rising Cyberthreat – Cybersecurity Magazine (cybersecurity-magazine.com)
(10) Deepfakes – The Danger Of Artificial Intelligence That We Will Learn To Manage Better (forbes.com)
(11) Social Engineering at work – Aurorait.com
(12) https://blogs.gartner.com/andrew_white/2021/01/12/our-top-data-and-analytics-predicts-for-2021/
(13) Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol – SecurityWeek
(14) AI-generated deepfakes are moving faster than policy can : NPR