
AI – The Twin-Edged Sword Shaping the Future of Cybersecurity

Almost all new inventions that promise to revolutionize a field or remedy a desperate situation carry dangers that bear examination. Take the case of how the aviation industry struggled to shake off apprehensions about safety after the crashes of its early years following the Wright brothers; decades would pass before the industry reached maturity and air travel acquired its reputation as the safest means of transport. In recent times, the life-saving Covid-19 vaccines, touted as the answer to the virulent respiratory virus ravaging the world, met with their fair share of consternation over efficacy and safety as reports of side effects began to surface.

Artificial Intelligence (AI) is no different. Initially looked upon as the answer to cybersecurity problems, this disruptive technology is being increasingly viewed as a double-edged sword, as scamsters deploy it adroitly to automate and potentiate their attacks.

Debates over its deployment have begun, and despite the benefits it is providing to a number of industries, the jury is still out.

After the thrill is gone

Defined by Gartner (1) as the ‘application of advanced analysis and logic-based techniques, including machine learning (ML), to interpret events, support and automate decisions and to take actions’, AI combines probability and logic to estimate how likely an event is to occur, with no human intervention in tasks that human minds would otherwise find hard to manage at high speed. Simplilearn (2) defines it a little more succinctly as the ‘ability of a computer to think and learn on its own; a simulation of human intelligence (hence, artificial) into machines to do things that we would normally rely on humans for.’

Perfect, right? Not quite.

Almost immediately after industries and organizations began deploying AI to improve their cybersecurity postures, cybercriminals too jumped on the bandwagon, harnessing the same power to scale up their attacks.

No sooner were its praises as a game-changer being sung than naysayers in the cyberworld sounded alarm bells, as cases of AI-generated hacks emerged. Suddenly the thrill seemed to have gone, and cyber experts realized they had a twin-edged sword on their hands.

The Case for AI

AI is synonymous with the increasingly digital world we live in – a world of connectivity, online presence, automation, data overload, and cyber challenges. Its ability to handle tasks humans would find impossible makes it a must-have for organizations and individuals looking to improve their processes, provide 24/7 coverage (3), save time, reduce costs, substantially reduce errors, stave off digital-world challenges, and make more informed decisions. AI has the potential to greatly enhance our cybersecurity efforts in the following ways:

  • Threat detection – using its ability to process humongous amounts of data in real time, allowing for more effective and efficient threat detection by identifying patterns and anomalies, and automating or prompting responses before threats materialize
  • Malware detection and vulnerability management – using its ability to recognize suspicious behavioral patterns in system software
  • User behavior analysis – using its ability to continually authenticate and monitor user presence and activity, while taking action to block unauthorized access. AI-driven behavioral biometrics (7) are helping prevent system breaches, reduce insider threats, and improve overall security
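The anomaly-spotting idea behind AI-driven threat detection can be reduced to a toy sketch. The example below uses a simple z-score test over hypothetical hourly failed-login counts; it is a minimal stand-in for the far richer statistical and ML models real products use, and the data and function names are illustrative, not taken from any particular tool:

```python
# Minimal sketch of anomaly-based threat detection: flag data points
# that sit far outside the historical norm. All data is hypothetical.
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Return indices whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:          # perfectly flat signal: nothing stands out
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# 23 quiet hours, then a burst of failed logins in the final hour.
hourly_failed_logins = [2, 3, 1, 2, 4, 3, 2, 1, 2, 3, 2, 1,
                        2, 3, 2, 4, 1, 2, 3, 2, 1, 2, 3, 90]
print(find_anomalies(hourly_failed_logins))  # → [23]
```

A real system would learn baselines per user and per asset and update them continuously; the point here is only that "identifying patterns and anomalies" means comparing new events against an established norm at machine speed.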

The flip side

AI, however, is not all roses – as cybercriminals soon proved by exploiting its ability to process patterns, manage data, analyze vulnerabilities, and recommend entry points to create highly potent and successful attacks. The 2020 USD 35 million scam (8) on a bank in Hong Kong, involving the cloned voice of a director, serves as a ready reference for the power of AI being put to use by cybercriminals. AI-based media cloning software is making news the world over, with many companies now offering their products on a commercial scale. In recent times, the debate has been joined by the Open Source technology movement (6) around AI-driven Large Language Models (LLMs). Going further, ChatGPT, the AI-enabled chatbot, is now in the news as European nations come together to frame policies for its use amid concerns about data privacy and cheating. (9)

AI has come under scrutiny for (3):

  • Data privacy concerns – in the event that large databases are compromised and confidential or sensitive data is exfiltrated for malicious purposes
  • Spawning cybercriminal activity – creating an arms race of sorts as cybersecurity experts and cybercriminals work to outsmart each other
  • Bias and discrimination – arising from the manner in which AI systems are created and programmed to process data
  • Adversarial attacks (4) – involving manipulation of data to deceive AI systems, resulting in false negatives (where no alert is raised for a real threat) and false positives (where an alert is raised or action initiated for a non-threat). The former can result in a cybersecurity crisis, while the latter wastes time and resources
  • Lacking creativity – in that it may be predictable and unable to come up with innovative solutions the way humans can
  • Heavy investment – involved in setting it up and maintaining it
  • Reducing employment – in that deploying it can mean a significant reduction in manpower, given its potential to take on many human tasks
  • Ethical dilemmas – in the manner in which it takes actions or proposes solutions, especially in medical or emergency situations, which may differ from the decisions a human would have taken
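The false-negative/false-positive distinction above is easy to make concrete. This minimal sketch (the verdict lists are made up for illustration) tallies the four possible outcomes when a hypothetical binary threat detector is compared against ground truth:

```python
# Tally the four outcomes of a binary threat detector against ground
# truth. Both lists are hypothetical; True means "threat" / "alert".
def confusion_counts(truth, predicted):
    tp = sum(1 for t, p in zip(truth, predicted) if t and p)          # alert on a real threat
    tn = sum(1 for t, p in zip(truth, predicted) if not t and not p)  # correctly quiet
    fp = sum(1 for t, p in zip(truth, predicted) if not t and p)      # alert, but no threat
    fn = sum(1 for t, p in zip(truth, predicted) if t and not p)      # real threat, no alert
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}

truth     = [True, False, True,  False, False, True]
predicted = [True, True,  False, False, False, True]
print(confusion_counts(truth, predicted))
# → {'tp': 2, 'tn': 2, 'fp': 1, 'fn': 1}
```

The one false negative here is the dangerous case – a real threat that slipped through – while the false positive merely burns analyst time, mirroring the asymmetry described in the bullet on adversarial attacks.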

Final Words

Going by the above, the arguments seem overwhelmingly stacked against AI. Yet it is interesting to read what Deloitte has to say in its 2022 study (5) of global leaders: 94% of respondents firmly believed AI was critical to their organization’s success, though approximately 50% also felt that challenges involving executive commitment and managing risk remain. As many as 79% of respondents said they had deployed up to three types of AI (up from 62% the previous year), though many think they have yet to realize the benefits. The study nevertheless notes that the market continues to expand.

Going forward, organizations will need to take a long, hard look inward, evaluate the pros and cons inherent in AI systems, and decide how best to implement and maintain them.

A double-edged sword is, after all, still very much a sword! 

Aurora can provide organizations with innovative technology to meet their AI goals by offering a wide range of cybersecurity solutions and services, including Plurilock’s DEFEND.



Call us at 888-282-0696 to learn more about how Aurora can help your organization with IT, consulting, compliance, assessments, managed services, or cybersecurity needs.
