
A Concerted Plan for AI Risk Mitigation

In 2017, with Christmas around the corner, fans of OWN, Oprah Winfrey's television network, received an Instagram message from her asking them not to respond to a scam that promised to give away USD 5,000 to her first 1 million followers, warning that it might compromise their personal information. It marked one of the earliest examples of the impersonation scams that would soon be witnessed with alarming frequency. Six years on, scams driven by Generative AI continue to impersonate, mislead, defraud, and bankrupt netizens in all walks of life. Where there was once trust in its ability to generate results that served our ends, we now ask what, and how much, of those results we can trust. Yet even more ominous is the nagging thought that we might dismiss something in those results that is very real and tangible, simply because we have been led to believe it is fake or untrustworthy.

The nascent nature of Generative AI, however, compels us to adopt a more studied approach rather than rushing headlong into sounding a red alert about this misplaced trust. While we are certain that there is risk – and there are cases that make the point – we are still attempting to fully understand the risks it poses.

This document provides a three-pronged approach to mitigating that risk via a culture of awareness and learning, governance and monitoring, and tools that harness the defensive powers of AI itself.

#1. Create a culture of awareness and learning. 

As with most concerns, embedding awareness, knowledge and training in Generative AI across the AI ecosystem is a good first step towards risk mitigation. A better-informed and better-equipped user – whether a customer, vendor, employee, or other stakeholder – is always desirable. Recent studies suggest there is already a fair amount of awareness of both the benefits and the risks. A recent survey showed that as many as 83% of decision-making respondents in organizations acknowledged Generative AI was a 'strategic priority'. KPMG's 2023 study (1) of individual respondents across 17 countries showed that many see AI as a mixed bag – 85% understood the benefits it confers, while 73% perceived inherent risks.

Some of the touchpoints going forward:

i. Understanding what’s good and what’s bad

Generative AI is a USD 120 billion industry growing at a phenomenal 38% CAGR, and the World Economic Forum (WEF) (2) forecasts it will increase global GDP by USD 15.7 trillion by 2030. Today it drives decision-making and provides solutions for chatbots, content, images, videos, 3D modelling, automation and more in every field. Many of its applications – such as the diagnoses it is producing in the medical sector – pair the ability to process data with remarkably accurate results in the shortest of time frames, making it almost indispensable. But its disruptive side – AI-supported phishing, AI-supported password guessing, human impersonation via voice cloning and deepfakes, social engineering scams, identity theft and email compromise by malicious actors, or simply poorly orchestrated data sets to begin with – has made it a twin-edged sword that is shaping the future of cybersecurity (3).

ii. Evaluating the risk quotient

With the inroads it has made into every sphere and walk of life, Generative AI is compelling organizations and individuals to evaluate their risk posture very critically. There is no doubting or discounting the phenomenal benefits that can accrue from investing in it. Yet, as we have seen, it can have devastating consequences if it is improperly set up, misread, or used without restraint. Worse still, deploying it invites the attention of bad actors, who could just as easily leverage weaknesses in one's cybersecurity setup to wreak as much damage, if not more.

iii. Addressing ethical concerns

Amongst the biggest touchpoints in the AI debate that organizations will need to address is the matter of its ethical use. AI is a vital tool in advancing strategy, but it carries a fair share of risk. These two intrinsic characteristics – strategic value and risk – place it squarely in the responsibility matrix of the organization's board of directors.

Organizations will therefore have to set the right tone and guiding principles for an ethical framework for AI adoption that can be operationalized to achieve responsible outcomes.

The most alarming ethical concerns for an organizational AI framework are:

  • Possible unemployment as AI-enabled systems replace human workers in many fields
  • Biased and discriminatory decisions based on its recommendations
  • Compromised privacy
  • Legal, compliance and intellectual property violations
  • Desensitized recommendations that violate human rights and emotions
  • Spread of false, inflammatory, and sensitive information
  • Misuse by malicious actors for fraud, identity theft and other scams
  • Misuse for state-sponsored terrorism harming human lives and critical infrastructure

iv. The case for Responsible AI (RAI) 

The call for RAI has been gaining ground in recent months. Termed by TechTarget (4) 'an approach to developing and deploying Artificial Intelligence from both an ethical and legal point of view, with the goal of deploying it in a safe, transparent, trustworthy and ethical fashion', RAI is a part of the AI governance that an organization should follow. The National Institute of Standards and Technology (NIST) offers seven key principles for an RAI framework:

  • Accountable and transparent
  • Explainable and interpretable
  • Fair, with harmful bias managed
  • Privacy-enhanced
  • Secure and resilient
  • Valid and reliable
  • Safe

A key consideration for much of the above lies in the effective setup and creation of the machine learning (ML) models and the data sets that power them.

v. Preparing to strike out

As the push for RAI gathers momentum, organizations can look to ensure that they:

  • Invest in AI models that are transparent and explainable
  • Implement best-practice machine learning tools and resources
  • Deploy responsible AI tools to inspect their AI models
  • Create a culture of support and appoint gender- and racially diverse teams to oversee ethical AI concepts
  • Identify metrics for training and monitoring that help keep errors, false positives and biases to a minimum (a minimal sketch follows this list)
  • Perform regular bias tests and record the findings
  • Inculcate an environment of learning from RAI findings
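
To ground the metrics point above, the following is a minimal sketch in Python of the kind of bias check such a programme might run. It is illustrative only: the group labels, predictions and the 5% tolerance threshold are hypothetical placeholders, and a real RAI review would use metrics and thresholds agreed for its own context.

```python
# Minimal, illustrative bias check: compare false-positive rates across groups.
# All data, group labels and the tolerance threshold are hypothetical placeholders.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): how often true negatives are wrongly flagged."""
    negatives = (y_true == 0)
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

def bias_report(y_true, y_pred, groups, max_gap=0.05):
    """Report per-group false-positive rates and flag gaps above the tolerance."""
    rates = {g: false_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return {"fpr_by_group": rates, "fpr_gap": gap, "within_tolerance": gap <= max_gap}

# Hypothetical predictions from a model, split across two demographic groups.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(bias_report(y_true, y_pred, groups))  # record the gap in the RAI log
```

Checks like this are most useful when run on a schedule and recorded, so that drift in error rates or bias between groups is caught and learned from, as the list above recommends.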

Happily, the charge to ensure RAI is being championed by industry leaders. TechTarget cites the initiatives of Microsoft, IBM (which also boasts of its own ethics board) and FICO, whose governance policies, interaction guidelines, fairness checklists, data sheet templates and toolkits, explainable models and research are helping drive RAI in their organizations. Other organizations too have embarked on a series of initiatives, as AIethicist.org (5) tells us.

#2. Establish AI governance and monitoring

The second step towards AI risk mitigation is to continue with legislation and administrative measures. Legislation ensures compliance; more importantly, it conveys a sense of assurance that the administration has your back if you comply. Happily, here too we are witnessing a good response where legislative measures are concerned. With incidents mounting and perceived threats on the increase, regulators in the US and Europe (6) have moved definitively to address the situation.

NIST and the Cybersecurity and Infrastructure Security Agency (CISA) have led the way in recent times, providing roadmaps, recommendations, and regulations for the effective use of AI in organizations. In January 2023, NIST (7) released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies. In parallel, it released a companion playbook to help organizations navigate and apply the framework.

In August 2023, CISA released its digital roadmap for the nation, addressing amongst other things the need for standard engineering practices, Secure-by-Design software (8) and the integration of security operations practices into AI engineering.

While regulations will play an important role, collaboration between the administration and the industry will be key going forward – a synergy between policy makers and industry experts, and a readiness on the part of all players to adopt industry best practices.

#3. Harness the defensive power of AI

The third step towards AI risk mitigation is the deployment of industry-level tools. Fortunately, we do not have to look far for them. As threats proliferate and malicious actors step up their attacks on the AI citadel, an age-old maxim comes to mind: sometimes you need to fight fire with fire!

That would mean using AI's own immense potential, duly aligned of course with checks and balances, to stem the menace. These measures would include (9) deploying unbiased data sets for large language models and training AI systems to detect malware, recognize attack patterns, battle bots, predict breaches, provide predictive intelligence, proactively initiate remediation, and deliver best-in-class attack surface and endpoint protection.
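
As a concrete, if simplified, illustration of one such defensive use – training an AI system to recognize attack patterns – here is a minimal sketch in Python that assumes scikit-learn is available. The session features and traffic records are hypothetical placeholders; a real deployment would draw on far richer telemetry and tuning.

```python
# Minimal sketch of anomaly detection on session telemetry (not a production defense).
# Assumes scikit-learn is installed; features and records are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per session: [requests_per_minute, failed_logins, mb_uploaded]
known_good_sessions = np.array([
    [12, 0, 1.2], [15, 1, 0.8], [10, 0, 1.0], [14, 0, 1.5], [11, 1, 0.9],
])
new_sessions = np.array([
    [13, 0, 1.1],     # looks routine
    [220, 35, 48.0],  # burst of failed logins and heavy egress: a likely attack pattern
])

# Train only on traffic believed to be benign, then score unseen sessions.
detector = IsolationForest(contamination=0.1, random_state=42).fit(known_good_sessions)
for session, label in zip(new_sessions, detector.predict(new_sessions)):  # +1 normal, -1 anomalous
    print(session, "ANOMALOUS" if label == -1 else "normal")
```

The design choice here is deliberate: the detector learns only what "normal" looks like, so novel attack patterns can still be flagged even when no labelled examples of them exist.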

Final words 

As Generative AI continues to gather momentum, we are certain to see attacks that use it register similar growth. It augurs well that there is a clear intent on the part of the administration and the industry to mitigate the inherent risk. A clear perspective on where this versatile but volatile technology is headed is the need of the hour; proactive strategies will follow.

Time, collaboration, and proactive action will be key. But if the resilience – and track record – of the cybersecurity industry is anything to go by, we should soon see the advent of an RAI-invested industry, fully seized of the pros and cons of Generative AI and well equipped with risk mitigation measures to put the challenges it poses well behind it.

Award-winning cybersecurity company Plurilock stays invested in your Generative AI journey with its all-new AI PromptGuard, which makes your use of Generative AI safer rather than blocking it.

For more information on how this pathbreaking solution works to screen and redact your confidential data, call us at +1 (888) 282-0696 (USA West) / +1 (908) 231-7777 (USA East) or email us at sales@aurorait.com to book a demo.

References: 

  1. Trust in Artificial Intelligence | Global Insights 2023 – KPMG Australia
  2. Here’s why organizations should commit to RAI | World Economic Forum (weforum.org)
  3. AI – The Twin-Edged Sword Shaping the Future of Cybersecurity | Aurora (aurorait.com)
  4. What is RAI? | Definition from TechTarget
  5. AI NGOs, Research Organizations, Ethical AI Organizations | AI Ethicist
  6. Managing the Risks of Generative AI and LLMs Through Technological Advances | Aurora (aurorait.com)
  7. NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence | NIST
  8. Software Must Be Secure by Design, and Artificial Intelligence Is No Exception | CISA
  9. Artificial Intelligence in Cybersecurity (computer.org)


