
Managing the Risks of Generative AI and LLMs Through Technological Advances

The RSA Conference in San Francisco in April 2023 brought into sharp focus the increasing role of Artificial Intelligence (AI) in shaping the digital landscape, with particular emphasis on its implications for cybersecurity. Featuring perspectives on both the benefits and risks of AI, the event included presentations from a wide range of experts across the administrative, legislative, and corporate spheres.

With the immense benefits of Generative AI and Large Language Models (LLMs) a given, the focus was on addressing the inherent risks associated with deploying the now widely used tools. Google announced its new AI-powered Google Cloud Security AI Workbench, which takes advantage of advancements in LLMs. SentinelOne announced its AI-powered cybersecurity threat detection platform, while Veracode announced Veracode Fix, which uses Generative AI to recommend fixes for security flaws in code.

What they are

Generative AI is an umbrella term(1) for processes that automatically create something that doesn’t yet exist, based on the data they have been trained on. Generative AI(2) learns from the large amounts of data provided to it via the LLM to generate new, realistic artifacts that reflect the characteristics of the training data. It can produce a variety of novel content, such as images, video, music, speech, text, software code, and product designs, using several techniques, including AI foundation models. Large language models (LLMs)(3) are sets of algorithms for natural language processing. Based on deep learning, they generate output in response to textual instructions (prompts) from the user. They are used for a variety of tasks, including machine translation, question answering, and text generation.

Computerworld(5) defines LLMs as ‘next-word prediction/probability engines’ – machine-learning neural networks trained on input/output data sets. In their native state, before being trained on user-specific data, LLMs are built from huge amounts of data and trillions of parameters drawn from open sources on the internet, including Wikipedia. Training an LLM on the data of one’s choice requires massive, expensive server farms that act as supercomputers.
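To make the ‘next-word prediction’ idea concrete, here is a minimal sketch in Python. It is purely illustrative: the tiny corpus and the bigram counting below stand in for what a real LLM does with deep neural networks and billions of parameters.

```python
from collections import Counter, defaultdict

# Toy illustration of an LLM as a "next-word prediction/probability engine":
# a bigram model that estimates next-word probabilities from a tiny corpus.
# Real LLMs learn these probabilities with deep neural networks and billions
# of parameters, but the core task is the same: predict the likely next token.

corpus = "the model predicts the next word the model learns from data".split()

# Count how often each word follows another in the training text.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Return candidate next words with their estimated probabilities."""
    counts = transitions[word]
    total = sum(counts.values())
    return [(w, round(c / total, 2)) for w, c in counts.most_common()]

print(predict_next("the"))  # [('model', 0.67), ('next', 0.33)]
```

A production model replaces the lookup table with a trained neural network, but the output, a probability distribution over candidate next words, is conceptually the same.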

LLMs constitute the algorithmic starting point for Generative AI chatbots like OpenAI’s ChatGPT and Google’s Bard, to name just two. The near-instant responses these chatbots produce trace back to the trillions of parameters the underlying models draw on.

The pros and cons

Barely nine months since its emergence in November 2022, Generative AI has already made deep inroads into all spheres. Gartner(6) summarizes the benefits of the disruptive new technology for organizations in terms of opportunities to increase revenue, reduce costs, enhance productivity, and better manage risk, calling it ‘a competitive advantage and differentiator for the future.’

Yet despite these benefits, a host of evident drawbacks is being cited to suggest a cautious and prudent approach to the use of Generative AI and to investment in LLMs. These drawbacks include:

  • Lack of predictability despite the huge investments and training efforts involved.
  • Lack of accuracy and consistency in results produced.
  • Bias in results produced that often necessitates close scrutiny to ensure compliance with established policies, and industry and ethical standards.
  • Intellectual property (IP) and copyright violation possibilities due to the current lack of governance and protection assurances concerning data.
  • Cybersecurity compromise and cyber fraud due to the malicious use of Generative AI by scammers, especially through social engineering.
  • High costs of setup and training, including the server farms and superprocessors required.
  • Impacted sustainability goals, given the amounts of electricity and computing resources needed to train and sustain LLMs.

A judicious approach

Reading the early success of Generative AI as a sign of things to come, tech giants quickly invested heavily in the new technology. Microsoft pumped USD 10 billion into OpenAI and integrated ChatGPT into its Bing search engine. Google and Meta released their own LLM-based bots, Bard and LLaMA. But even as the entire digital world thrilled at the joyride, an early eye-opener soon had discerning users of the tools checking their rearview mirrors for a looming menace.

That eye-opener was reported by Payments Journal(7): a scammer leveraged the latent power of Generative AI to foist a deepfake video call on an unsuspecting individual. Impersonating a friend over WeChat, the scammer convinced a Mongolian national to transfer roughly $600,000 to a bank account in Inner Mongolia. The deception was noticed only much later, when the victim called the friend to confirm the transfer. The incident, one of a spate of impersonation attacks, seemed to sound the call for a judicious approach to the use of Generative AI.

A closer look at the inherent risks of Generative AI reinforces the point. Perhaps the most significant of these lies in its quintessentially predictive nature and its inability to truly understand language or gauge intentions and emotions(4). Experts cite disturbing results generated in response to queries, leading to the opinion that the use of Generative AI needs to be carefully and judiciously regulated.

Regulations and legislation

With incidents mounting and perceived threats on the increase, the call for a more judicious approach has been heard in the industry. Ethical frameworks for the responsible use of LLMs in the fields of medicine and health have been proposed, even as regulators in the US and Europe have stepped in. Some lawmakers have suggested that new rules and regulations be put in place for AI tools, while some tech and business leaders have suggested a pause in the training of AI systems until thorough safety assessments have been done.

The international scene has seen considerable action. The US Government(8) is facilitating an industry-level discussion on the known and potential risks, with a view to arriving at guidelines for the use of the technology. The EU is working on a code of conduct for the industry. Some countries have gone so far as to ban these platforms temporarily: Italy, for one, ordered ChatGPT’s developer OpenAI to stop processing Italian users’ data until GDPR compliance is reached.

The way forward

Perceived threats aside, an increase in both the number and sophistication of hoaxes and frauds has caused users at all levels to sit up, take notice, and tread cautiously where AI is concerned. Organizations have begun issuing advisories to their teams to be mindful of copyright and data privacy concerns while making discreet use of the tools. The unveiling by tech leaders at the RSA Conference of solutions that address hitherto unaddressed concerns is proof enough that the industry recognizes the need for remedial action.

Technology leader Plurilock’s recently released PromptGuard(9) takes a particularly studied approach to Generative AI. Taking cognizance of the unquestioned benefits of the technology, it proposes to safely enable the use of Generative AI interfaces like ChatGPT and Bard for business, rather than block them. At the heart of the product is the safeguarding of the sensitive data that businesses send to LLMs, through a proprietary anonymizing process that, in parallel, ensures user invisibility.
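Plurilock has not published PromptGuard’s internals, but the general redact-then-restore pattern such gateways rely on can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the regex patterns, placeholder format, and function names are invented for this sketch and do not represent PromptGuard’s proprietary process.

```python
import re

# Hypothetical sketch of anonymizing a prompt before it reaches an external
# LLM -- NOT Plurilock's proprietary process. Sensitive values are swapped
# for placeholders on the way out and restored in the response coming back.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(prompt):
    """Replace sensitive values with placeholders; return the map to undo it."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def swap(match, label=label):
            placeholder = f"<{label}_{len(mapping)}>"
            mapping[placeholder] = match.group(0)
            return placeholder
        prompt = pattern.sub(swap, prompt)
    return prompt, mapping

def restore(text, mapping):
    """Re-insert the original values into the LLM's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = anonymize("Email jane.doe@example.com about SSN 123-45-6789.")
print(safe)                    # Email <EMAIL_0> about SSN <SSN_1>.
print(restore(safe, mapping))  # original text, with sensitive values back
```

In a real deployment, the anonymized prompt would be sent to the LLM and the response passed back through the restore step, so the sensitive values never leave the organization.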

Final words

Gartner(2) boldly predicts that Generative AI is poised to positively impact a host of industries in the next few years. Primary among them are the health and pharmaceutical industries, manufacturing, media, architecture, interior design, engineering, automotive, aerospace, defense, medical, electronics, and energy. It predicts a steep rise in conversational and developmental AI, with 2026 being the year several human tasks will be managed by robo-colleagues in the workplace.

Gartner’s predictions hardly seem out of place when one witnesses the current euphoria surrounding the use of the tool. Yet as with most things cyber, alarming trends are already being witnessed in its inappropriate use. Generative AI has made it easy to perpetrate online impersonations. McAfee(10) estimates that as little as three seconds of someone’s voice is needed to clone it successfully and make it usable in a scam call. This is borne out by Javelin Strategy & Research’s Identity Fraud Study(7), which found that identity fraud scams had already affected 25 million individuals and resulted in losses amounting to $23 billion in 2022 alone.

Going forward, it is clear that the cybersecurity world will need to pull out all the stops if it is to strike a favorable balance between the positives and the negatives. Legislation alone will not save the day; nor will human discretion and training. One thing, however, is certain as the battle for safe Generative AI rages: developmental AI from technology leaders that effectively harnesses the ample strengths of AI and LLMs will play a crucial role in the coming period.

As custodians of Generative AI and LLMs, the cybersecurity community undoubtedly faces significant challenges ahead. 

Discover the unstoppable power of DEFEND and PlurilockAI, the AI-driven tools that crush security threats.

Get in touch with sales@aurorait.com or call (888) 282-0696 to experience the unmatched protection that Aurora, a proud member of the Plurilock family, delivers through these groundbreaking solutions.

References

1.    https://www.forbes.com/sites/forbestechcouncil/2023/07/17/preparing-for-the-cyber-risks-of-generative-ai/

2.    https://www.gartner.com/en/topics/generative-ai

3.    https://livebookai.com/post/

4.    https://www.forbes.com/sites/forbestechcouncil/2023/03/22/from-boring-and-safe-to-exciting-and-dangerous-why-large-language-models-need-to-be-regulated/

5.    https://www.computerworld.com/article/3697649/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html

6.    https://www.gartner.com/en/topics/generative-ai

7.    https://www.paymentsjournal.com/generative-ai-is-pushing-fraud-to-new-levels/

8.    https://www.forbes.com/sites/forbestechcouncil/2023/07/17/preparing-for-the-cyber-risks-of-generative-ai/

9.    https://plurilock.com/products/plurilockai-promptguard

10.  https://www.axios.com/2023/06/13/generative-ai-voice-scams-easier-identity-fraud


