How To Navigate Cybersecurity Curve Balls By Generative AI - Inovar-Tech

How To Navigate Cybersecurity Curve Balls By Generative AI

By Anita Srinivasan
21st August, 2023

Cybersecurity | Cloud Security | Malware Detection | Data Security

Generative AI is the new kid on the AI block, scoring high marks for reducing application development time and improving productivity.

Generative AI is estimated to boost economic growth and value by $4.4 trillion while delivering powerful capabilities to non-technical users.

Despite the robustness of Generative AI models, they still present ethical challenges for cybersecurity, balancing precariously between privacy and production.

The technology has both fascinated and alarmed security experts due to its potential to create realistic and sophisticated content. 

This blog dives deep into the cybersecurity risks of Generative AI, what industry experts think of this emerging technology, and some plausible measures to prevent AI-driven cyber threats.

Cybersecurity Curve Balls 

Generative AI has shown impressive capabilities in producing realistic results, synthesizing creative artworks, and even generating conversational responses indistinguishable from human speech.

However, these same abilities can be exploited for malicious purposes, such as creating sophisticated AI-generated phishing attacks, spreading disinformation, or even fabricating information to sway public opinion.

Let us take a look at some curve balls Generative AI throws at us:

1. Data Privacy and Misuse 

Generative AI systems rely on extensive data to produce accurate outputs. This substantial data collection and storage raises concerns about user privacy and the potential misuse of confidential information.

2. Malicious Use Cases 

Generative AI lends itself to malicious use cases such as AI-generated phishing attacks that enable impersonation, or AI-generated deepfakes, making it challenging for users to distinguish between real and fake information.

3. Bias And Discrimination 

If the training data used for Generative AI models is biased, the generated content may reflect and amplify those biases, leading to discriminatory or skewed outcomes.

4. Intellectual Property Concerns 

With Generative AI capable of creating original content, there are concerns about intellectual property rights and copyright infringement. 

5. AI-Augmented Cyberattacks 

As AI evolves, AI-driven cyber threats could become more sophisticated, with attackers employing Generative AI to create ever-changing attack patterns that evade traditional security measures. 

When asked about privacy concerns with Generative AI, Prashant Choudhary, EY India's Cybersecurity Partner, expounded, "Generative AI poses several privacy challenges. Some challenges have been discovered, and many more are still coming out as more and more use cases pop up. And it is pervasive across all Generative AIs – ChatGPT, BERT, DALL-E, Midjourney, and so on.

The whole model is that you use training data, and then the AI comes out with whatever output it is supposed to give. It will give (output) based on the data that was used to train the model. 

In this business, the data source is the internet, and there is a lot of web scraping involved, which brings the data to train these base models or the Large Language Models (LLMs)." (1)

Plausible Preventive Measures 

Although we are unsure whether foolproof solutions exist to mitigate Generative AI threats in cybersecurity, here are some plausible measures we can adopt to address them.

1. Responsible Data Usage 

  • One preventive measure is the responsible collection, usage, and storage of data, in adherence to privacy regulations.
  • Limiting data retention to the minimum required for model training and actively seeking user consent can also help.
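As a minimal sketch of the two points above, consent and retention checks can be applied before any record reaches a training pipeline. The record fields, consent flag, and 90-day window here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical record shape; real pipelines will differ.
@dataclass
class Record:
    text: str
    collected_at: datetime
    user_consented: bool

RETENTION = timedelta(days=90)  # assumed retention window

def eligible_for_training(records, now=None):
    """Keep only consented records still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if r.user_consented and (now - r.collected_at) <= RETENTION]
```

Enforcing the filter at ingestion, rather than after training, keeps out-of-policy data from ever influencing the model.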

2. Robust AI Verification 

Developing AI-powered malware detection and prevention solutions that detect and verify the authenticity of content produced by Generative AI can help users identify potential risks effectively.
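One simple building block for verifying authenticity is provenance tagging: a trusted source signs content at creation, and consumers check the tag before trusting it. The sketch below uses an HMAC purely as an illustrative stand-in for fuller provenance schemes; the hard-coded key is an assumption, and real deployments would manage keys securely:

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # illustrative only; manage keys properly

def sign_content(content: bytes) -> str:
    """Publisher attaches an HMAC tag when content is created."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Consumers recompute the tag; tampered or unsigned content fails."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)
```

Any edit to the content after signing, such as an AI-generated alteration, invalidates the tag, which is the property verification tools rely on.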

3. Explainable AI 

Be sure to implement techniques that make AI models transparent and interpretable, so users can understand the decision-making process and identify potential biases.

4. Collaborative Efforts 

Encourage collaboration between AI researchers, cybersecurity experts, regulatory authorities, and ethical governance bodies to identify and address the ethical implications of Generative AI in cybersecurity.

5. Adaptive Cybersecurity Measures 

Consistently update AI and cybersecurity policies to counter AI-driven cyber threats effectively. Using AI technologies to develop proactive defence mechanisms against evolving threats can also be beneficial.

6. Informed Consent 

When Generative AI is used in applications such as virtual assistants, chatbots, or customer service interactions, users should be explicitly informed that they are interacting with an AI system rather than a human.

In addition to these preventive measures, EY Cybersecurity Partner Prashant Choudhary believes, "Synthetic data is a very interesting conversation to address all the copyright, legal, and other concerns when it comes to training LLMs. There are multiple interpretations of synthetic data, but for this conversation, I am assuming that synthetic data is basically when you generate data using a computer and then you use that to train the LLM."

He further explained that using computer-generated or synthetic data may appear to be a reasonable solution due to lower data costs, scalability, and the ability to generate multiple variants. However, this approach presents a challenge as the data will always reflect the algorithm used to generate it. 
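A toy generator makes the limitation concrete: synthetic output can only ever recombine the patterns baked into the generating algorithm. The name lists and function here are purely illustrative:

```python
import random

FIRST_NAMES = ["Ana", "Ben", "Chen"]
LAST_NAMES = ["Lopez", "Kim", "Okafor"]

def synthesize_names(n: int, seed: int = 0) -> list[str]:
    """Toy synthetic-data generator: recombines parts from fixed lists.

    However many records it emits, every one reflects the choices
    hard-coded into the generator, never the diversity of real data.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}"
            for _ in range(n)]
```

Scaling `n` up yields more rows but no new information, which is the core of the objection to training solely on synthetic data.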

Industry experts are participating in discussions about using anonymized or tokenized versions of Personally Identifiable Information (PII) and other sensitive data.  

With this approach, data is still extracted, but PII and other sensitive information are identified and replaced with anonymous labels to protect individual identities. This method can address privacy and related concerns, and the resulting data can be used to train LLMs.
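A minimal sketch of this tokenization idea might look like the following. The regex patterns are illustrative and far from production-grade PII detection, which typically combines pattern matching with trained named-entity recognizers:

```python
import re

# Illustrative patterns for a few common PII shapes; each match is
# replaced with an anonymous label so the text stays usable for training.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace detected PII spans with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because the labels preserve sentence structure, the anonymized text still teaches the model language patterns without exposing individual identities.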

Several regulatory authorities around the world are weighing in on this issue. NIST (the National Institute of Standards and Technology) has developed the AI Risk Management Framework; the European Parliament is pushing the EU Artificial Intelligence Act; the European Union Agency for Cybersecurity (ENISA) is discussing cybersecurity for AI; and the US Securities and Exchange Commission (SEC) is holding conversations around AI, cybersecurity, and risk management.

Despite these efforts, none of these solutions creates a solid defensive layer against AI-driven cyber threats on its own. They are not sure-shot solutions, only preventive measures.

Generative AI presents an intriguing frontier in cybersecurity, offering both innovative solutions and ethical challenges.

As the technology continues to evolve, it is imperative to address the ethical implications of Generative AI in cybersecurity to guarantee a secure digital environment for all.

By encouraging comprehensive discussion among stakeholders, implementing responsible AI and cybersecurity policies, and deploying innovative verification techniques, we can harness the power of Generative AI while mitigating its risks and fostering a safer digital environment for everyone.

For a deeper look at AI-powered analytical tool development and implementation, please refer to our exclusive whitepaper on the AI-based Gross To Net (GTN) Tool.

References: 

  1. Cybersecurity in the age of Generative AI: solving the ethical dilemma. (EY.com).