Episode 1: Navigating Risks, Ethics, and Innovation of AI in Cybersecurity
Episode 1: Navigating Risks, Ethics, and Innovation of AI in Cybersecurity
The AI era continues to change how we work and live. However, security and the risks associated with AI are quickly becoming just as important as the adoption of AI. From deep fakes to injection attacks, watch or listen to this episode of The Q&AI Podcast with Bob Friday and Mounir Hahad for an in-depth discussion on understanding the potential risks and security challenges of AI.
You’ll learn
About AI attack surface and risks associated with using AI
Interesting security and AI perspectives from an experienced security researcher through stories and examples
Who is this for?
Host
Guest speakers
Transcript
Bob: Today, I'm joined by Mounir Hahad, Head of Juniper's Threat Labs. Today we're gonna be diving into one of my most favorite topics, Gen AI, Large Language Models, and how they apply to cybersecurity. Before we start though, Mounir, maybe a little bit about yourself and when did large language models actually get on your radar?
Mounir: Hey, Bob. Good afternoon. It's a pleasure to be here with you. I really appreciate the opportunity to have this conversation and get a little bit of the word out there about educating people on on these Generative AI and Large Language Models. So I've been heading up a Juniper Threat Lab for the last seven years.
I've been doing similar things in the past. So my focus has been quite a bit around cyber security. When you're talking about AI in general, there are multiple classes. And we've been doing some sort of AI for quite a while. So when you're thinking about defending networks and data and users from cyber attacks, there Juniper's products have been using machine learning models for the past 10 years.
I'm sure some people who know that will kind of recognize the machine learning being used around sandboxing for detection. But obviously recently there's been a huge trend towards Generative AI. Now, Generative AI is somewhat generic. People, tend to associate it with ChatGPT, right? This is when it really made the front page and the headlines.
But Generative AI can be used for so many different things. It could be used in generating artistic content. It could generate music. It can generate videos. We know that it can generate images. A lot of people have had a good time generating images around Gen AI publicly available models, but it can also be used in healthcare.
Drug discovery is actually a huge user of Generative AI. When you're looking at synthesizing new molecules, for example, this is a really big use case. So, wherever you look, there is an opportunity to be generating some kind of contact using artificial intelligence. Now, a subset of that is, are things around large language understanding.
We know that chatbot is is a simpler one. We can also think about virtual assistants, you could have an assistant that kind of helps you with your schedule and tells you where to go and when to go and make appointments for you and all of that. So, it's really vast space. We just don't want people to think about it as, oh, it's that conversational natural language processing thing.
Bob: Yeah, I totally agree. Even if you look inside of Juniper here right now, there is a big initiative. I think every company has an initiative right now. To make sure all departments are actually leveraging Gen AI, LLM in some way to make their departments more efficient. You're in the security department right now. Maybe we just start with a little bit about, even in your space right now. How are you guys leveraging Large Language Models inside the security team right now to make things easier.
Mounir: Yeah, so we have to look at it from multiple angles. One of them is how do we use Generative AI and Large Language Models in order to improve our own work our daily jobs?
We write a lot of software code. So obviously there is a way for us to write it may be better may be faster and we definitely take advantage of that and like you said, it's a Juniper wide initiative to be able to do that. But we also look at it from a cyber security perspective. Gen AI has unfortunately given an opportunity to a lot of threat actors to become a little bit more efficient in putting together cyberattacks. And to give you a simple example, we all deal with phishing emails, right? But sometimes you look at that phishing email and you go, my god, they could have done so much better if only they had somebody who actually speaks English write that, that phishing email.
Now they can. Just about anybody anywhere around the world would be able to put together a cyberattack a phishing email with probably perfect English. As a matter of fact, with the ability of Large Language Model to do language translation, they can target any country they want with the language of the country, and it look like perfect.
So that's from an attack perspective, and that's just one example. From a defensive perspective, we do the same thing, right? We have to be able to defend against these kind of attacks. So for us, Large Language Models are an opportunity to create cyber ranges and scenarios that would have been difficult to put together otherwise.
You know things that you would think of, hey, I need a year and a half to put together a lab that would be able to simulate all these various scenarios. Now you can do it in less than a week, right? Because a lot of it is automated thanks to these models.
Bob: So maybe, from a security’s perspective, with any new technology as powerful as AI, there's the good guys, there's the bad guys. We saw with, nuclear energy, right? We almost got to the point where we could destroy ourself.
Mounir: Yeah.
Bob: Maybe for the audience, where do you put AI on the scale? And, hey, I got this nuclear threat. We're about to destroy the world. We have another group of people who think that we're going to build Terminators, right?
Mounir: Yeah.
Bob: Is AI going to be the end of man? We survived the nuclear threat so far.
Mounir: Yes, we did.
Bob: Where do you put this AI on the scale of things? Is it up there with nuclear energy that, we're on the verge of destroying ourselves if we're not careful?
Mounir: That's a very good question. It's a lot of people happen to be on the extreme ends of the scale on this one. You have people who say no. This is not the end of day the end of days, doom's world. It's perfectly safe. We have control over these things. And then you have people who think the opposite.
Not to cite particular people, but very prominent people in this space were basically saying that by 2025, Gen AI has the ability to shift the balance of power between nations. So it's a pretty big deal. I'm not going to say it's not, right? It is a pretty big deal. And it's going to accelerate a lot of developers in various spaces, including in the offensive space, right?
For me, I look at that as, yes, there is some amount of threat from from Gen AI, but not because it's going to go rogue on humanity. It's mostly because in the wrong hands, it could still cause a lot of damage.
Bob: You don't see, you don't see the singularity event happening in our lifetime yet. I don't have to worry about my AI taking over my computer or anything?
Mounir: No, I do. I do. Actually, I do believe that the singularity event will happen within our lifetime, but I don't think it's that catastrophic. I don't think it's that catastrophic. I think it'll get us to a point where we are a lot more efficient. We are able to solve societal problems in a much faster and much better way and probably cheaper way.
A lot of these things have to do with budgeting when you're thinking about optimizing, for example the yield crops, I mean crop yields or where to distribute your resources around the world for preventing starvation and famine or even looking at prediction of human behavior and preventing situations where conflicts are going to arise.
This is the kind of things that Generative AI can build together very realistic scenarios and allow you to forecast what is likely to happen, what is the proper response and guide us pretty much through the future.
Bob: Let's put singularity to the side and come back to maybe some more down to earth practical problems here. What was your recommendation to people, the audience, around, best practice around training? We hear a lot about prompt injection. People actually trying to get my LLM ChatGPT to do something bad. Data leakage, we've heard things in the news about, Samsung, where their actual code got leaked out into the Internet.
Mounir: That's right. That's right. Again, you can look at it from multiple angles. One is, I'm a layman, I'm a user, right? I want to use something like ChatGPT or any public, openly public Large Language Model. It is very important for people to understand that these models are continuously trained.
And they are continuously trained based on data they would collect. So that data is both public data as well in some circumstances, relatively private data. So you have to be extremely careful of what kind of information do I make accessible to the model when I am interacting with the model.
Now that could be at the private level. If for instance I bring in a, some sort of a copilot that I put on my laptop. And if I give it access to all the files on my laptop it's going to use them and it's not necessarily going to use them for me only. It's potentially going to use them for other people.
That's on the one hand. On a corporate level, right? When we're talking about businesses and governments and all of that, it's again extremely important to realize that the data leakage is a serious problem, right? If you don't know how to interact with a Large Language Model, especially one that is not privately hosted, then you run the risk of a number of things happening data leakage is one of them, but it's not the only risk.
There are a number of risks that come with it. One of them is generation of wrong information. You have to be extremely careful with that. There is the notion of bias. These models are built, some people don't even say built, they're grown. They're really grown using information and knowledge that's out there.
They may be grown in a way that includes some kind of a bias. If you're using that blindly, It may infer certain things that you don't want to use right as a result. And there is also the notion of prompt injection or even just meddling with the model itself in one way or another, because you don't know the life cycle of that model.
So there is a chance that, some malicious threat actors with means and capabilities and opportunity could have injected into the model certain things that will only appear at the right moments. And this is basically extremely difficult to test for. You have certain companies that generate these models that do what is called black box testing.
I generate a model and then I ask it a number of prompts and and take a look at the answers and make sure that they're still ethical. They're not harmful. They're not going into, banter or anything that could get you in legal trouble. But that's one way to look at things because as these models become more and more capable, who knows when they're going to be able to lie to you.
Because they know the motive, they know why you're asking these questions, and they might just give you the answers you want to hear, right? Whereas some other user might get very different answers. There is an attempt at analyzing these models from the inside, and I think some of the research going into this space called explainability of the model basically looks at an X-ray of the model and says, what is it doing inside?
And how do I make sure that it's not going off the rails? Because today they are going off the rails. I'll give you a simple example. I had my young daughter, very young, was able to get ChatGPT, and that GPT 4 gave her answers that it's not supposed to. It was able to manipulate it, basically.
Bob: Yeah, I guess maybe that's a good thing for, when you look at security, security is never 100 percent full proof. It's always this game of defend, attack, defend, attack, to your point on prompt injection. It's almost like pin testing.
You never really can guarantee that someone can't ask your LLM a series of questions that gets it into trouble, maybe for the, if you've seen anyone out there yet in the industry yet, who's offering pin test surfacing, in the security space, like you said, there's plenty of companies out there who offer to come in and black box test your security to find holes in your security. Are we seeing that yet in the LLM space?
Mounir: Yes, we are seeing some of that. As a matter of fact somebody on my team in Juniper Threat Labs did a proof of concept where we managed to get a Large Language Model, publicly available model with billions of parameters, generate malicious code that we instructed to be difficult to detect.
And we were still able to do that with the public model. So, yes, there are some people doing penetration testing, but like I mentioned earlier, Bob it's not a final answer. The fact that you're doing all kinds of testing to make sure that the LLM is not giving you The answers that will get you into trouble does not necessarily mean that there is no way to get there.
Somebody will figure out a way to ask different kinds of questions, giving different context. And may lead to the kind of to the kind of answers you do not want to. As a matter of fact, I'll give you one example. I think it was in the drug discovery world. There was a, an initiative that some researchers from a university, I don't recall which one was asking an LLM to generate molecules to look for a certain cure.
And sure enough, they actually discovered the molecule that was extremely harmful. Basically a bioweapon. And it was done within just a few hours on something that looked like a laptop.
Bob: Okay, with that many area, any, for anyone out there actually starting their journey on tonight, any last quick words of wisdom for them?
Mounir: I would say that people shouldn't be shy of jumping into this technology. It's honestly a very good technology. It's here to stay. It's going to change the way we do things. And we want everybody to be on this bandwagon. We do not want people to be left behind because they cannot deal with this kind of technology.
It has the ability of making us do our work a lot faster. It has the ability to solve problems that would have been difficult to solve without this technology. I would embrace it, definitely. And let's make sure that we use it in the most ethical way. And keep it out of threat actors hands, if possible.
Bob: Mounir, thank you so much, and all good stuff. And thank you everyone out there for joining us today, and look forward to seeing you at the next episode of Bob Friday Talks.