Bob Friday, Chief AI Officer

Bob Friday Talks: Security and AI

Bob FridayAI & ML
Bob Friday Headshot

Bob Friday Talks: Security and AI

The AI era continues to change how we work and live. However, security and the risks associated with AI are quickly becoming just as important as the adoption of AI. From deep fakes to injection attacks watch or listen to this discussion to better understand the risks and security challenges of AI.

Show more

You’ll learn

  • About AI attack surface and risks associated with using AI

  • Interesting security and AI perspectives from an experienced security researcher through stories and examples

Who is this for?

Network Professionals Security Professionals

Host

Bob Friday Headshot
Bob Friday
Chief AI Officer

Guest speakers

Mounir Hahad Headshot
Mounir Hahad
Head of Juniper Threat Labs and Cloud Security Engineering

Transcript

0:00 hello everyone and welcome to another

0:02 episode of biday talks today I am joined

0:04 by manir aod head of Juniper's threat

0:07 Labs today we're going to be diving into

0:09 one of my most favorite topics geni

0:11 large lightning models and how they

0:14 apply to cyber security before we start

0:16 domir maybe a little bit about yourself

0:18 and when did large language models

0:20 actually get on your radar hey Bob uh

0:22 good afternoon it's a pleasure to be

0:24 here with you I really appreciate the

0:25 opportunity to have this conversation

0:27 and uh get a little bit of the out there

0:30 about educating people on uh on these uh

0:33 generative Ai and large language models

0:35 so I've been um heading up a juniper

0:38 threat lab for the last seven years I've

0:40 been doing similar things in the past so

0:42 my focus has been quite a bit around

0:44 cyber security um you know when you're

0:47 talking about AI in general there are

0:49 multiple classes and we've been doing

0:52 some sort of of AI for quite a while so

0:55 when you're thinking about defending

0:57 networks and data and users from cyber

1:00 attacks uh Juniper's products have been

1:02 using um machine learning models for the

1:05 past 10 years I'm sure some people who

1:07 know that will kind of recognize the

1:09 machine learning being used around

1:11 sandboxing for for detection but

1:14 obviously recently there's been a huge

1:16 Trend towards generative AI now

1:19 generative AI is somewhat generic people

1:22 tend to associated with Chad GPT right

1:25 this is when it really made the front

1:27 page and the headlines but generative AI

1:30 can be used for so many different things

1:32 it could be used in uh generating um art

1:36 artistic content you know it could

1:38 generate music it can generate videos we

1:41 know that it can generate images a lot

1:43 of people have uh had a good time

1:45 generating um uh images around J

1:48 publicly available uh models but it can

1:51 also be used in uh Healthcare drug

1:55 Discovery is actually a huge user of

1:58 generative AI when you looking at

2:00 synthesizing new molecules for example

2:03 this is a really big uh big use case so

2:07 wherever you look there is an

2:09 opportunity to be generating some kind

2:11 of contact using artificial intelligence

2:14 now a subset of that is is are things

2:17 around large language understanding we

2:20 know that chatbot is uh is a simpler one

2:23 we can also think about um virtual

2:26 assistant you know you could have an

2:28 assistant that kind of helps you with

2:30 your schedule and tells you where to go

2:31 and when to go and uh you know make

2:33 appointments for you and all of that so

2:36 um it's really vast space we just don't

2:39 want people to think about it as oh it's

2:41 that conversational natural language

2:43 processing thing yeah I mean I totally

2:45 agree I mean even you look inside of

2:46 juniper here right you know there is a

2:48 big initia I think every company has an

2:50 initiative right now to make sure all

2:52 departments are actually

2:54 leveraging geni llm some way to make

2:57 their departments more efficient y you

2:59 know you're in the security department

3:01 right now you know maybe we just start

3:03 with a little bit about you know even in

3:05 your space right now how are you guys

3:07 leveraging large language models inside

3:10 the security team right now to make

3:11 things easier yeah so we have to look at

3:14 it from multiple angles one of them is

3:17 how do we use generative Ai and large

3:19 language models in order to improve our

3:22 own work right our our daily jobs uh we

3:25 write a lot of software code so

3:26 obviously there is a way for us to write

3:28 it maybe better maybe be faster and we

3:31 definitely take advantage of that and uh

3:33 like you said it's a joury proide

3:35 initiative to be able to do that uh but

3:37 we also look at it from a cyber security

3:39 perspective jna has um for the you know

3:43 unfortunately given an opportunity to a

3:45 lot of threat actors to become a little

3:47 bit more efficient in putting together

3:50 cyber attacks and uh to give you a

3:53 simple example we all deal with uh

3:55 fishing emails right but sometimes you

3:57 look at that fishing email and you go my

4:00 God they could have done so much better

4:01 If Only They had somebody who actually

4:03 speaks English right that that uh

4:05 fishing email well now they can just

4:08 about anybody uh anywhere around the

4:11 world would be able to put together uh a

4:13 fishing email with probably perfect

4:15 English as a matter of fact with the

4:18 ability of large language model to do

4:20 language translation they can Target any

4:22 country they want with the language of

4:25 the country and it look like perfect so

4:28 that's from an attack perspective and

4:29 that's just one example um from a

4:32 defensive perspective we do the same

4:34 thing right we have to be able to defend

4:36 against these kind of attacks so for us

4:38 large language models are an opportunity

4:41 to create uh cyber ranges and scenarios

4:45 that would have been difficult to put

4:46 together otherwise you know things that

4:48 you would think of hey I need a year and

4:50 a half to put together a lab that would

4:52 be able to simulate all these various

4:54 scenarios well now you can do it in less

4:56 than a week right because a lot of it is

4:58 automated thank to these models yeah so

5:01 maybe you know from a security

5:03 perspective you know with any new

5:05 technology as powerful as AI you know

5:07 there's the good guys there's the bad

5:09 guys you know we saw with you know

5:11 nuclear nuclear energy right you know

5:14 you know we we almost got to the point

5:15 where we could destroy ourself yeah you

5:17 know maybe for the audience you know

5:19 where do you put AI on the scale you

5:21 know hey I got this nuclear threat we're

5:23 about to destroy the world you know we

5:25 have another group of people who think

5:27 that we're going to build Terminators

5:28 right you know as is AI going to be the

5:30 in demand you know we survived the

5:34 nuclear threat so far where you put this

5:37 AI on the scale of things is it up there

5:39 with nuclear energy that you know we're

5:41 on the verge of destroying our sou we're

5:42 not careful that's a very good question

5:44 you know it's a lot of people happen to

5:46 be on the extreme ends of the scale on

5:49 this one you have people who say no no

5:52 no this this is not the end of day at

5:54 the end of days Doom's world it's

5:56 perfectly safe we have control over

5:58 these things and then you have people

6:01 who think the opposite I mean uh not to

6:03 S particular people but very prominent

6:07 people in the space were basically

6:09 saying that by 2025 gen has the ability

6:12 to shift the balance of power between

6:14 nations so it's pretty big deal I'm not

6:16 going to say it's not right it is a

6:17 pretty big deal and it's going to

6:19 accelerate a lot of developments in

6:21 various spaces including in in in the in

6:25 the offensive space right so uh for me I

6:29 at that as yes there is some amount of

6:32 threat from uh from jni but not because

6:36 it's going to go Rogue on Humanity it's

6:38 mostly because in the wrong hands it

6:40 could still cause a lot of damage so you

6:42 don't see you don't see the singularity

6:44 event happening in our lifetime yet you

6:46 know I don't have to worry about uh my

6:48 AI taking over my uh computer or

6:50 anything no I do I do actually I I do

6:53 believe that the singularity event will

6:55 happen within our lifetime but I don't

6:57 think it's that catastrophic

7:00 I don't think it's that catastrophic I I

7:02 think it'll it'll get us to a point

7:04 where we are a lot more efficient we are

7:08 able to solve societal problems in um

7:12 much faster and much better way and

7:14 probably cheaper way a lot of these

7:15 things have to do with budgeting when

7:17 you're thinking about optimizing for

7:19 example um the yield crops I mean crop

7:23 yields or um you know where where to

7:26 distribute your resources around the

7:27 world for uh preventing starvation and

7:31 famine or or even looking at prediction

7:34 of human behavior and preventing

7:37 situations where um you know conflicts

7:40 are going to arise these this is the

7:42 kind of things that generative AI Can

7:46 Build Together very realistic scenarios

7:49 and allow you to forecast what is likely

7:52 to happen what is the proper response uh

7:56 and uh guide us pretty much through the

7:58 future let's put Singularity to the site

8:01 and come back to maybe some more downto

8:02 Earth practical problems here you know

8:05 what was your recommendation to people

8:07 the audience around you know best

8:09 practice around training you know we

8:11 hear a lot about prompt injection yeah

8:13 you know people actually trying to get

8:15 my llm chat gbt to do something bad data

8:19 leakage you know we've heard things in

8:20 the news about you know you know Samsung

8:23 you know where their actual code got

8:25 leaked out into the internet that's

8:26 right that's right um again you can look

8:29 at it from multiple angles one is I'm a

8:32 Layman I'm a user right I want to use

8:34 something like uh chat GPT or any uh um

8:37 you know public openly public um large

8:40 language model it is very important for

8:42 people to understand that these models

8:45 are continuously trained and they're

8:47 continuously trained Based on data they

8:49 would collect so that data is both

8:52 public data as well in some

8:54 circumstances relatively private data so

8:56 you have to be extremely careful of what

8:58 kind of information

9:00 do I uh make accessible to the model

9:04 when I am interacting with the model now

9:06 that could be at the private level if

9:08 for instance I I bring in a some sort of

9:10 a co-pilot that I put on my laptop and

9:13 if I give it access to all the files uh

9:15 on on my laptop well it's going to use

9:18 them and it's not necessarily going to

9:20 use them for me only it's potentially

9:22 going to use them for other people

9:24 that's that's on the one hand the uh on

9:27 on a corporate level right we're when

9:29 we're talking about businesses and

9:31 governments and all of that it's again

9:33 extremely important to realize that the

9:36 data leakage is uh a serious problem

9:40 right if you don't know how to interact

9:42 with a large language model especially

9:44 one that is not privately hosted then

9:47 you you run the risk of a number of

9:50 things happening uh data leakage is one

9:52 of them but it's not the only risk there

9:53 are a number of risks that come with it

9:56 uh one of them is um generation of wrong

9:59 information right you have to be

10:01 extremely careful with that uh there is

10:03 the um the notion of bias these models

10:06 are built some people don't even say

10:09 build they're grown they're really grown

10:11 using information and and knowledge

10:13 that's out there they may be grown in a

10:16 way that includes some kind of a bias so

10:20 if you're using that blindly it may

10:23 infer certain things that you don't want

10:25 to use right as as a result and there is

10:28 also the notion of um prompt injection

10:33 or or even just meddling with the model

10:36 itself in in one way or another because

10:38 you you don't know the life cycle of

10:39 that model so there is a chance that you

10:42 know some malicious threat actors with

10:43 means and capabilities and opportunity

10:46 could have injected into the model

10:48 certain things that will only appear at

10:50 the right moments and this is basically

10:53 extremely difficult to test for you have

10:56 certain companies that generate these

10:58 models that do what is called blackb

11:00 testing I generate a model and then I

11:04 ask it a number of PRS and and take a

11:07 look at the answers and make sure that

11:10 they're still ethical they're not

11:11 harmful they're not going into you know

11:14 banter or anything that could get you in

11:17 legal trouble but that's one way to look

11:19 at things because as these models become

11:21 more and more capable who knows when

11:25 they're going to be able to lie to you

11:27 because they know the motive they know

11:29 why you're asking these questions and

11:30 they might just give you the answers you

11:32 want to hear right whereas some other

11:35 user might get very different answers so

11:37 there is an attempt at analyzing these

11:40 models from the inside and I think some

11:42 of the research going into this space

11:44 called explainability of of the model

11:47 basically looks at an x-ray of the model

11:49 and says what is it doing inside and how

11:52 do I make sure that it's not going off

11:55 the rails because today they are going

11:57 off the rails I give you simple example

12:00 I had um my um young daughter very young

12:04 was able to get chat GPT and that GPT 4

12:07 give her answers that it's not supposed

12:09 to it was able to manipulate it

12:11 basically yeah I guess maybe that's a

12:13 good thing for you know you know when

12:14 you look at security security is never

12:16 100% full forist always this game of

12:19 defend attack defend attack you know to

12:22 your point on Pro prompt rejection it's

12:24 almost like pin testing you never really

12:26 can guarantee that someone can't ask

12:28 your l a series of questions that gets

12:31 it into trouble but you know maybe for

12:33 the have you seen anyone out there yet

12:35 in the industry yet who's offering

12:37 pinest surfacing I mean in the security

12:40 space like you said there's plenty of

12:42 companies out there who offer to come in

12:44 and black Pockets test your security to

12:46 find holes in your security yeah are we

12:48 seeing that yet in the uh LM space yes

12:51 we are seeing some of that as a matter

12:53 of fact uh um somebody on my team in

12:55 Juniper threat Labs did a proof of

12:57 concept where uh we man managed to get a

12:59 large language model publicly um

13:02 available model with billions of

13:04 parameters generate malicious code that

13:07 we instructed to be difficult to detect

13:11 and we were still able to do that with

13:13 the with the public model so yes there

13:15 are some people doing penetration

13:16 testing but like I mentioned earlier Bob

13:19 it's it's not a final answer the fact

13:22 that you're doing all kinds of testing

13:23 to make sure that the uh llm is not

13:26 giving you the answers that would get

13:29 you into trouble does not necessarily

13:31 mean that there's no way to get there

13:33 somebody will figure out a way to ask

13:35 different kinds of questions giving

13:37 different context and may lead to the

13:40 kind of uh to the kind of answers you do

13:42 not want to as a matter of fact I'll

13:44 give you an example I think it was in

13:46 the drug Discovery World uh there was a

13:49 an initiative that um some researchers

13:51 from a university I don't recall which

13:53 one um was uh asking an LM to generate

13:57 molecules to look for certain cure and

14:00 sure enough they actually discovered the

14:02 molecule that was extremely harmful

14:04 basically a bioweapon and it was done

14:06 within just a few hours on something

14:09 that looked like a laptop yeah no okay

14:12 with that man you know any you know for

14:14 anyone out there actually starting their

14:15 Journey on J any last Quick words of

14:17 wisdom for them I would say that uh

14:20 people shouldn't be shy of jumping into

14:22 this technology it's honestly uh a very

14:25 good technology it's here to stay it's

14:27 going to change the way we do things and

14:30 we want everybody to be on this

14:31 bandwagon we we do not want people to be

14:34 left behind because they cannot deal

14:36 with this kind of uh technology it has

14:38 the ability of U making us do our work a

14:41 lot faster it has the ability to solve

14:44 problems that would have been difficult

14:46 to solve without this technology

14:49 so I would embrace it definitely and um

14:53 and let's make sure that we use it in

14:55 the most ethical way and keep it out of

14:58 uh

14:59 you know threat actors hands if possible

15:02 well Min thank you so much and all good

15:04 stuff and thank you everyone out there

15:06 for joining us today and look forward to

15:07 seeing you at the next episode of Bob

15:09 Friday talks

Show more