Q&AI: From Reactive to Proactive - Enhancing Data Security with Mike Spanbauer and JJ Minella
Season 1 Episode 3: From Reactive to Proactive: Enhancing Data Security
How can organizations shift from reactive to proactive security management? In this episode of The Q&AI Podcast, we explore this critical topic with JJ Minella, Founder and Principal Architect at Viszen Security. JJ shares insights on operational challenges, including managing complex tech stacks and monitoring data flow. She explains how a strategic approach to data access and security tools can bridge the gap between organizations with robust systems and those still struggling to maintain strong security.
JJ also sheds light on how AI is transforming anomaly detection by establishing a network’s baseline and how organizations can adapt security measures during crises like natural disasters. She shares real-world examples, including why temporarily disabling multi-factor authentication may be necessary to maintain access.
This episode offers a rich blend of technical expertise and practical applications, making it essential for anyone invested in advancing security management.
You’ll learn
Proactive security planning to prevent last-minute scrambles
AI’s role in improving anomaly detection and cybersecurity
The importance of quality data for better decision making
Transcript
Mike: Welcome, everyone. Nice to have you on this episode. So, Mike Spanbauer here. I am part of the security marketing function here at Juniper, and I'm extremely pleased to have with me today JJ Minella. She is the founder and principal architect over at Viszen Security. So nice to meet you, JJ. You want to say hi to the audience?
Jennifer: Yeah. Hey, everybody. Great to be here.
Mike: So we're here today talking through what it really means to manage operations from a security and a network perspective. JJ's experience in this space is considerable, and of course Juniper's interest is in helping you, the audience, understand what's an option, what's available, and really just how others are working through this and what the considerations are. That's what this episode is all about. So for the next nine minutes, we'll get into it. With that, JJ, what I'd love to ask you just to start off this topic is: what exactly do you think most orgs even think about when they're considering operational needs, and behind this, of course, the data requirements that have grown over the years?
Jennifer: I think part of the problem is they're not thinking about it up front. It's one of those things that pops up and rears its ugly head during an incident or a situation, usually something negative happening in the environment, and then people are scrambling to figure out where the data is and who has access.
So I actually feel passionate about this, because I would love to see that thought process and planning happening earlier.
Mike: Yeah, I think most orgs do struggle to think through the operational implications of the data, and even really what tools would be needed in the scenarios that occur, right? Because when you're in the middle of a fire, you realize that an extinguisher would have been a really great thing to have grabbed on the way out the door.
But you really can't do much at that point. So, what are some of the areas you've seen where organizations struggle when they're in the thick of it, or areas that perhaps need more attention earlier?
Jennifer: I think all of it. I kind of brought up the example of incident response during a cybersecurity incident, but really, if we back up from that, there's a daily need to have access to the information. I think that's where we tend to fail, just holistically as an industry, at different levels. Whether it's the help desk trying to troubleshoot and fix something, an architect trying to increase security or enhance something based on some new feature or protocol, all the way up to, of course, security operations and correlation, and then getting into incident response. And Mike, I know we were talking about this the other day: one of the reports I read recently said 84 percent of companies reported that they're struggling with the complexity of all the different tech stacks and tools, and specifically the monitoring.
So there's uptime and resilience monitoring, is the thing working, is it passing the packets, whether it's in the cloud or on-prem, and then there's all of the security piece that goes on top of that. Hopefully not separate, but yeah, they're struggling with it at all levels, and that manifests in a lot of different ways.
Having worked with literally thousands of organizations over the past couple of decades, across all industries, there's that kind of elite few, right? The little top of the tier that has everything mostly figured out, and they're pretty robust and mature. But for the rest of us, it's a daily struggle. Where is the data? And then sometimes there's just too much data. More is not always better.
Mike: To that point, over the last 20 years, actors have grown more capable. We've had more applications added to the environment. There are more connected nodes. I don't even know how many devices I have in my home network.
Now, granted, I'm an anomaly, but I think I have something like 100 IPs allocated in my house, and similarly, commercial enterprises with 100,000 employees, right? You can only imagine the number of managed devices and applications, all the application flows, the cloud sources.
All of this has to be monitored and then aggregated in order to, in semi-real time, both recognize when something's off and then act on it. It's a struggle for even the well-equipped orgs that you mentioned, the ones with astute, really robust teams. So what chance do smaller orgs even have? What can they do? This is perhaps where AI holds so much promise, right?
Jennifer: Yeah. You just said something like, when something's weird or something's off. But I think most organizations, again, other than that elite top of the tier, top of the food chain when it comes to security tooling, the rest of us don't know what baseline looks like.
Because unless you have a team, not a person, not someone looking at firewall logs, right, the one, I'm air quoting, security analyst that you hired, unless you have a whole team of humans with tools who are sitting there monitoring things constantly and understanding them, you don't know what normal looks like.
And that's where I think, and hope, we can start to leverage the suite of AI technologies, because we just can't deal with the volume of data. You were talking about all the different applications and the user and location sprawl. It's not like back in my day, when you had your campus network and everybody was inside of it and all of your stuff was neatly packed in there. You could look inside, look at the stuff going in and out, and everything was good. It's just not like that now.
Mike: No, I think, actually, that point about knowing your baseline, knowing what's running, and then, of course, using that as a reference to detect anomalies, to figure out what is off. These are the types of scenarios that AI, or similar tools that can do machine learning against normative models and then spot the oddities, are incredibly well equipped to satisfy, and it's part of why we had our launch here in early October.
But the point, of course, is that these technologies lend themselves well to helping with those operational challenges. And I think if you don't take advantage of some of these new technologies, then you'll continue to have the struggle of orgs that aren't well-funded or well-equipped enough to keep a completely, robustly staffed team doing this human monitoring, which just doesn't scale.
To that point, I know we talked briefly, but we see it in the news all the time. I don't know if you have any ops scenarios or examples, places where you've heard clients or orgs you've spoken with struggle. It's a stark world, right?
Jennifer: Yeah, there's a lot. I still see this daily, and there's definitely some headline news floating around out there just in the past few days or weeks. There's a county in Long Island that's under investigation because they spent 25 million dollars responding to an incident that should never have escalated to that point. It's not, oh gosh, they got attacked. It's that they weren't doing the things they needed to do ahead of time, even after the FBI had warned them. And it's taxpayers who are paying that 25 million dollars. We don't magically print paper out of thin air.
But it was basic stuff: not doing tabletop exercises, not planning for incident response, not understanding where the data was, wasn't, and was going, and just not having that type of telemetry available. And I think that circles back to the point that more is not better.
If we distill this down to the three things to focus on, I think it's quality, correlation, and analysis, not just volume. Is it high-quality data? Can you correlate it and contextualize it against what's happening in your environment, off of baseline, and then analyze it and take action on it? The data is not actionable unless those three things are met, and that's where the AI engines, before they take over the world and become our overlords, will help us in this mission, I'm sure.
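(For readers who want to make the baseline idea discussed above concrete, here is a minimal, hypothetical Python sketch, not from the episode: it learns a per-host baseline from historical telemetry and flags current readings that deviate from it. The metric, the host names, and the three-sigma threshold are illustrative assumptions only.)

# Minimal, hypothetical sketch of baseline-driven anomaly flagging.
# The metric (bytes out per hour) and the 3-sigma threshold are
# illustrative assumptions, not anything prescribed in the episode.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Learn a per-host baseline (mean, standard deviation) from historical samples."""
    return {
        host: (mean(samples), stdev(samples))
        for host, samples in history.items()
        if len(samples) >= 2  # need at least two samples for a standard deviation
    }

def flag_anomalies(current: dict[str, float],
                   baseline: dict[str, tuple[float, float]],
                   sigma: float = 3.0) -> list[str]:
    """Return hosts whose current reading deviates from baseline by more than sigma std devs."""
    flagged = []
    for host, value in current.items():
        if host not in baseline:
            flagged.append(host)  # unknown host: no baseline at all, worth a look
            continue
        avg, sd = baseline[host]
        if sd > 0 and abs(value - avg) > sigma * sd:
            flagged.append(host)
    return flagged

# Example with made-up hourly bytes-out figures per host.
history = {"web01": [1.2e6, 1.1e6, 1.3e6, 1.2e6], "db01": [4.0e5, 3.8e5, 4.1e5, 3.9e5]}
baseline = build_baseline(history)
print(flag_anomalies({"web01": 1.25e6, "db01": 9.7e6, "laptop42": 2.0e6}, baseline))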
Mike: I agree. And I think that part about being able to pull from all of these different data points and help those operations people both identify more quickly and then respond appropriately, this is sort of that perfect area.
Whether it's even querying the libraries of configuration manuals to help the guy or the gal who just started a week or two ago know how to troubleshoot a device or a specific element of the environment, or perhaps reconfigure something.
These things are now much easier to get access to, whereas in my day it was pulling the manual off the shelf and figuring out, all right, which one was this in, or pulling up the help files, which was its own joy. It's grown a lot simpler these days. And these are the things that I think have started to really apply to real-world operations needs and teams, to save time and solve real problems.
Jennifer: Yeah. Actually, this week I was talking to a friend, just an offhand, off-the-cuff, random conversation. He's managing, basically, a SOC for his organization, which is a government agency, and one of the things they're dealing with: in Western North Carolina we had the big flood, the hurricane that came through and just dumped an ungodly amount of water, flipped over cars, houses going down the river, crazy stuff.
Same thing now with the Florida hurricanes; we have natural disasters and situations. They had a need to turn off multi-factor authentication for a subset of users, because nobody had power and cell service, but there were certain things those users were still able to do because they could get to certain types of hotspots that were spun up.
But those accounts needed additional monitoring, right? Because if you've turned off your other control, you've really got to dial into what's happening and keep a sharp eye, high-fidelity data, on that. So there's stuff like that which happens in the normal course of just doing business. That's one example tied to a natural disaster, but stuff like that, this need to twist and turn what we're doing in our operations, happens regularly. We just have to get better-quality data.
Mike: I agree, JJ, and I'm afraid we're out of time, but we could definitely talk even longer on these topics. Obviously we're both quite passionate, and a lot is advancing, a lot of exciting things are happening. It has been an absolute pleasure. Thank you for joining me, and I'd certainly love to have you back on at some point.
Jennifer: Thanks for having me. Good to see you, Mike.