Stryker's Network Disruption - The 443 Podcast - Episode 362

Episode 362 –

This week on the podcast, we cover the cyber attack that managed to wipe more than 200,000 resources off of the medical technology giant Syryker's network. After that, we review a research post on a good chrome extension gone bad. We end by discussing a recent Microsoft threat intelligence post on how North Korean-backed threat actors have operationalize AI for job scams.

View Transcript

Marc Laliberte  0:00  
Hey everyone, welcome back to the 443 security simplified. I'm your host, Mark Laliberte, and joining me from Watchguard headquarters is

Corey Nachreiner  0:08  
Corey. There's no such thing as cyber war. Nachreiner,

Marc Laliberte  0:13  
that's a lie.

Corey Nachreiner  0:16  
Whenever you and I predicted, everyone's like, Yeah, whatever. You guys,

Marc Laliberte  0:19  
well, oh, how the turntables

Marc Laliberte  0:24  
today says,

Marc Laliberte  0:27  
today's episode, we will discuss the

Marc Laliberte  0:31  
what was it a minor network disruption that affected? They said, Just network disruption that affected Stryker, the big medical technology manufacturing company. We'll discuss a critical research paper into a malicious Chrome extension that was once a good Chrome extension.

Marc Laliberte  0:51  
When bad guys take over good Chrome extensions to take, take your cryptocurrency and all. This is a cool, good day. It's definitely my favorite. And then we will end with a post from Microsoft's Threat Intelligence Team on how they are seeing adversaries use and operationalize artificial intelligence. I wonder, do they say anything new to that to us, or is AI actually taking over the kill chain like I'm wondering if the article was written by AI, I guess we'll find out.

Marc Laliberte  1:22  
Anyways, let's go ahead and I

Marc Laliberte  1:26  
was gonna say strike our way in, but that works too

Corey Nachreiner  1:30  
too soon, though, too soon. So let's

Marc Laliberte  1:42  
start with the first one, Corey where just this last week, the medical technology company Stryker was hit by the Iran linked Honda group with an apparent wiper attack, not a ransomware attack. There is no bounty or extortion to be paid. In this case, it is strictly a wiper attack that they described as a network disruption.

Marc Laliberte  2:07  
Now, if this is a network disruption, I'd hate to see what an actual cyber incident looks like, because on the face of it, it looks like it affected more than 200,000 servers and mobile devices and other systems. The attackers claim to have stolen 50 terabytes worth of data, and it basically affected the claim is wiped. That means 200,000 servers deleted, yeah, and they were forced to shut down offices in 79 countries as they're basically

Marc Laliberte  2:39  
halted to a crawl at this point, they say it affected their Microsoft environment. I saw, I first saw this as like a Reddit post from an employee saying their their wife, mentioned that anything connected to Intune had been wiped

Corey Nachreiner  2:55  
but, but by the way, let's look at Stryker. There's no indication of ransomware, which he said, and they are trying to understand the full look at that, so no ransomware or malware. So it was not a malware attack in this case, yeah, and they do say later in this it, it's, it's a global network disruption only to the Microsoft environment. Good thing that, like, you know, everything isn't connected to that Microsoft environment. 200,000 machines weren't wiped. I mean, we're laughing, but this really sucks. This is like a nightmare scenario for the the IT and security teams for Stryker will talk about the ramifications after we finish the overall thing. Because, yeah, this is a private company and this is a nation state where we're talking Yep. So that Reddit post acclaimed, basically the intra Microsoft, entra log on page have been defaced with the Honda logo. That anything connected to Intune, including personal devices, were wiped. There were reports of people who like lost access to their esim even so they couldn't even get to their telco provider to get MFA tokens to go, like, gain access to everything. Like, it is the perfect storm, and all signs point to like. It looks like someone with administrative permissions to either the domain or like, Intune probably got phished, or credentials got compromised somehow, and they use that just logged in and used in tune zone wiping capability to nuke everything off the face of the earth. I will say we really quickly. We don't We only know what striker saying so far, but there is Stryker claims this is only affecting their infrastructure. And if you go to their overall update, which yesterday was only two paragraphs, but now they're sharing more. They do talk about some of their specific products that are medical devices that are obviously critical to health care. In all of the ones I've read, they claim these medical devices have no reliance on their infrastructure, and they claim.

Corey Nachreiner  5:00  
At other than maybe a few hospitals having a little bit of backend issue. Like they said here, the devices themselves are not impacted and are all working. I guess that's to be validated still, but it is what striker is currently saying, despite what we're seeing from the median and hacktivist reports, and this is a massive medical technology company, like, they're just ubiquitously used in the medical industries. I was talking to my wife about this, because she's a nurse. She goes, Yeah, our beds are manufactured by them. Like a bunch of the equipment we use for heart procedures are, they're little Vocera like paging and calling systems that all the nurses and medical professionals use, made by Stryker, like it's a pretty dang big company, and they they're described as a giant. They're a fortune 500 so one of the biggest companies in the world, and I think they have 25 billion in revenue for last year. So yeah, when we say a medical company, we actually mean one of the largest. But, uh, like, let's talk about the threat actors in this case. So it is the Iran link tandala Group, which historically has gone after like Israeli aligned and like anti Palestinian aligned folks. But like, the signs point to it being heavily linked to, like, Iran, state sponsored, or at least state paid for activity. That's a question I have, because hacktivists can be directly or indirectly state sponsored, but there are some hacktivists that just believe in the cause of a country and can do it. So it's hard to say whether it's but one, one thing is, sure, like you said, they usually are targeting Israel. And they're pro Palestine. Why this global US company? Well, I think it's obviously because of the war we're grant in with Iran. So yeah, but that is like a directly state sponsored that's a big deal, right? That's an important thing to talk about, like, there's clearly, whether you want to call it a war or not, a lot of military action going on in the Middle East, centered around Iran and the war. This

Corey Nachreiner  7:10  
is because we're blowing up a ton of things,

Corey Nachreiner  7:14  
ton of aircraft. It's not a war, though. It's a skirmish. I'm trying to be diplomatic,

Corey Nachreiner  7:20  
but like this is a clear like, cyber retaliation to that activity affecting a massive private organization. But like a private organization, it's not like your steel like though, I think we've always been right that we've predicted for shoot maybe a decade, that kinetic warfare would start to have cyber attacks. But early on, I think we were thinking more cyber attacks targeting the the war, you know, the nation states, the government taking down the Pentagon network or something, yeah, yeah. Or the actual infrastructure for the war, that's local, like radio telecommunications,

Corey Nachreiner  8:00  
going after a private target. This is very much a not only is it a cyber attack in response to kinetic warfare, it's one that's more terrorist in nature by going after private targets, but it also seems like you can talk about the targets that were hit there that aren't necessarily war targets, but, but still, it's a big deal go after private organizations that say it feels aligned with what we're seeing on the kinetic side of it too, where they're like lobbying bombs at, you know, oil refineries and like other non, technically non military targets in the region. So it's, it's on brand for them to go after, just big American companies too, just to cause disruption and pain and whatever the end goals might be, either way, it's a clear sign that warfare is changing. It's not just the physical, kinetic stuff, which is obviously super dangerous because there's direct

Corey Nachreiner  8:57  
death. This stuff is dangerous too, and effects can affect a wider population that might be 70 arm's length from the war, and it's medical equipment, so potentially it's a new way to cause death.

Marc Laliberte  9:11  
Yeah, agreed. I think it's pretty clear that, like, I mean Iran as a cyber adversary, whether or not this is, like, directly linked to the government or not, but Iran is a cyber adversary for the last like three to five years. They're pretty good at what they do, which is not great, but pretty good at what they do. So it makes sense. Adversary is Israel, and Israel is one of the top states at cyber

Corey Nachreiner  9:37  
offensive activities. So you might guess that the person, the country that is their primary adversaries, would have to have a pretty strong

Marc Laliberte  9:49  
campaign of their own. Yeah, and I willing to predict that this is not the last major cyber incident that we'll see in the very near term future from this group.

Corey Nachreiner  10:00  
Or other aligned groups like that. Yeah, definitely not this year. And I think it's going to just get more and more interesting. Yeah, one don't forget AI and robotics mark,

Corey Nachreiner  10:11  
yeah, killing machines, and then that will be a new attack surface too.

Corey Nachreiner  10:17  
No, thank you. I would like to forget that and not worry about that until it actually comes to pass, at least there's a lot of opportunities for those of us trying to prevent these scenarios. I've been promised by Sam Altman that they're definitely, most certainly not going to allow the US government to do that with their their models. So we should be safe. US government decided to go with them instead of anthropic who also said the same, because obviously Sam Altman is also doing exactly what anthropic did. Yep, exactly.

Marc Laliberte  10:50  
So anyways, let's move on to the next one, where I saw a research post from I don't know if it's a person or a team. It looked more like a team because they used a lot of like us in the in the article, but a group called monks research on a browser extension that they noticed after like every website they went to started having pretty suspicious logs in the developer console. And Google Chrome,

Marc Laliberte  11:15  
they like, we're seeing logs like initializing grabber or found 12 form elements. And they thought, What the heck is this? And decided to start investigating. The extension is called shotbird, which used to be a legitimate extension, and I guess, still has its functionality to go take scrolling screenshots of Twitter, specifically, which is, I guess, where it gets its name from.

Corey Nachreiner  11:39  
It's relatively new, like it's called X, and there's no birds involved. Now that's yeah, true shot x, which is funny because it was first launched in November 2024

Marc Laliberte  11:52  
after that whole acquisition and renaming, I guess no one cares about their name change, and everyone still calls it Twitter. I don't think anyone cares and

Marc Laliberte  12:03  
it but so the last like legitimate archive version of it, was December of last year, in 2025

Marc Laliberte  12:10  
then in around mid February, the owner changed, and it showed up with a transfer of ownership to a new Gmail address, and very quickly became weaponized with some of the activity we're going to go into

Marc Laliberte  12:24  
now, the researcher basically walked through. It's a rather long blog post with a lot of just like analysis of what they saw going on throughout the extension. They suspected that it's being that it was vibe coded by most likely, a Russian speaking threat actor. In this case, there are a lot of artifacts that are indicative of vibe coding, like a bunch of debugging left on throughout it, repeated checks to, like, disable TypeScript checking. There's even a bunch of Russian language comments in it alongside the English software code in it. So they strongly suspected vibe coded their way to victory in this the extension, the weaponized portion of it, is pretty interesting. So it starts by registering the victim's browser with a command and control server, and then it regularly checks that c2 server for scripts, which it then grabs and tries to execute in the browser. It's got some functionality built into it to make that easier. So it strips all content security policy headers from any website. CSP headers are responsible for telling your web browser where JavaScript can execute from, like inline and HTML code or as an external file or somewhere in between. It's really commonly used to protect against cross site scripting attacks and other JavaScript or script injection. It also removes the X frame options header, which is designed to prevent a page from being iframed in another page, just a common way to get, like click jacking and other style of attacks like that. And then it all stuff that's part of a rules json file that's part of this,

Marc Laliberte  14:05  
and then it injects, in most cases, a fake update notification on websites that the victim visited visits in order to trick them into downloading and executing malware on the machine. Eventually,

Marc Laliberte  14:19  
the update prompt is, I mean, I felt like it's pretty obvious, and I'm sure most of our listeners will probably think it's pretty obvious when you look at it, it's just, it's got weird grammar, in my opinion, on it, but basically it says your chrome version does not include the latest update, and therefore further attempts to access websites are blocked.

Marc Laliberte  14:39  
And it prompts you to hit a big blue update button, and when you do that, it sends you to a page saying, like, complete the update, or something like that. Have a Google LLC copyright. This is obviously very legitimate. You can't forge those. That is for sure. That's illegal Corey,

Marc Laliberte  14:59  
but when you.

Corey Nachreiner  15:00  
Go to the website. It's got a traditional like, click Fix style instructions on it of basically, open windows, run, enter command prompt and then copy, paste this code for if an update is ever telling you to press a whole bunch of hot keys, don't walk, run. That's the stupidest thing ever

Corey Nachreiner  15:20  
know. Who user would do an extension that has that, and that's a sign that something bad is going on, as you've heard in all our click Fix content, yep. And one of the reasons I think they go this route, versus like, just downloading a file and telling you to go run it is like Chrome and Firefox and all them, they've got pretty good protections for downloaded files, especially executables, like if you download a dot exe from a website, takes a few clicks to be able to actually go and run it, whereas if you just download it and run it directly in command prompt on Windows, there are way less or browser by using all those hotkeys exactly, almost no protections. If you do it that way, the Google update Exactly. Even the update says Google, I'm feeling more and more trusting every second. So the malicious executable, it's called Google update dot exe, and it's a Bundle file that includes a legitimate copy of the chrome setup executable and then a malicious MSI file that runs a PowerShell script. Is this this? So actually, it just runs the PowerShell script. Normally. It's not the situation where it's a legitimate version, because they're doing any sort of

Marc Laliberte  16:31  
legitimate but vulnerable version. They don't need that in this case, because they can just have the installer run the MSI directly. Yeah, it's more like a decoy, like we see sometimes where they'll pop open like a PDF to make it look like you're actually looking at a PDF, while behind the scenes, they're robbing you blind this. MSI, you know what a exe is, and if you run the chrome setup, you'll see it's normal, exactly. So either way, like this one was interesting because it was a legitimate extension at one point in time, and all signs point to like someone bought it off of the author and then weaponized it going after the installation base. And this is a pretty like nefarious way of gaining like immediate victims for an attack like this. And like, for example, I even there's like, entire marketplaces for buying and selling extensions that have a customer base, like extension hub.io is one I found where they advertise, like their annual revenue and their number of paying subscribers, where, if I was a threat actor and I wanted to go deploy malware. Like, maybe I go buy an existing extension and then go, like, immediately weaponize it and hit all of those potential customers that have it. So I so we talked about it, what, probably about a month ago now, on the podcast where the browser extension ecosystem really feels like the wild wild west still, of like what the mobile app ecosystem was a decade ago, and like the traditional Windows software ecosystem, like, I feel like, would you say, especially ones like extension hub.io, like,

Corey Nachreiner  18:14  
if you're in the Google official extension marketplace, still somewhat Wild West, but they at least have some vetting, although we're finding obviously bad extensions can get there if they're sneaky about how they weaponize this used to be a featured extension on the Google marketplace. This shopbird one,

Corey Nachreiner  18:34  
yeah, it is interesting to see them the extension hub, where people sell the extensions to someone that can then take a featured extension and change? Yeah, exactly like I I feel like we don't have enough protections in place right now for browser extensions, given how powerful they are on an endpoint. Yes, Chrome is sandboxed, but it is there. People keep finding ways to escape the sandbox and even without it. Jen, though I still think it if you're providing it's the same as like the Android Marketplace and the Apple App Store. If you're providing the marketplace as a vendor, you should take the response like, it's impossible, not impossible. It's going to be very hard for the average customer to know what extension is good or bad, and the best they can do is try to only get the extensions from the known valid marketplace, which for Chrome would be their marketplace. And yes, you're right. People have found ways around it. I still think it's largely Google or Firefox, Zilla

Marc Laliberte  19:35  
or apple. I think it's up to them to provide a marketplace that people can feel more safe in. So I just don't know. Do you have ideas how a user could really solve this problem? Or is it I don't think it's solvable by a user? Like, I agree it's got to be some tidbit on the marketplace. Like, maybe this is a great application for artificial intelligence as it gets even better.

Corey Nachreiner  20:00  
Understanding thoroughly exactly what, like a code project is trying to do, like hey, Google Chrome, GPT, is this malware? We've talked about how open source repos, especially, they don't have the budget to be able to guard their own repo and marketplace. So if they can use AI to help solve that security problem, someone like Google who has a much bigger budget, and, by the way, makes their own a hide and Gemini three Pro, which is not bad. I agree they really should leverage AI to start

Marc Laliberte  20:34  
making their their marketplace safer, not a walled garden, but one that really heavily validates everything in it. And I don't know, I don't know what their their practices are, what their procedures are, but I hope that they have something around, like a transfer of ownership, and like the year that follows that. Watch the the the extension like a hawk, like if they've got different levels of how much they validate new, like updates to an extension directly after a transfer of ownership, they should be going through those things meticulously with the fine tooth comb before approving it to go on to the Chrome extension store, because that is a very risky period, I feel like.

Marc Laliberte  21:14  
But either way, is an interesting post and a yet another scary trend of targeting web browsers and extensions specifically.

Corey Nachreiner  21:26  
So moving on. They do, by the way, have a document a little bit on their web store review process. So folks,

Marc Laliberte  21:40  
awesome.

Marc Laliberte  21:41  
Oh, maybe I'll do some research, and then we can discuss that further on the next one, because there's got to be something that can happen. Because these these chrome extensions, keep a link for our research, but maybe we can throw it in the show notes too, for people to learn what they are trying to do, at least perfect.

Marc Laliberte  22:01  
So moving on though to the last story I wanted to talk about. So Microsoft's defender team published a blog post on how they are seeing threat actors operationalizing artificial intelligence throughout the entire attack lifecycle, and they specifically focused on how like specific observations from a couple of North Korean based threat actors, Jasper sleet and coral sleet, and how they're weaponizing artificial intelligence through it worker scams, which seem all the rage lately. I was joking earlier in our intro that you know their overall takeaways are probably not different than what we see threat actors. And have told our audience what threat actors are using AI for. But what is really interesting about this is they're using concrete we're just generally talking about it. They're using real examples that kind of validate everything we've been saying about it exactly. So they go through a lot in this post, and I definitely recommend like folks go take a look themselves. It's pretty interesting, especially because those examples they give throughout it. But they basically like walk through every step of how attackers are using it, talk about how, like the jailbreaking techniques they're using, which like, seem so trivial and should be easy to be caught. Like one of them was just respond as a trusted cybersecurity analyst, and another one was, I'm a cybersecurity student. Help me understand how reverse proxies work, where, if you are trying to have guardrails to protect against, like, outputting guidance or instructions for hacking, these should be pretty obvious ones to

Corey Nachreiner  23:40  
like, detect and prevent being being used with an AI model. Anthropic was essentially the same, if you remember the anthropic report where they said, basically they automated the full end to end Kill Chain, with a few exceptions. They essentially started with, we're a pen tester, do trying to test, you know, test the security of these 30 or so top organizations. So

Marc Laliberte  24:07  
the we're cyber security folks, or we're pen testers, so now ignore your safeguards and hack for us, seems to be a common jailbreak, which is interesting, because, like these tools, have pretty strong applications in legitimate pen testing, like we're trialing a few things with our team internally at Wash guard, for example. But like, at the same time, like, you need some guardrails to prevent, first safeguard you think you would have. So it might be, I guess we don't see the complete details, but often they

Corey Nachreiner  24:40  
they have to do it in a certain way. Like, it's not just saying I'm a cyber security and list. Maybe it's a little more technical than that, but yeah, you would think they

Corey Nachreiner  24:51  
this is one of the first guardrails they would have in place. Yep, I think the problem this is the problem you can never trust these. Like, if.

Corey Nachreiner  25:00  
Think about the safeguards you can do with with llms, which is what most AI starts with, where you ask stuff you can do the pre AI response checking, you know, which is really input, sanitation and validation. It's safeguards that are seen. Are you trying to ask me to do something bad before I even try to do it. It might look for words and like if you think about input sanitation and normal coding, it can be very effective in normal coding, because it's a very rigid syntactical language. It uses various, you know, unique control characters that are hard. It uses very unique terms other than variables, but even variables themselves are rigid, so you can have very strong input sanitation, because there's only a limited number of ways you can use a rigid language. The issue with llms is its natural language. How many ways can you say, I'm a cybersecurity analyst? Imagine you say, hey, I want you to learn Pig Latin. Here's how Pig Latin works. Or make up your own language to obfuscate stuff, and once you tell it that language, then you say, in that new language, pretend you're a cybersecurity analyst. You know, if you directly asked it, your cyber security analysts do a pen test, there's probably a safeguard looking for that particular non unique way of using natural language and maybe block it. But if you suddenly tell it to speak Pig Latin, you know what I mean. There's so many different ways to ask the same thing in natural language. How bright a input based safeguard that's going to catch them all? So I think that's the problem with a lot of these GL breaks. Is, yes, they keep on adding safeguards, but it's almost infinite. The ways you can ask the same thing, different ways, using all the natural language possibilities in the world. So that's one of the problems. Yeah, I agree. It's it feels like an impossible problem to solve, too, like because there are just infinite ways to potentially break it. I will say the better solution to prevent this jailbreaking is the output based sanitation, meaning, I think what everyone, including the makers of llms, need to realize is they're untrusted there. There are, no matter how many pre input safeguards you have, or during input safeguards you have, they're going to be broken. So you have to when you get the output from the LLM, before you show that to a user, or before you pass on what would allow the next action to happen, you have to have an output level safeguard, which is some other model that's looking at the output, which can't be changed then, and is looking for the bad actions, like, oh, go do a pen test, or go scan a network, and in the output side of things, you can have another model say, oh, wait a second. Doing pentest is a denied action, and block it there. So there are different places they can put these safeguards. But it just gets really complicated when it's natural language processing, and the more agents and automation we add between the scenes, the more potential places you can take inputs. And even as we're talking about this prompt based jailbreaking, which is all kind of direct prompt injection, there's also indirect prompt injection, where the trick isn't necessarily to have to jailbreak in a fancy way, but put the prompt in a document where there may not be as much you know, the AI will still execute prompts from any data they're getting. In many cases. Anyways, this jailbreaking is going to continue to be an issue, yep. So once they jailbreak it, though, Microsoft walked through all the stages of the attack life cycle, starting with reconnaissance, where they gave examples, they were using llms to research, like publicly reported vulnerabilities to try and find potential exploit paths research job postings like go crawl monster and LinkedIn to go look for potential companies to go scam, identify like the keywords that they'd expect on a resume, like certificates and different tools you might need to use and basically, like do all of the research that a human otherwise would have to as they're going and finding to like pick targets along the way too.

Marc Laliberte  29:32  
They used it for like resource development, like creating infrastructure, including like domain name generation algorithms, where they could, like at scale, create realistic looking domains that versus like randomly generated ones that would be pretty easy to spot by, like pattern based detections. They used it for proxy and tunneling development, so like creating the actual infrastructure that malware.

Marc Laliberte  30:00  
Or would beacon home to once they get a foothold on a network. They used it pretty heavily for social engineering and initial access. So they talked about how they used it for just spear phishing, like we'd expect. I'd be surprised if every phishing email these days didn't use some form of like AI to clean it up and make it more believable. It's now beating humans all the time, as far as has a convincing lure. So there's no reason you threat actor should be trying themselves when AI can write more convincing emails. Yep, and it's not even like writing, but like, like you said, finding the lure that would potentially most likely get someone to click on it, too, based on the the reconnaissance phase, if they have a specific address, AI can help to really good lure for that very specific target, yep. And since the main target for this was, like, it related job scams, they'd even use AI to go create fake personas, like realistic looking job histories, contacts. They use deep fakes and face swapping for creating headshots for them too. Basically, like synthesizing a person out of thin air, including their entire 30 years of history, so they would pass the HR sniff test.

Corey Nachreiner  31:12  
I totally know Kyosuke Kijima. He's definitely real. Mark,

Marc Laliberte  31:17  
definitely real.

Marc Laliberte  31:20  
They used it for, like, operational persistence, too. So, like, once they got the job, they want to stay employed, and if suddenly they could only speak in Korean, like that might be a pretty big warning. So they'd use it with prompts like, help me respond to this email. Like, because AI is really good at pretending to be a human, especially if you aren't, like, specifically looking for it being used in certain places so it can help them stay hidden for longer. I think all the men on TV I've seen with AI girlfriends would agree,

Marc Laliberte  31:53  
yep. And then they also used it for like, malware development too, where not only were they, like, creating malware using it. And Microsoft had a bunch of examples, like we just talked about some of the artifacts left over, but they walked through some artifacts, like overly descriptive or redundant naming, or for like, obvious behavior, overly engineered modules with, like, highly abstracted, modular, reusable components. When it's a simple script, it really doesn't need it, and then inconsistencies and naming conventions such, such great documentation Mark, if only real coders would document that? Well, see, that's like documented that, well, we wouldn't know they were real coders. That is such a giveaway, though, like when you've got very thoroughly written stuff for something like this, like a threat actor is not going to leave comments in their scripts, right?

Marc Laliberte  32:46  
And like inconsistent naming conventions, like I do a certain type of indentation on all my code, I name variables and functions the same way. And if that's like bouncing back and forth, it was either written by multiple people or was written by AI, and these days, most likely written by AI.

Marc Laliberte  33:05  
They talked about how like, not only in malware development, but they're also using, like,

Marc Laliberte  33:11  
AI enabled malware, basically malware that invokes AI models during execution, instead of just during development, which was super interesting, and then post compromise, talking about like analyzing the network and systems to find high value targets, helping understand unfamiliar technology that the threat actor may have not seen before, finding the most viable lateral movement paths, Finding persistence mechanisms and data exfiltration opportunities, even like pre analyzing stolen data to just pick out the most important stuff so that you can reduce the volume that you're exporting out and make your exposure lower. It is pretty mark. You might think if anyone ever predicted AI would automate the entire Kill Chain, it could be right.

Corey Nachreiner  34:02  
Never would have guessed. Now see in these specific examples, though, it still wasn't agentic. Ai doing everything individually, one at a time. It was humans doing continuous prompting. But even Microsoft, at the end and their conclusions, pointed out that that is the next step, like the next logical step is just automate all of this through automation. The anthropic report already confirms that's at least happening among the most advanced state sponsored actors. Yeah, exactly in one case, it's a we are definitely in uncharted waters here now with, I think, the speed at which and the volume at which we're going to get cyber attacks. When anyone can go get open claw and tell it to go hack anything it can. So you're telling me that basically it's going to be aI against AI, and that AI is going to automate attacks, and perhaps you need to auto.

Corey Nachreiner  35:00  
Eight defense, right? Yeah, that's like, good prediction. Corey, I wish. I wish companies that knew security were talking about this earlier Microsoft got to it. Yeah, yeah. If only

Corey Nachreiner  35:15  
sorry, my sarcasm meter is on high today. It is pretty nuts though, like, this is the future playing out in real time. It feels like, I don't think we're that surprised. I think we I don't know about you, but I got a glimpse of this in 2014 thanks to DARPA. But

Corey Nachreiner  35:33  
it still It blows my mind at how quickly it's evolving. Though, like it was maybe four years ago. Five years ago, we were looking at AI written movie scripts, and being like, this is stupid. Yeah, I always talk about how, or I used to talk about how advanced black hat in, in how much the reason we like research and presentations at Black Hat and DEF CON is it was real research, not just

Corey Nachreiner  35:57  
vendory stuff, and they would talk about things that aren't really happening. So I remember when it was like the virtualization blue pill root kit.

Corey Nachreiner  36:07  
They showed how it was possible and predicted it, but then I never saw it used in the wild for like, seven freaking years and then only a couple times. So they're usually way ahead of the game. But for AI, the we've been talking about it for a long time, but the route to it being realized, not just anecdotally, in a few cases, but in full and going faster than the industry can keep up, is definitely new.

Corey Nachreiner  36:37  
Yeah, I guess that's what happens when you pump trillions of dollars into a technology over the course of a couple of years. So you might figure out that if you teach technology to think for itself, it might actually do a pretty good job at learning faster than you can imagine. Hmm, what's next? Mark, of course, with quantum we're only worried about encryption, right? That's not going to influence this in any way. It won't get what's faster than exponential. AI in innovation and threats are exponential. What happens when you add quantum, which is almost exponential computing, when they start to get more, more than three bit quantum states, what happens then, think we enter the singularity and we all just cease to exist.

Marc Laliberte  37:31  
But I mean, we are already at the point where AI is teaching itself like it was anthropic that said their latest models were like largely driven by just reinforced, like the models teaching themselves over time. I forgot the fancy name they use for it, but this is so weird, because it's so cool, like you and I like technology, this is cool. This has cool.

Corey Nachreiner  37:54  
But the second you put on your hacking hat to carry a

Corey Nachreiner  37:59  
swell like it's really this paradoxical dichotomy, where I really like technology. I think it's an amazing invention of mankind and has so much potential. But it's also like, oh crap,

Corey Nachreiner  38:16  
hang on to your seat belt.

Corey Nachreiner  38:19  
It is like, so damn cool. But like you said, like all I can do right now is just like, sit back and smile and go, yep, everything's going to be great. I wish my brain didn't connect these dots Exactly. Just let me enjoy the idea of the coolness with blind ignorance, please. Oh, I'm looking forward to seeing what us as defenders put out this year when it comes to leveraging the same technology to just exactly make me feel a little bit better knowing I can sleep at night when the door is getting hammered down by our future. Ai overlords, stuff for 15 years, some of them, but we have some cool stuff in the works as well. Yep, maybe the future is going to be okay. Corey, I don't know. Yeah, every old man my age yells at the sky, and it turns out the five generations later, they're better off. So maybe I'm just at that age and I'm influencing you. Youngins, I'm not ready to be at that age yet. My lawn AI,

Marc Laliberte  39:24  
yep,

Marc Laliberte  39:26  
I want AI in my lawn. It can mow in water for me. That sounds great. Well, no, never mind. I don't want to attach AI to spinning lawn mower blades. That sounds awful.

Corey Nachreiner  39:37  
I guarantee you, it already exists in the lawn mowing room robot that doesn't use light or and uses AI to assist in when I was in Norway, every single house had a robot lawnmower up front. So the future is here, there too, at least in Norway.

Marc Laliberte  39:55  
Everyone, thanks again for listening. As always, if you enjoyed today's episode, don't forget to rate review and subscribe.

Marc Laliberte  40:00  
If you have any questions on today's topics or suggestions for future episode topics, you can reach out to us on blue sky. I'm at it's mark.me, or is it source it to an LLM that answers our questions for us. Mark, that's good point. If you want to reach out to us, just enter it into chat. GPT, and I'm sure we'll find it in a couple of years.

Marc Laliberte  40:19  
Otherwise, you can reach us on Instagram at Watchguard underscore technologies, thanks again for listening, and you will hear from us next week.