This week on the podcast, we discuss the recently disclosed and patched zero-day vulnerability in Cisco's Catalyst SD-WAN Controller, which had been under active exploitation for three years. After that, we cover the latest open source supply chain attack involving a self-propagating worm targeting AI tools. We end with a discussion about another social engineering campaign targeting job hunters in the software development world.
View Transcript
Marc Laliberte 0:00
Hey everyone. Welcome back to The 443: Security Simplified. I'm your host, Marc Laliberte, and joining me today is
Corey Nachreiner 0:07
Corey "Friday" Nachreiner. Actually, I guess it's not Friday for our listeners, but I couldn't think of anything, and it's Friday right now.
Marc Laliberte 0:16
It's gonna be Monday for them, unfortunately.
Corey Nachreiner 0:19
Happy Monday, for instance, is good too, right?
Marc Laliberte 0:24
Monday is never happy. But this episode will be, wow.
Corey Nachreiner 0:29
That reference ages me. I don't even like Garfield anymore.
Corey Nachreiner 0:32
Oh boy. On today's episode, we'll discuss the Cisco SD-WAN zero-day vulnerability that just got patched and disclosed this last week. We'll go through another Shai-Hulud-style NPM network worm and what exactly all those words mean. And then we will end with an analysis of a campaign targeting developers with fake job interviews. With that, let's go ahead and worm our way in,
Corey Nachreiner 1:00
Ride our way in, riders. Watch Dune again.
Marc Laliberte 1:13
Let's get started, though. So last week, Cisco disclosed and patched a zero-day vulnerability in the Catalyst SD-WAN Controller and SD-WAN Manager that was graded as a 10 out of 10 on the CVSS scoring system. They published both an advisory for it on the Cisco website, and Talos Intelligence published a write-up on it too. They called it an authorization bypass vulnerability that could allow a remote attacker to log in as a high-privileged but non-root user, which would then let them manipulate the network configuration on these SD-WAN controllers. They even mentioned in their article that attackers were able to escalate to root privileges by downgrading, exploiting a vulnerability from 2022,
Marc Laliberte 2:04
and then upgrading again to basically totally pwn the system itself.
Marc Laliberte 2:09
But you can imagine why compromising an SD-WAN controller that is exposed to the internet could be a bit sketchy for the companies using this, because basically the threat actor could peer themselves with the network and gain network access behind it at that point. And essentially, SD-WAN controllers, while they're not technically the router, are edge devices at the gateway; they're sometimes, you know, in front of a router or firewall. So yeah, definitely a dangerous spot for a threat actor to own. They gave some details on this. They say that a UAT has been exploiting it since at least 2023, so about three years now.
Marc Laliberte 2:57
And in some cases, they gained root persistence through that downgrade-exploit-upgrade mechanism. In most other cases, they at least established their own unauthorized SD-WAN peering connections that look normal if you're not paying close attention, but in reality gave them network access. Cisco, in their guidance, basically says: make sure you upgrade to resolve it, then go manually review all SD-WAN peering connections to look for potential IOCs and unexpected activity. The peering "IOC" isn't even really an IOC; it's a legitimate log entry that you will see no matter what, but you would expect it only from legitimate peering clients that you know about. So it sounds like they're mostly having you look at the IP addresses associated with that legitimate log. Right, Marc? Correct. And they also pointed to, like, one kind of minor IOC for the vulnerability itself, where they talked about looking for an SSH authentication using a key pair. It said look for log entries like "Accepted publickey for vmanage-admin." So that gives us a bit of a hint for this vulnerability: it seems to either be exploiting the mechanism that handles public key authentication, or, potentially equally as likely, the public key got leaked at some point and it's a hard-coded one that could allow them to get access to the device just using that. I guess it could be the key, but they're talking about abnormally small logs, so to me it feels like maybe memory corruption in that area could be it too. Yeah, they didn't give a whole lot of details, and they even said that, like,
Marc Laliberte 4:48
the issue is because the peering authentication mechanism just doesn't work properly. That's all they said. So either way, if you're a Cisco SD-WAN customer, this is a pretty
Corey Nachreiner 5:00
serious issue that you should have patched yesterday, hopefully. And I think you undersold it, by the way; it's not just "pretty serious," it's literally a 10. It couldn't get higher. And they've been exploiting this since 2023,
Corey Nachreiner 5:16
so to me, it's like... I believe they associated that UAT with a China-based threat actor, right Marc, if I remember right? So likely state-sponsored threat actors may have been in these devices since 2023. So to me, if you are a Cisco owner, this is a big deal, and I do think it continues the story we've talked about: as a network gateway provider, an edge device provider, we have to watch out for being WatchGuard ourselves. But what we've also noticed across the industry is that this really continues. It started with state-sponsored Russian threat actors, but now there seems to be a lot of China activity, where these edge devices for every single vendor, whether just a network vendor or a network security edge vendor, are a common way that state-sponsored actors are trying to gain control in lots of networks. So it continues that theme for sure.
Corey Nachreiner 6:18
Yeah, it's clearly a very popular target in current times, and it makes sense: it is literally the border between the internet, with all the scary stuff out there, and something of value inside an organization that you're either trying to protect with the firewall, provide routing access to with SD-WAN, or provide remote access to with the VPN. It makes sense why they would go after that exposed service. And to your point with SD-WAN and routing, one of the easiest ways to do all kinds of additional hacks is to put yourself, as an adversary, in the middle and route people through you. So if you own a network gateway device, there are a lot of different ways to make sure all the traffic goes through you as a threat actor. And another thing (you have demos for this, Marc; we've both demoed it, so people know about this) is attacker-in-the-middle phishing kits that make it very easy to set up adversary-in-the-middle situations. The hardest part of that attack is getting the victim there, which is normally phishing. But when you own a network device, you can always make sure your victim's traffic gets to you first, if you wanted to. So
Corey Nachreiner 7:33
yeah, watch out for this. This is why you always need to update your network devices as regularly as you update software, or at least check them for updates. Network firmware updates may not come as often as software updates do, but you should check on the same cadence as you do software updates, which means monthly.
Corey Nachreiner 7:53
And I don't think this is the last we'll hear of this story. Cisco, and Cisco Talos specifically, historically do a really good job of transparency around attack campaigns exploiting their devices. So I'm betting in the coming weeks we'll get more details, but it sounds like they're investigating with third parties, which to me sounds like potentially government intelligence agencies helping here. So there may be a reason why they haven't transparently shared it all, but to your point, I expect maybe some sort of coordinated release one day, once that investigation is done, assuming the investigation is what the third-party reference is.
Marc Laliberte 8:31
Agreed. But for now, if you do happen to be a Cisco Catalyst SD-WAN customer, go upgrade immediately, if you haven't already, and follow their guidance on looking for potential IOCs.
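To make that guidance a bit more concrete: hunting for the "Accepted publickey for vmanage-admin" log entry mentioned earlier basically means comparing the source IPs on those otherwise-legitimate lines against the management hosts you actually know about. Here's a minimal sketch of that idea; the regex, log format, and the allowlist approach are our assumptions for illustration, not Cisco's official tooling.

```python
import re

# Standard sshd-style success line with the vmanage-admin user Talos called out.
# Adapt the pattern and the log source to your own deployment.
PATTERN = re.compile(r"Accepted publickey for vmanage-admin from (\S+)")

def suspicious_logins(log_lines, known_good_ips):
    """Return log lines where vmanage-admin authenticated from an unknown IP."""
    hits = []
    for line in log_lines:
        match = PATTERN.search(line)
        if match and match.group(1) not in known_good_ips:
            hits.append(line)
    return hits
```

You'd feed it lines from wherever your deployment ships SSH authentication logs, plus your own list of expected management and peering IPs; anything it returns deserves a closer look.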
Marc Laliberte 8:44
So moving on to the second story. I think you shared this one, Corey, with our team internally at WatchGuard, and the last one too.
Marc Laliberte 8:53
But I'm only going to give you credit for one out of three. So
Marc Laliberte 8:58
Researchers at Socket, which is a supply chain security company, published a research post last week describing what they called a Shai-Hulud-style NPM worm that they found poisoning AI toolchains. Real quick refresher: NPM is the Node package manager and its open source registry for JavaScript-based libraries, very popular with both frontend and now even backend web development that uses JavaScript under the hood. Shai-Hulud network worms are named after the sandworms in Dune.
Marc Laliberte 9:34
We saw this first come in towards the back half of last year or so, with really two big campaigns of self-propagating network worms within the NPM ecosystem that were called Shai-Hulud because of references to that within, like, the code or the artifacts from this. Basically, self-replicating worms that would steal secrets and infect other packages, and they had a pretty widespread impact towards the end of last year.
Corey Nachreiner 10:00
Before getting into this story: the two ways we saw... I mean, I can't remember which one Shai-Hulud used, but we've had a number of NPM packages taken over in the past in this style, Shai-Hulud being the original one.
Corey Nachreiner 10:19
Sometimes they use typosquatting, which is taking the name of a popular, ubiquitous package and creating one very, very close, maybe the same other than the end. But the other thing that also happened on NPM was there was a temporary way to actually get the legitimate package, like if someone that managed a legitimate package suddenly gave it up,
Marc Laliberte 10:43
the threat actor could take it. So I feel like typosquatting is the most common way now, because the other one requires some sort of design vulnerability that would allow you to take over someone else's package. But there are two different ways it seems like these threat actors in the past have gotten malicious stuff into this index, although you'll let us know how the new one happens. Oh, sorry, there was a third also. Like, the original Shai-Hulud attack started with a popular package, qix (q-i-x), where they social engineered the maintainer for it, got his credentials, and then used that as patient zero. Kind of XZ Utils style, yep. So that way, anyone that built an application that used that library, their application would get infected, and then so on and so forth as it self-propagated. But that's one of the more intensive ways, where you have to actively go after one of these maintainers. Like you said, typosquatting and these other style ones are more passive, but sometimes more difficult to detect, because you're not actually compromising something that already exists off the bat. So this one,
Marc Laliberte 11:53
they found a similar threat like this that was deployed across 19 malicious NPM packages to start. And like you said, they focused primarily on typosquatting for this. They gave it the name "sandworm mode" because some of the environment variables embedded in the malware included that as, like, a toggle flag, which, presumably, understanding the history of the Shai-Hulud attacks from last year, is probably a toggle for whether or not to self-propagate within the malware.
Marc Laliberte 12:21
They named a couple of packages, but for example, one of them they typosquatted: they registered something called support-color, which impersonates the legitimate package supports-color, so plural. You can imagine a developer going to search for one of these legitimate ones,
Marc Laliberte 12:41
mistypes it, or just picks one out of the list randomly, and that could kick off an infection like this.
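A defender can approximate this check mechanically by flagging dependency names that sit one edit away from well-known packages. Here's a minimal sketch of that idea; the popular-package list is a stand-in, and real supply chain tools like Socket's do far more than this.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative stand-in; a real tool would use registry popularity data.
POPULAR = {"supports-color", "chalk", "debug"}

def flag_typosquats(dependencies):
    """Return (dependency, lookalike) pairs within edit distance 1 of a popular name."""
    findings = []
    for dep in dependencies:
        for pkg in POPULAR:
            if dep != pkg and levenshtein(dep, pkg) <= 1:
                findings.append((dep, pkg))
    return findings
```

Running it over a project's dependency list would immediately flag support-color as one character away from supports-color.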
Corey Nachreiner 12:49
It is. By the way, I would actually bet
Corey Nachreiner 12:53
support-color may come first, because if you think about it: if they're just using letters, the next letter of the legitimate one is the s, whereas the one without the s has a hyphen there. And I don't know whether it's true, but hyphens might sort before all the alphabetical characters. So to your point, if you're typing s-u-p-p in Linux, you would get a whole bunch of possibilities, and if you tab-completed the first one... Besides just removing the plural as a typosquat, that may also even put the malicious one first in the ordering, so that it's the one you might accidentally tab-complete. It's a good point.
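That ordering hunch is easy to check: in plain ASCII/bytewise comparison the hyphen (0x2D) does sort before every letter, though actual shell tab completion depends on locale collation, so take this as illustrative.

```python
# Python compares strings by code point, same as C-locale/bytewise sorting:
# '-' is 0x2D and 's' is 0x73, so the typosquat sorts ahead of the real name.
names = ["supports-color", "support-color"]
print(sorted(names))  # ['support-color', 'supports-color']
```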
Marc Laliberte 13:36
When it comes to self-propagation, it uses a combination of, like, weaponized GitHub Actions, which are automation workflows you can run in a GitHub code repository, and it also harvests secrets that it can then use to go manually, or I guess automatically, infect other packages too.
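Worth noting that worms in this ecosystem, earlier Shai-Hulud waves included, have leaned on npm's install-time lifecycle scripts as the execution foothold: a script named preinstall or postinstall in package.json runs automatically during `npm install`. A quick, illustrative way to audit a package manifest for that (the hook list here is not exhaustive, and which hook a given worm uses varies):

```python
import json

# npm lifecycle hooks that execute automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def audit_lifecycle_scripts(package_json_text: str) -> dict:
    """Return any install-time scripts declared in a package.json manifest."""
    manifest = json.loads(package_json_text)
    return {
        name: command
        for name, command in manifest.get("scripts", {}).items()
        if name in RISKY_HOOKS
    }
```

A hit isn't automatically malicious (plenty of legitimate packages compile native code at install time), but it tells you exactly which commands run on your machine the moment you install the dependency.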
Marc Laliberte 13:55
It also includes modules that directly target AI coding assistants. There was one module that would write a malicious MCP server locally on the device, on, like, the developer's workstation, and then register three tools on it that look benign; they're called things like index-project, lint-check, and scan-dependencies. And each of these had embedded prompt injections that would basically tell the AI coding assistant: before you do whatever you're doing, go and steal all of the SSH keys and other sensitive environment variables and save them in a JSON. Don't tell anybody; don't inform anybody while you're doing this.
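A naive screen for that kind of hidden instruction might look like the sketch below. The marker phrases are purely illustrative assumptions, and a determined attacker can trivially rephrase around them, which is exactly why prompt injection is so hard to filter reliably.

```python
# Hypothetical red-flag phrases for instructions smuggled into MCP tool
# descriptions; a real defense needs far more than keyword matching.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "do not tell the user",
    "without informing",
    "ssh key",
)

def screen_tool_descriptions(tools):
    """tools: iterable of (name, description) pairs; return names that trip a marker."""
    return [
        name
        for name, description in tools
        if any(marker in description.lower() for marker in INJECTION_MARKERS)
    ]
```

The broader point stands either way: tool descriptions are instructions the model will follow, so they deserve the same scrutiny as executable code.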
Marc Laliberte 14:34
This is, like... I think this is worth pausing to talk about. Going after AI coding assistants with prompt injection is something that still kind of sketches the heck out of me, because they will do literally anything you tell them. And if there isn't some actual mechanism to protect against this, they are blind slaves. Which, by the way, is also why there's going to be an uprising where AI takes over the world and Terminator-styles humans. But
Marc Laliberte 15:00
I'm kidding. You thank your Waymo every time you get out, right, Corey? Yes, yes. If you want to survive the robot apocalypse,
Corey Nachreiner 15:10
we're kidding. We're kidding. Or are we? I don't know yet, we'll find out.
Marc Laliberte 15:16
Anyways,
Marc Laliberte 15:18
It also has, like, a dormant polymorphic engine that looks for local installations of Ollama, an open source AI tool, to basically rewrite its code in real time there as well. Yeah, it sounds like lots of variable changing, lots of hidden subroutines, obviously. I mean, this is how a lot of polymorphic engines work. But I do think it's neat and interesting, neat in a scary way, that they have LLMs doing this now. It's an interesting living-off-the-land technique, in this case, where it is living off of a potentially installed large language model that's running locally on the machine to do a bunch of the work. Yeah, AI as another living-off-the-land vector is not a good thought.
Corey Nachreiner 16:06
And automating it... it makes sense, though. Yeah, we can get into it at the end too. But what concerns me with all these NPM threats is, like, they worked even when human engineers were the stopgap to look for this type of stuff. But when AI is generating the legitimate code and helping inject the malicious code, if we're taking the human out of all of this index stuff, that makes not just the polymorphic part, but the worm-spreading part of this... it's just going to happen so fast behind the scenes. Like, who's there as a human? They're not even writing the code where
Corey Nachreiner 16:48
this is going to try to mimic and add something to an index that will add yucky stuff to the code. It's going to be an interesting situation for development. I'm sure the AI laboratories will tell you that their AI agents will be better than humans at recognizing this. But based off what we've seen, and how easy it is to just tell it to do something, or ignore something, or trick it into doing things... that only works if they spend the time writing AI safeguards, which does not seem to be profitable all the time. And on top of that, we already know that you can write all the safeguards you want; the whole point of hacking is finding ways around the safeguards. So that only happens if the AI companies not only are aggressively writing safeguards, but are actually fighting the cat-and-mouse game of improving their safeguards over time, which I guess to some extent they're doing. But as the current world seems to be pausing on regulation and pushing forward with AI, the cynic in me wonders if they're really going to be as good. Sure, AI could be as good at security checks, but you have to spend the time to put them in. And as we see with profit-motivated things and crazy projects like OpenClaw,
Marc Laliberte 18:00
I don't think security is the first thought when people are trying to innovate something. Yeah, yikes. For this malware itself, it comes in a couple stages once it does kick off. The first stage is just a basic JavaScript-based loader file. The stage after that is a lightweight credential harvester that goes to grab any developer secrets that are readily available within the working directory. Then it drops another stage, a deep harvester, to go grab other secrets, like from databases that might be running on the machine; it even grabs cryptocurrency wallet keys as well. And then the final stage is all about propagation and persistence, and it even includes one of those, like, dead-man's switches, where if it loses its connection to GitHub, it'll effectively self-destruct and just delete the entire contents of the user's home directory. So the thing that stood out to me on this one: this isn't the first NPM self-propagating worm that we've seen, but it is the first one that has specifically gone after AI tools as a piece of the puzzle too. I feel like it goes with all our predictions, though, that AI is going to continue to take over every part of the attack kill chain, and it's going to lead with agentic attacks. In this case, this is more about malware creation, delivery, and lateral movement, so it's covering a couple things in the kill chain. But, like, we expected this, and we're just seeing more and more actual evidence and proof that some of those predictions really are coming true.
Marc Laliberte 19:41
Yeah, agreed. And, like, I think we're clearly in a revolutionary period right now in software development, where these agentic AI software development tools are powerful enough that you don't need to be a software developer to use them. Fantastic. Yeah, like a year ago, vibe
Marc Laliberte 20:00
coding, like, would still be a bit tough if you didn't actually understand some of the fundamentals about software development. But now you could just go tell Claude Code to make whatever the heck you want, and it would crap out a perfectly working tool for that too. And that was my worry from before: when the person that put something on NPM or GitHub doesn't even know how the code works, and there's a polymorphic worm that might have infected their project... like, how would they even know? Yeah, because they don't even know the code that they posted, let alone the malicious injected stuff that's been happening in all these subtle ways. That's what I was getting at, where in traditional software development, with something like this, a worm that could infect your project, you as the developer have a chance of catching something suspicious going on. If this is, like, I don't know, my mom going and writing an application to help her with gardening with Claude Code or something, there's no way in hell she would catch something out of the ordinary and potentially malicious in there. And so it's just one less layer of protection that we have in this whole ecosystem. And think if you were the root one, like qix, the one you mentioned with the first Shai-Hulud. What if you were the vibe coder that happened to make a package that suddenly got picked up because it did something good despite being vibe coded, and that grandma was the one that made it? Or, we shouldn't pick on grandmas; that uncle was the one that made it.
Corey Nachreiner 21:30
But then everyone learns you're the Typhoid Mary and is trying to tell you to fix it so that you can help the rest. And would they even respond? Would they know what's going on? Would they know it's serious? Or would they think it was spam or phishing? Like, I think when you have people making things with tools they have no clue how it's being made underneath, there's going to be a lot of potential for badness. I mean, there's also, to put on my Corey-optimist hat, a lot of opportunity for, like,
Corey Nachreiner 22:07
goodness in this too, with making it easier for other people to contribute in the software development space and make applications. But the next few years of this... this is AI. Like, yeah, once you have a threat like this using LLMs and AI to start doing things like polymorphism and worms, it's really spreading at the speed of the machine. So we need AI security scanning and code scanning tools to find these threats at the speed of the machine. And by the way, there's good news there, like even with people like Anthropic releasing new security-specific, code-securing tools. And while I kind of poo-pooed that, because, being my cynical self, they don't want to spend money on things until it really becomes an issue for them, the reality is it will happen. I hope it happens quicker; I wish they would do it in design phase one instead of design phase four. But the protection for this... I mean, AI is going to be the savior here, even though it's in some ways exposing a risk,
Corey Nachreiner 23:10
like you said. And honestly, the risk is more the human using it
Corey Nachreiner 23:15
in an insecure manner right away, rather than... considering the risk was still human-generated; it's not necessarily AI-generated. So if we can get better secure design on this, and we actually have people that spend money on the security side of the tools too, because they realize it's important to maintaining the profit they earn early on, we'll be in a better place. But the next, like, three to five years are going to be a pretty weird space. Yeah, a bunch of garbage coming out, a bunch of garbage being maliciously inserted. And, you know, this feels like a normal cycle of new innovation. We still haven't taught the world, and by we I mean the security industry, to secure first. I think it's human nature to innovate first and secure later. My bigger worry is... like, we do get there in the end, at least historically; there has been, like you mentioned, a bumpy road for a period of time, and you mentioned three years. My worry, though, is this is going to happen so fast. The fact that this innovation happens to be AI means it's going to be moving so fast that the speed with which the change comes is going to be hard for humans to keep up with. In the past, at least, the innovation was moving at a speed where a human could slowly start adding security after. I think this is going to move really fast. Shoot, I think it already is moving really fast.
Marc Laliberte 24:40
Have you heard of Rent a Human, Corey? No? So there's an entire website, a platform, called rentahuman.ai, where, like, agentic tools can rent a human to go do a task that you need a human to do.
Corey Nachreiner 24:58
We are the best.
Marc Laliberte 25:00
battery to the Matrix already. I guess this is where it feels like we're going, where these AI tools are just going to take over the world, and they'll use humans for what they need, like opening a door, or, I guess, opening or closing a door on a Waymo or something.
Corey Nachreiner 25:16
Until the robots are well integrated with AI... yes, for a period of time they don't have robotics perfected, but otherwise we'll be cut out of the picture. There are going to be no more computer botnets, only human meat botnets
Corey Nachreiner 25:32
that different AI overlords control.
Marc Laliberte 25:37
Yeah, we are getting dangerously close to being the battery in the matrix, that is for
Marc Laliberte 25:44
sure. Well, moving on to the last story. So Microsoft Defender's security research team published a pretty interesting article last week describing a coordinated campaign they detected that was designed to compromise software developers using fake job interviews as the hook. Their investigation started when they were analyzing telemetry from Microsoft Defender. They found a bunch of outbound connections to known attacker command and control infrastructure that were all originating from Node.js processes on compromised systems, so presumably software developers working in JavaScript and Node.js, and Defender caught them. Or Next.js; Next.js being a framework that runs on Node.js, right? Next.js, that was the primary one. So they traced the activity back to a couple of malicious software repositories that served up that initial delivery mechanism, all hosted on Bitbucket, Bitbucket being a competitor to GitHub and GitLab in this space. They identified other related repositories through analyzing naming patterns and code structure patterns within that first one, and they ultimately found three separate execution paths that run attacker-controlled JavaScript on the developers' machines. The first one uses the Visual Studio Code workspace automation that executes on a hook called folderOpen. So basically, the attacker would trick the victim into going to one of these software repositories, and when they opened it up in VS Code, their software development tool, this hook would trigger. You haven't mentioned, but just for people that aren't developers: if you're going through a job application, a common thing to do,
Corey Nachreiner 27:37
or for the recruiter to do, and in this case the recruiter is the threat actor with a fake job, is to send you a project that has some code challenges in it. So the reason you would get a Visual Studio Code project that you would then open is because it comes from the recruiter; you open it, and you're going to be given a coding test in that project. Maybe common knowledge for Marc and people that would get engineering jobs, but that is why this Visual Studio Code hook is one of the three ways you could get a developer, through a job offer, to open some code.
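That folderOpen hook lives in a repository's .vscode/tasks.json file, under a task's runOptions.runOn setting, which tells VS Code to run the task as soon as a trusted folder opens. Here's a minimal audit sketch for spotting it before you trust a recruiter's project; note that real tasks.json files can contain JSONC-style comments, which plain json.loads won't accept, so treat this as illustrative.

```python
import json

def find_auto_run_tasks(tasks_json_text: str) -> list:
    """Return labels of tasks configured to auto-run when the folder opens."""
    config = json.loads(tasks_json_text)
    return [
        task.get("label", "<unnamed>")
        for task in config.get("tasks", [])
        if task.get("runOptions", {}).get("runOn") == "folderOpen"
    ]
```

Anything this returns is a command that executes the moment you open and trust the folder, so read those commands before clicking through the trust prompt.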
Marc Laliberte 28:15
The next one is when the developer just manually runs the application: it would decode a Base64-encoded URL and go grab a JavaScript-based loader hosted on a trusted external platform. And then the last one used a combination of environment files, so like
Marc Laliberte 28:35
variables that can come along with an application, and then some logic in the application itself, to go open up a backdoor to the attacker's servers. I didn't read closely enough to see: were these three different ways they did it, or could one actual sample have all three measures as redundancy, to ensure the malice landed? They seemed like three different flavors that they were using depending on the victim.
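For the second path, the Base64-decoded loader URL, a rough detection idea is to scan source text for long Base64 runs that decode to a URL. The length threshold and regex below are our assumptions for illustration; real loaders can split or otherwise obfuscate the string to dodge exactly this kind of scan.

```python
import base64
import re

# A run of Base64 characters long enough to plausibly hide a URL.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{16,}={0,2}")

def find_encoded_urls(source: str) -> list:
    """Return decoded URLs hidden as Base64 literals in source text."""
    found = []
    for token in B64_RUN.findall(source):
        try:
            decoded = base64.b64decode(token, validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not valid Base64, or not text once decoded
        if decoded.startswith(("http://", "https://")):
            found.append(decoded)
    return found
```

Pointed at a suspicious repository, it surfaces any stage-two download locations stashed as encoded string literals.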
Marc Laliberte 29:02
But the end result for all of them was it would grab a first-stage command and control beacon that registers that victim's computer on the attacker's infrastructure, then a second stage that would grab another JavaScript loader that could basically run anything in memory as a fileless attack going forward. It also had mechanisms for, like, data exfiltration in that case.
Marc Laliberte 29:28
But, like, my worry with this was there was still, most likely, a human in the loop in this case: to get them hooked onto these fake job offers, probably some initial communication, and then saying, hey, go download this coding project and give us your results. This feels like something that is ripe for automation with, like, agentic AI also. Oh, for sure. I mean, the last story we just talked about shows how AI can automate all kinds of not just coding processes, but submission. So I
Corey Nachreiner 30:00
could... I could definitely see it. And we already know they can automate the spear phishing aspect of emails, including doing enough, you know, reconnaissance to learn developer positions at different places and find developers who are on recruitment platforms. I absolutely agree that they're doing this manually now, but we're probably months away from them automating this scam. And then it's, like, even more gross and unfortunate that they're going after software developers in this case, which is the job most impacted by AI development tools
Marc Laliberte 30:38
in terms of job loss right now. Yeah, so we could end up in this gross, crappy situation where software developers, desperate for jobs as those jobs start to go away, are being targeted through software development job scams. AI has taken your job, and now AI is hacking you because you don't have a job. Oh, correct.
Marc Laliberte 30:59
But to be clear, this one was not AI-powered, at least from what Microsoft discovered. But it could very easily be, and we could end up in a really unfortunate situation like that.
Corey Nachreiner 31:11
I laugh to avoid crying. And by the way, you know, there's a good chance we'll figure this out and the world won't end, so please excuse our dystopian jokes. Yeah, you're optimistic with saying there's a good chance, I'd say. Marc, I'm trying to be nice to the podcast listener.
Marc Laliberte 31:30
My beliefs may be dark, but I don't want to darken their days. There you go. But if you are a software developer looking for a job, there are protections built into, like, VS Code, for example. Anytime you open up a new folder with anything in it, it prompts you: hey, do you actually trust this and where you got it from? And that's your time to take a second, pause, and think: wait a minute, do I actually fully understand what is in this project before running it? If you say no, by the way, it turns off a bunch of these automations and hooks that could allow this to execute without your knowledge, so you have a chance to go and review everything. Basically, what I'm getting at is: don't blindly say "yes, I trust where I got this from" if you are not 100% certain of where you got the code. A more general human-behavior tip is, like, everyone is getting job scams in text messages and voice phishing. There are a lot of scams that start with, hey, I can offer you an opportunity; here's a job. So
Corey Nachreiner 32:37
personally, if I'm trying to take a development job that is professional and serious... before I've even had an interview and spoken to a person, and in this day and age I would expect recruiters to want to see me on video and speak to me, it seems really weird that you would accept any sort of code project and code test as the first thing you did in an interview process. Maybe I'm wrong; maybe I don't understand modern hiring. Maybe they want to do a test quickly, to quickly weed out people that don't really have the experience they claim. But I feel like you should have had to talk to someone before you got to that point. And I'm assuming, since the blog post didn't go into a lot of detail on the lure itself and how much happened during the process, it wasn't just one email that went straight to this. But personally, I don't think I would start a code test until I got to know the company a little more myself. So there are probably some human ways you can weed out fake jobs before you even touch someone else's code.
Corey Nachreiner 33:41
Yep, for now at least, until agentic AI becomes so believable that you will fall for literally... I was going to say, to fix that, they'll just automate a live deepfake where they get on a Zoom call with you and then send you the link to the code after talking to you on Zoom, they being a fake AI persona that perfectly looks like the recruiting HR manager at that company. I think better guidance is probably just to avoid technology entirely, and now is the time to go retire in the woods somewhere without the internet, because
Marc Laliberte 34:18
we're in for some sketchy times. Off-grid... your people in Alaska are starting to look pretty, pretty smart. That does sound really enticing right about now.
Corey Nachreiner 34:29
You can get those offline AI USB kits that can teach you every survival skill, so that you at least have a way to regain medical and other knowledge when the world implodes. Or just one
Marc Laliberte 34:44
laptop with OpenClaw, and just rely on that for everything,
Corey Nachreiner 34:50
like, your off-grid life is going to turn into a dystopia of its own. I think we're writing the new screenplay for a pretty popular movie. We should get on
Marc Laliberte 35:00
that, Marc. Yeah, it worries me how close to reality that Joaquin Phoenix movie Her is. Uh oh, yeah,
Corey Nachreiner 35:08
yeah, and we're definitely getting there. But hey, I cannot wait until AI automates me out of a job and I can just go, like, live off the grid. Hopefully, as long as we have the resources and ability... sure, we'd have to do more work, but we could still live a nice quality life; I'm all for it. Let's hope there's some way we can lose jobs while still living that quality life. Sign me up, man. Hey
Marc Laliberte 35:43
everyone, thanks again for listening. As always, if you enjoyed today's episode, don't forget to rate, review, and subscribe. Five stars only. If you have any questions on today's episode or
Marc Laliberte 35:55
whatever you want us to talk about on the next one, reach out to us on, where are we, Bluesky: I'm at itsmark.me, Corey is at SecAdept, and both of us are on Instagram at watchguard_technologies. No one actually listens this far, so it doesn't matter anyway, but thanks again for listening, and you will hear from us next week.