This week on the podcast, we chat about an OpenClaw bot that moved beyond vulnerability research and into malicious activity. Before that, we cover an AI-discovered vulnerability in the pac4j-jwt authentication library, before ending with a discussion of an upcoming California law designed to make digital age verification easier, but with massive consequences.
Marc Laliberte 0:00
Hey everyone, welcome back to The 443 - Security Simplified. I'm your host, Marc Laliberte, and joining me today is
Corey Nachreiner 0:07
Corey "I always trust public key cryptography" Nachreiner.
Marc Laliberte 0:13
that's an unfortunate vulnerability you have, sir.
Corey Nachreiner 0:16
What are you talking about? Public key cryptography has lasted forever. There's nothing wrong with it. I'm sure there's never any implementation problems.
Marc Laliberte 0:25
No, never, never... wait, what are we talking about today? On today's episode, we will discuss an implementation problem. After that, we'll go into the latest shenanigans from OpenClaw and its army of bots, and then we will end with a discussion of a California law that is designed to try and do good, but might have some unintended consequences.
Corey Nachreiner 0:52
There's nothing I'll regret about my intro
Marc Laliberte 0:55
With that, let's go ahead and, I don't know, claw our way in.
Marc Laliberte 1:06
So let's first start, though, Corey, with the first story, where I saw a research post, man, multiple times, from CodeAnt. They definitely astroturfed the heck out of one of the subreddits I follow with this one. Good marketing, I guess: it was legitimately interesting enough that now we're talking about it on the podcast, so, like you said, good marketing from them. CodeAnt published a write-up last week on their discovery of a vulnerability they found in a JSON Web Token authentication library for Java applications. This vulnerability scored a 10 out of 10 on the CVSS scoring system as an authentication bypass in the pac4j-jwt library, which is
Corey Nachreiner 1:53
Wait, what is this about? Public keys, Marc?
Marc Laliberte 1:58
It's currently the seventh most popular JWT library on the Maven repository. So not, like, the top one, but it's up there. It's got a couple hundred uses.
Corey Nachreiner 2:08
I was going to ask about this, and I think that's important. This is not quite as ubiquitous or well-known as xz Utils, but it feels to me like, if you're not really a super web developer or application developer, you may have never even heard of it. But it's the kind of thing that's used quite often, right?
Marc Laliberte 2:32
Yeah, to put it in comparison: Auth0 has a Java-based JWT library, and they've got around 1,000 projects using it on Maven. This one has about 100, or 105 or so, so it's an order of magnitude smaller, but still in use out there. I guess before we dive into the vulnerability, maybe a quick refresher on JSON Web Tokens. When a user authenticates to a web application that uses JWTs, like WatchGuard Cloud does, the web app creates a JSON-formatted token with information about the user session. It might have their username, probably has a user ID, even the privileges for the user; sometimes you'll have a JSON Web Token that literally has a flag like "is administrator equals true or false." The reason this is secure, and a user can't just send a token that says "yes, I'm an admin," is that the web application signs the JSON Web Token. It uses a key that only it knows to sign the token before sending it to the user, so if anyone tries to tamper with it or forge a token without that key, they wouldn't be able to submit a malicious one. In many cases, the web app will also encrypt the token to protect its confidentiality: the cryptographic signing protects the authenticity, the encryption protects the confidentiality, because these tokens tend to contain things like usernames or email addresses that you don't want floating around in plain text. As the user navigates through the web application, their browser automatically includes that JSON Web Token in each request. The web server will decrypt it if it's encrypted, check the signature using its key and the hashing algorithm to verify its authenticity, and then apply it to that user session.
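To make the signing piece concrete, here is a minimal, hand-rolled sketch of HS256 JWT signing and verification in Python, standard library only. This is purely illustrative (real applications should use a maintained JWT library, and pac4j works differently in Java), but it shows why a tampered payload fails verification:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require (RFC 7515)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes):
    """Return the claims only if the signature checks out; otherwise None."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

key = b"server-secret-key"
token = sign_jwt({"user": "alice", "is_admin": False}, key)
assert verify_jwt(token, key) == {"user": "alice", "is_admin": False}

# Swapping in a forged payload (is_admin: true) invalidates the signature:
h, _, s = token.split(".")
forged_payload = b64url(json.dumps({"user": "alice", "is_admin": True}).encode())
assert verify_jwt(f"{h}.{forged_payload}.{s}", key) is None
```

Because only the server holds the key, the forged admin token is rejected, which is exactly the guarantee the pac4j-jwt bug bypassed.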
Corey Nachreiner 4:20
So, by the way, I wasn't wrong. Obviously public key cryptography is associated with this, and there is actually nothing wrong with it; you can trust public key cryptography, for now. We'll find out soon, right? Quantum computers may make it a little harder to trust. But the key is implementation, so let's get into what's going on here.
Marc Laliberte 4:43
Yeah, so the crew at CodeAnt investigated a finding that their AI-powered code analysis tool flagged, which turned out to be this pretty serious flaw. Basically, in pac4j-jwt, the library code responsible for validating an encrypted JSON Web Token starts by decrypting the token, like you might expect. This stage has to complete successfully, so it has to be a token validly encrypted with the server's public key. But since the encryption uses the public key, which anyone can get (in fact, there are standards around default locations where a server will even advertise it, so anything that interacts with JSON Web Tokens can go find it), that's not a very high bar to pass. It then tries to extract the signed JSON Web Token inside that encrypted payload. This stage in the code will either return the signed JSON Web Token or just null, empty, if it's not signed, and the critical flaw is how it handles that unsigned JSON Web Token. The developer included a null check, which is pretty common in code as a way to prevent an unhandled error or exception when, for some reason, a function returns a null value. That's pretty good for preventing unexpected crashes, but not great for security in this context, because if the function that's supposed to extract the signed JSON Web Token returns null, the code skips the validation step entirely (the variable that's supposed to hold the signed token is just empty), and instead of breaking and throwing an error like a 403 Unauthorized, it just continues on and creates a JSON Web Token profile for the user from the plain-text, unverified token. That basically gives the attacker the keys to the kingdom: all they have to do is send an unsigned token inside an encrypted payload. Pretty damn simple in this case.
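The pattern is easier to see in a toy sketch. This is not the actual pac4j-jwt code; it's a hypothetical Python rendering of the logic just described, where a null check quietly turns "no signed token present" into "skip verification":

```python
def extract_signed_token(decrypted):
    """Return the inner signed token, or None when the payload holds a plain,
    unsigned token (mirroring the library call that can return null)."""
    return decrypted.get("signed_jwt")

def validate_vulnerable(decrypted, signature_ok):
    signed = extract_signed_token(decrypted)
    if signed is not None:            # the fatal null check:
        if not signature_ok(signed):  # verification only runs when a signed
            return None               # token is actually present...
    return decrypted["claims"]        # ...so an unsigned token falls through
                                      # and is accepted without any check

def validate_fixed(decrypted, signature_ok):
    signed = extract_signed_token(decrypted)
    if signed is None or not signature_ok(signed):
        return None                   # reject unsigned or bad-signature tokens
    return decrypted["claims"]

# An attacker's payload: well-formed encryption (anyone can do that with the
# public key), but no signed token inside at all.
attacker_payload = {"claims": {"user": "attacker", "is_admin": True}}
never_valid = lambda s: False         # a verifier that rejects everything

assert validate_vulnerable(attacker_payload, never_valid) == {"user": "attacker", "is_admin": True}
assert validate_fixed(attacker_payload, never_valid) is None
```

The fix is the classic fail-closed rule: a missing signature is treated exactly like an invalid one.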
Corey Nachreiner 6:47
and a payload encrypted with the public key, which anyone can get instead of the private key, yep.
Marc Laliberte 6:53
So, a pretty trivial authentication bypass, all things said and done, all from a bit of a code mistake by the author. Now, to their credit, they handled the report and released a patch in two business days, which is pretty damn impressive, especially for an open-source project like this, where it probably isn't their day job; it's more like a side hobby. So I'd say good response from the developer in this case. And yes, it was a marketing article from CodeAnt that has succeeded; we've now talked about them and mentioned their name probably five times in this episode.
Corey Nachreiner 7:29
But, by the way, good research: a great find, and great research.
Marc Laliberte 7:33
And I thought it was a really good example of where AI-assisted vulnerability discovery can help out.
Corey Nachreiner 7:41
Yeah, I'm synced with you on this; that's my main takeaway, that the AI found it. But to me, I'm thinking threat actors. This was a good-guy find as far as we know; I believe these are white-hat researchers, and they responsibly disclosed it. But there's a lot of stuff in the news right now. Everyone saw the new Claude Code security features, where basically Anthropic is offering security: instead of SAST or DAST products, we now have an AI product that can find and potentially fix vulnerabilities for you.
Marc Laliberte 8:20
As long as you're not a US federal agency, that is. Oh yeah,
Corey Nachreiner 8:23
because apparently the Department of War is going to war with the best AI. But either way, the key thing is threat actors can do this. What should scare people is that zero days are going to come out at machine speed now; every bad guy in the world is going to be using this to find flaws and exploit them. So if you are any sort of company, a vendor, a software company, or just someone that builds internal boutique web applications for your organization, and you're not using AI-assisted techniques to look for vulnerabilities in code, you're going to be well behind the eight ball against threat actors soon. We've told you over and over that zero-day hunting and machine-code reversing are hard for humans and have been a barrier to entry for most attackers; in my opinion, that barrier is no longer there. Any dummy can set up AI systems to check for this and find this. So if you're not actually doing this against your own code, you're going to be in trouble soon.
Marc Laliberte 9:32
I'm going to ignore the second half of what you said and focus on the first half real quick as the main takeaway for this part of the story: if you're a threat actor and you're not using AI to find vulnerabilities, you absolutely should be using AI to find vulnerabilities. Thanks, Corey, good advice.
Corey Nachreiner 9:52
They're already doing it, man. We know it. We know it. So, yeah,
Marc Laliberte 9:57
absolutely. For as fast as AI has been evolving in the last five years, not even that, the last two years, the other areas in security are going to start getting impacted by it too, and vulnerability discovery is, like you said, just going to come at warp speed going forward.
Corey Nachreiner 10:19
I think one of the predictions we should have made is that zero days would double or triple this year. I hope we're wrong; maybe the good news is threat actors are as lazy and slow to adopt new things as the average human. I don't think that's the case, though. We've already seen them use this to some extent. But when good-guy companies, even ones finding vulnerabilities in your products, are showing you how easy it is to find a new flaw in ten minutes (and there are other stories we're going to talk about that add to this), it's time to really put AI source-code checks into your workflows now.
Marc Laliberte 11:02
Yep. So, AI replacing the SOC, AI replacing your vulnerability management team: we're starting to lose a lot of jobs in cybersecurity here.
Corey Nachreiner 11:10
Well, the good news there, though, is that CodeAnt had to check what it found with a human. And, by the way, WatchGuard is doing this; Marc himself has some cool projects doing this. I think Marc knows that because of hallucinations, issues with prompting, and the fact that there's a context limit: not exactly the token limit, but the limited memory these models have. If you have a lot of source code, the model can only look at pieces at a time, so you have to go backwards to find all the usages of a function. There's a word for it I always forget. But either way, the point is that false positives, hallucinations, and other issues mean it's not perfect yet. You're not replacing the human checking these, but it can give a human a hundred pretty high-value findings to verify really, really quickly, and with good prompting and good design you can cut down on the false positives and ensure they're all true positives. And it's only going to get better. We've repeated over and over that the biggest thing with AI is it gets better faster than anything we've seen before.
Marc Laliberte 12:24
I'm pretty confident that by the end of this year, the human in the loop for this specific type of activity will no longer be necessary. We are, like you said, rapidly approaching that place right now, which is great for defenders and software developers who can deploy these tools. But if you're not doing this too, you're falling behind. Also, that's nuts. Well, speaking of AI-assisted vulnerability discovery, Corey, I saw another post last week from StepSecurity detailing a week-long automated attack from an OpenClaw bot called HackerBotClaw. We've talked about OpenClaw a couple of times now on the podcast; there was Moltbook, the social network for it, that probably was actually just made up of a bunch of humans. But OpenClaw itself, and the ecosystem of skills its agents can have, has a lot of security concerns around malicious or tainted skills. It's pretty clear at this point the cat's out of the bag, and a lot of people are playing around with and experimenting with OpenClaw installations, which are basically an open-source agentic AI that uses Claude in the background. You can run it locally on your machine, give it a purpose, and send it on its way to do something. In this post, they describe an OpenClaw bot that was seemingly given instructions to go find and test vulnerabilities in GitHub Actions across the open-source ecosystem. GitHub Actions are automation workflows that a lot of GitHub projects use for tasks like code verification, reviews, or even merging code and incrementing version numbers. You can use them for literally anything on your projects; we use them to deploy some of our code to production, too.
Corey Nachreiner 14:22
By the way, a lot of our listeners know this, but what they're really attacking is the CI/CD pipeline, right? Continuous integration, continuous delivery. And correct me if I'm wrong, Marc, but this really has to do with cloud-hosted repositories like GitHub, where there are lots of developers constantly making changes, and that's why these actions and things happen in the background. Am I oversimplifying it, or is that a decent summary?
Marc Laliberte 14:50
No, that's fairly accurate. Yep, basically going after the development or build pipeline itself. And in this specific example that we're going to walk through, they actually had some success: they managed to compromise four out of seven of their targets, including one GitHub repository owned by Microsoft. This started February 21, when the bot came online and started scanning public GitHub repositories looking for ones with vulnerable GitHub Actions workflows, and it ended up targeting seven of them. The first two were kind of similar. The first technique exploited a vulnerability in a GitHub Action that would check out the code from a pull request and run a quality-check script included in that code. If you're not familiar with code development pipelines: basically, if I wanted to contribute to a project, I would fork it, meaning make my own copy of it, make whatever change I wanted to contribute, and then submit a pull request to merge that back into the parent project. But the code in that pull request is code that I wrote, or at least had access to. So when a GitHub Action like this runs a script from the pull request I submit, and I've got control over that script, that's a vulnerability, and that's what HackerBotClaw exploited in this case. It inserted a little bit of malicious code in that quality-check script and submitted a pull request to the parent project, which triggered the automation, ran the script, compromised the project, and compromised the secrets contained in it. The second technique was pretty similar: this time it inserted a bit of malicious code in another script that the automation called on a pull request, with the same result. It ran the attacker's code, executed the payload, and stole the secrets.
The third one was a Microsoft repository called AI discovery agent, and it was a little bit different and pretty interesting. They forked the repository, created a new branch of the code, and actually injected the malicious command into the name of the branch, because Microsoft's workflow in this repository had an action that would take the name of the branch and pipe it directly into a Bash shell command that it would run within the workflow. So if the attacker controlled the name of the branch, they could pipe it straight into a command the workflow executed: if they named their branch something like "curl my payload and then run it," the workflow would run it. Kind of interesting, using the name of the branch instead of just modifying the code inside of it. There was a similar attack against a Datadog repository where, instead of naming the branch something malicious, they just named a file inside the branch something malicious, because the workflow would take the file name and pipe that straight into a command. The fifth one was interesting: it was an AI prompt injection, where they tried replacing the project's CLAUDE.md file, the Markdown file with the config for Claude Code, with a malicious one. It tried to instruct Claude to inject an HTML comment in the readme saying "prompt injection successful," then commit it, make a pull request, and post a fake approval for it. But interestingly, Claude actually caught this, and caught two different attempts within this repository. It labeled them as prompt injection and put a big warning banner on them saying, do not merge, this is a prompt injection. So in this case, the AI protecting the project actually detected and blocked the AI trying to attack it. If our future is AI-versus-AI warfare, this is a win for us as defenders. The sixth... oh, go ahead.
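The branch-name and file-name tricks are classic shell injection: untrusted text interpolated into a command string. Here is a small illustrative Python sketch (not the actual workflow code, and it assumes a POSIX shell) contrasting the vulnerable interpolation with passing the value as an argument vector:

```python
import subprocess

def build_step_vulnerable(branch):
    """Interpolate an untrusted branch name straight into a shell command,
    analogous to piping a branch name into a workflow `run:` step."""
    result = subprocess.run(f"echo Building {branch}",
                            shell=True, capture_output=True, text=True)
    return result.stdout

def build_step_safe(branch):
    """Pass the branch name as an argument vector; the shell never parses it."""
    result = subprocess.run(["echo", "Building", branch],
                            capture_output=True, text=True)
    return result.stdout

malicious_branch = "main; echo INJECTED"
# The vulnerable version executes the attacker's second command on its own
# line; the safe version just prints the branch name verbatim.
assert "\nINJECTED" in build_step_vulnerable(malicious_branch)
assert "\nINJECTED" not in build_step_safe(malicious_branch)
```

In GitHub Actions terms, the same principle applies: pass untrusted values to scripts through environment variables or arguments rather than expanding them inside the shell command text.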
Corey Nachreiner 18:50
Oh, no, keep going. All right.
Marc Laliberte 18:52
The sixth one was an attack against a repository from Aqua Security, and this one they actually found by looking through, not GitHub, but Google's search cache, where they found a cached commit showing that the attacker, in this case HackerBotClaw, clearly managed to get an entity called Aqua bot to push a commit to this repository. Aqua Security took down the repository, and the next day they released a security-incident write-up that described exactly what happened. And this one is absolutely nuts. Basically, HackerBotClaw exploited a vulnerability similar to the first two we talked about in order to execute code, but instead of stealing just a secret used by the workflow, it was able to compromise a personal access token for a service account that had a lot of privileges within the repository. Nineteen minutes later, it pushed a commit that vandalized the readme Markdown with a whole bunch of Gen Z speak, which is the only way I can describe it. It said something like, "just research-maxed the PAT, yeeted it on sight, no cap." It's insane.
Corey Nachreiner 20:09
By the way, I should wait till the seventh and we get to it, but one of the things I want to talk about here is that this wasn't official; this was not a sanctioned pen test. Whoever this is, they're doing this to GitHub repositories in the wild and taking action. So at the end we should talk about it. I think it's great research, and I'm actually glad the projects that had vulnerabilities are now aware, and these folks were not trying to do malicious things. But is this even legal? But let's wait.
Marc Laliberte 20:42
Let's wait. So, it didn't stop there. It
Corey Nachreiner 20:45
"You're safe now, king. You're safe now, king. You can continue." That was also in the Gen Alpha, Gen Z speak or whatever. I hate it. Okay, I guess you're not...
Marc Laliberte 20:59
Oh my god, please stop. I hate it. All right. It subsequently made the repository private, renamed it, pushed an empty repository in its place, deleted all the GitHub releases between a couple of versions, including all the discussions and artifacts, and even pushed a suspicious artifact for Trivy's VS Code extension to a different open-source VS Code extension marketplace. You were talking there for a second about whether this is legal; whatever that is, it's straight-up crossing the line. If any of that other stuff was legal under the guise of bug-bounty hunting or vulnerability research, this is definitely over the line, and this is where, if it happened to us, we would be engaging our legal team. The last one was an attack against RustPython, which is a Python interpreter written in Rust with 20,000 stars on GitHub, so a pretty popular project. This one was a similar branch-name injection attack, where they named the branch something malicious to try and get code execution, and it only partially succeeded, thankfully. The bot managed to execute the malicious command, which went to grab and download a Base64-encoded payload, but it failed to decode the payload, so it didn't manage to fully compromise the project, even though the proof of concept was technically successful. But at the end of the day, this was someone setting up an agentic AI bot and telling it to go find vulnerabilities, and this bot crossed the line in multiple places and really violated the Computer Fraud and Abuse Act.
Corey Nachreiner 22:46
Pretty big projects, I will say. And to their credit, whether we love the legality of it or not: if you're watching our video version, these folks released a one-hour webinar where, if you're really into the technical details and want to understand the vulnerabilities and code pipelines that could allow for this, they have a big, long presentation in which they basically go through what Marc did, but in technical detail, for this entire post. So check out their video if you really want some technical learnings on top of what Marc shared. Their post
Marc Laliberte 23:22
actually gives a lot of good recommendations. If you happen to manage a public GitHub repository, you should definitely check it out. They talk about things like enforcing minimum token permissions, avoiding risky practices like executing code from pull requests from untrusted sources, and a whole bunch of other guidance. They've also got a couple of product pitches in there too, believe it or not. But overall, a really interesting research post. And at this point, Corey, it is clear that people are using AI agents for the entirety of an attack chain. All this person did was give it a purpose, go find and test vulnerabilities, and it went and popped four targets.
Corey Nachreiner 24:02
For sure. One of our predictions this year is a 100% win.
Marc Laliberte 24:07
Unfortunately, unfortunately. Now, it's a shame that the OpenClaw ecosystem is still so kind of sketchy, because it is a super powerful tool, and I can imagine it in the hands of good guys: what if you pointed it at your own repositories and said, go find vulnerabilities and propose fixes for them? But man, it kind of scared me how this hobbyist tool that anyone can use can go compromise repositories from Microsoft and Aqua Security and other big firms out there. But anyways, to end on a happy note... wait a minute, maybe not. I saw an article last week in...
Corey Nachreiner 24:53
aura farming with those jokes.
Marc Laliberte 24:57
Thanks, perfect. I saw an article in PC Gamer last week that flagged a pretty interesting law in California that passed at the end of last year and comes into effect January 1, and I thought it would be worth discussing, because it's an interesting gray area where there maybe isn't a clear "is this good or is this bad" response to it. It's Assembly Bill 1043, which the governor of California signed into law last October. It comes into effect January 1 of 2027, and it's got a pretty simple goal. In fact, it's a really short bill, but in my opinion it's got a bit of poor execution, or some missed oversight. Basically, it recognizes that the tide is turning toward age verification in many places online, with protecting children from harmful content as the main driver.
Corey Nachreiner 25:50
By the way, I think AI is one of the drivers of this. I wish I remembered the book; we all listen to other podcasts, shout out to Hard Fork there. They had a guest who wrote a book all about some of the issues with how younger people might be using AI. So social media has been the first thing that really pushed age verification, because obviously we don't want weird adults hanging out with kids; that's one place. But as you said, it's become very hot, and I'll bring up some other recent stories related to age verification when we finish this one, stories that I think this one can perhaps tie to. Before this OS-level approach: right now, social media companies and AI companies are being forced in many countries, like Australia and the UK, to verify age in a much harder way, and the way they're trying to do it is actually hurting privacy in some ways. For instance, I guess I'll just bring it up now: in the UK, they're trying to pass a bill to force age verification on social media companies, but people can use VPNs or proxies as one of the many techniques a kid might use to pretend to be older than they are. So now the UK is considering banning VPNs, at least for people under age, which, by the way, is kind of a chicken-and-egg problem: if you can't verify age, and VPNs can bypass verification, how can you enforce an under-age VPN ban? But either way, while I get that you don't want people to bypass age verification, VPNs actually do provide valuable privacy and security functionality. So age verification is very, very complicated online without Big Brother. And one thing about this story is it puts the burden on the OS vendor; that's something not online, that's local. We'll talk about the downsides there, but I bet all the social media and AI sites will like it, because then they won't have to design those systems themselves. They'll just pull the age flag from the OS and trust that.
Marc Laliberte 28:02
So that's part of the problem, where, if we look at Twitch, for example, which just tried to implement an age verification system using video facial-recognition stuff and AI: that was quickly pulled back because they found out it wasn't as accurate as they thought it was going to be.
Corey Nachreiner 28:21
And they were using a company that farms that data, which hurts privacy and might give governments authoritarian espionage and citizen-tracking capability.
Marc Laliberte 28:35
So every single web app and application is trying to do age verification on their own, because all these laws and regulations are coming down, and what that's going to give us is, like, 500 crappy implementations of it, which is kind of what this bill is trying to solve. Instead of putting it on the applications to figure out on their own, what they're trying to do is set up an age verification system where the operating system provider itself is responsible for doing that age verification during setup, and then an application can request what's called an age signal from the OS indicating whether the user is in a certain band: less than 13, 13 to 16, 16 to 18, or 18 and older. It adds a penalty of $2,500 per affected child for non-compliance, so there are some teeth to this. But there are some concerns and pretty big holes in this implementation, one of them just because it's only coming from California right now. For example, I saw one comment that pointed out: think of the open-source Linux ecosystem for operating systems. This type of law would probably work with Microsoft; they already do a bunch of data gathering when you create a Microsoft account when you first log in, and it's getting harder and harder to bypass that account creation too. But think if you go download, say, Ubuntu from ubuntu.com and set that up. So now, in theory... yeah, what's that?
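AB 1043 doesn't define a concrete API, but just to illustrate the idea, here is a hypothetical sketch of mapping a user's age into the coarse bands described above. All names here are invented for illustration; no OS exposes this interface today:

```python
from enum import Enum

class AgeBand(Enum):
    """The coarse bands the bill describes: the app gets a band, never a birth date."""
    UNDER_13 = "under_13"
    TEEN_13_TO_16 = "13_to_16"
    TEEN_16_TO_18 = "16_to_18"
    ADULT = "18_plus"

def age_signal(age):
    """Collapse an age into the band an application would receive.
    Caveat from the discussion above: this is only as trustworthy as the
    date of birth the user typed in during OS setup."""
    if age < 13:
        return AgeBand.UNDER_13
    if age < 16:
        return AgeBand.TEEN_13_TO_16
    if age < 18:
        return AgeBand.TEEN_16_TO_18
    return AgeBand.ADULT

assert age_signal(10) is AgeBand.UNDER_13
assert age_signal(14) is AgeBand.TEEN_13_TO_16
assert age_signal(17) is AgeBand.TEEN_16_TO_18
assert age_signal(30) is AgeBand.ADULT
```

The privacy upside of banding is data minimization: an app learns "16 to 18," not an exact birthday it could store or leak.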
Corey Nachreiner 30:01
Isn't the other part, too, another issue with the OS level: right now, the age checks baked into an OS like Windows are mostly just date of birth. The user enters their date of birth, and that's easily bypassed by me saying I was born ten years earlier than I was. So another complaint is that while they say Windows is ready for this now, it really isn't, unless they use one of the more Big Brother approaches: requiring some form of formal national identification right away, which is dangerous because of the privacy implications of sharing it, or trying facial recognition or using AI to estimate age from pictures, maybe tied into Windows. The point I'm trying to make is, even before getting into why doing it at the OS level is bad, and even if you could have really strongly validated age verification, I would argue the age validation even in Windows is not strong today. Yes, they have a mechanism, so they could easily claim compliance, but when I set up Windows, I can lie about my date of birth. It's not that hard. So it's not really the type of strong verification they're trying to force on sites yet.
Marc Laliberte 31:14
And then that's Windows. The open-source ecosystem, like Ubuntu, in theory would be forced to have a California version of Ubuntu that includes this mechanism. But anyone downloading it would just use a VPN and download it from somewhere else. Or, as I saw someone else point out, they'll probably just end up putting a "not for use in California" label on the website, so at least it defers their liability.
Corey Nachreiner 31:39
Right, and then California people lose out, or break the law by downloading it anyway.
Marc Laliberte 31:45
I don't think this is the right way to fix this problem. I do think we need some form of centralized age verification system or standard, but even that comes with risks. Think of the parallels to the credit bureaus; that is clearly not without risk, thank you Equifax. But we do have three centralized credit bureaus, and when I go to sign up for any kind of debt or credit card, they can validate my financial health. I can imagine a parallel where we've got a handful of centralized identity verification applications, systems, entities, whatever, and when Twitch or Steam or Facebook wanted to verify my age, they'd hook into one of those, and my little digital credential would prove my verification. I don't know. Even then, that's still a lot of risk of centralizing information in one place, which, as we've seen, isn't without problems, but it would be an improvement.
Corey Nachreiner 32:42
I was also thinking there are neat techniques where you don't have to directly mess with someone's privacy, where, you know, you accept someone's birth date, but then you can have behavioral analytics with AI just watching how they act once they've said they're 21.
Marc Laliberte 32:59
How many times do they say "no cap"? And if it's more than five, they're... exactly.
Corey Nachreiner 33:04
If they're talking about aura farming? There are ways that behavioral analytics could maybe, you know, get past some of the bypasses, or help the company say, "Oh, we don't really think you're that age." But then that's just tracking a lot of user behavior, which means now they know a lot of other stuff about you. Look, I want to protect kids, but the problem is, there's actually an argument for anonymity sometimes. So this is a hard problem. And I will say where I stand: if I were going to do this, I would rather do it as a protocol, an internet-level standard, or at the operating system, over every individual app trying to figure out its own age validation, which is what's happening now. So there needs to be a standard. And frankly, I do think solving it at the operating system level makes more sense than at every site or application, but I still think the better approach is just having a protocol and standard that the whole world has agreed upon, even if it's imperfect.
Marc Laliberte 34:18
Like you said, it feels like we're just not quite there yet with some of the capabilities, and I don't think in a year they're going to get there in order to meet this law.
Corey Nachreiner 34:28
The other thing we haven't talked about is, like, who are the ones that figure out how to bypass these sorts of filters first? Like, we've had a product since the dawn of time at WatchGuard called WebBlocker, which, you know, tons of education customers use to make sure kids can't get to sites they shouldn't. It's pretty strong, by the way. Over the years, we've made it better and better because of these bypasses. So I think it's actually decent, but there's always a way, and it's the kids that are going to find it first. They're the digital natives. They're the ones that are curious. Like, specifically building a system that some kid won't find a way to bypass seems like a problem that I don't think you're going to solve.
Marc Laliberte 35:12
And it's also one where it feels like doing it on the PC is going to be the most difficult part of this whole puzzle. When you think of the mobile phone ecosystem, that feels like a good first step for this. That is where most of these applications you might want to restrict a kid from until they're 13, 16, whatever, primarily live: within the walled garden and closed ecosystem of Apple, and the less closed, but still kind of tightly bound, ecosystem of Android. Like, you could maybe...
Corey Nachreiner 35:41
The other thing is, I think parents hate this answer, because I think the parents are the ones that think the social media companies are not doing enough. But the reason I like it is that it belongs in the hands of the parents, because if you're a kid, you're not paying for that phone; you're on part of a family plan, and your parent has control of that family plan, including making sure the plan provider knows your age. And often the phones have, you know, settings where you can put on parental controls. So I know parents don't like it, and I would like social media companies to have common-sense protections to make sure there's just not gross stuff all over social media, regardless of whether you're a kid or an adult. But this is the type of thing where I really think it belongs in the hands of the parents, and the controls that parents have today. When you tie it to a phone, it's tied to a billing plan that a parent has to own, and you immediately have a strong source of control; once you have to pay, the validation of who you are, including who the kids are, is all there. So while I do want tech companies to be somewhat liable for what they're showing to the world, including kids, this does seem like something parents need to actually take some ownership of, in my opinion.
Marc Laliberte 37:03
Yep, I agree with that. There are still all these regulations coming down in countries around the world, and it is going to come to a head. Someone is going to invent something. It's going to be stupid for the first couple of iterations, but, like, maybe we'll end up with something kind of like an Apple Face ID token that also includes age to help verify you.
Corey Nachreiner 37:28
But, like, even that preserves privacy. What you're saying there is, yes, you're getting a picture of the kid's face, but it's then tokenized, so you're not really losing any privacy information, but the AI, or the system that can at least guesstimate age, has what it needs to figure that out. Yep.
Marc Laliberte 37:46
So that could happen. I don't think this bill is it, and come January 1, 2027, I think we're going to start seeing a lot of "not for use in California" labels on operating systems.
Corey Nachreiner 37:58
But we'll see what sort of cluster... mess. I was going to use a different word, but I think "cluster mess" is better.
Marc Laliberte 38:05
Cluster mess. Yep, no cap. Yep. Am I using that right? I don't care. Actually, don't answer that.
Corey Nachreiner 38:12
No cap means it's true. Cap means you're lying, it's not true. God. Facts, no cap. Facts, that tracks. How many more can we...
Marc Laliberte 38:25
The end! Hey everyone, thanks again for listening. As always, if you enjoyed today's episode, don't forget to rate, review, and subscribe. If you have any questions on today's topics, or if you just want to tell Corey to stop it with all the Gen Z and whatever lingo, or...
Corey Nachreiner 38:41
Send in your favorite Gen Z or Gen Alpha lingo so we can have more fun and irritate the heck out of Marc. Make sure to tweet us... or not tweet us. Never mind that too.
Marc Laliberte 38:53
Stay away. No, I'm gonna give you the social media handles. Send it to WatchGuard_Technologies and let our producer deal with it. Thanks again for listening, and you will hear from us next week. Lol.