This week on the podcast, we cover a recent report that highlights a drop in ransomware payments in 2024. After that, we discuss a recent attack targeting ASP.NET web servers before ending with a report on sensitive data leakage caused by AI model use.
Marc Laliberte 0:00
Hey everyone, welcome back to The 443 Security Simplified. I'm your host, Marc Laliberte, and joining me today is
Corey Nachreiner 0:07
Corey, the ASI's favorite student, Nachreiner. Please don't kill me.
Marc Laliberte 0:13
Something tells me that's not correct. But anyway, on today's episode, we'll be discussing why AI is going to take all of our jobs, or actually, more specifically, why AI is going to get a bunch of people fired from their jobs. Before that, we'll talk about a recent attack against IIS servers that are running ASP.NET web applications, and what the heck that means. And then we'll start with an interesting stat on the latest ransomware payment trends. With that, let's go ahead and generate our way in.
Corey Nachreiner 0:48
You don't have to generate, let's use machine learning. Stop generating for us. In fact, this podcast is not me. Stop generating this.
Marc Laliberte 0:56
Why is it not working? There we go.
Speaker 1 0:59
Stupid Corey AGI won't even listen to commands. Still hallucinating, too.
Marc Laliberte 1:11
So let's start this week with what used to be, I feel like, our favorite topic to talk about on this podcast.
Unknown Speaker 1:19
But was it ever really our favorite, or just prolific?
Marc Laliberte 1:22
No, not at all. Well, this was my favorite topic probably eight or nine years ago, but now I'm kind of burned out on it. Anyways, we're going to talk about cryptocurrency and ransomware and all sorts of fun stuff. So blockchain analysis firm Chainalysis just published a report last week with their analysis of ransomware payment activity in 2024, specifically the second half of 2024, to round out the whole year. The main highlight that was really interesting and stood out to me was that ransomware payments were down 35% compared to 2023. They were just around $815 million, so below a billion dollars now, and down by about a third, which is kind of interesting. They mentioned that this is despite payments actually being up slightly at the halfway point of the year. There were even some record-breaking payments, including a $75 million payment to the Dark Angels ransomware group earlier in the year, but payment activity slowed considerably in the second half. Looking at the top 10 ransomware strains, Akira is the only one of the top 10 that actually increased their revenue in the second half of the year compared to the first half. All the other ones dropped pretty considerably, with a couple just dropping off the face of the earth entirely, for reasons we've discussed on the podcast. Let's actually go over two of those right now. LockBit used to be one of, if not the, biggest ransomware operators, but they had an international takedown back in early 2024, and Chainalysis noted that their payments decreased by 79% in the second half of the year. BlackCat, also known as ALPHV, another major one, just entirely exited early last year through an exit scam after, I think, the Change Healthcare breach. They ran off with the extortion amount, didn't pay their affiliate, and just completely dropped off by the end of the year. So, pausing here, my main takeaway is that with BlackCat dropping off and no one really filling the void that they left, and same with LockBit as well, it looks like disruption efforts, or at least law enforcement activity, is having an actual, tangible impact on ransomware. Do you agree with that hot take, Corey?
Corey Nachreiner 3:54
Well, yeah, although I don't know if it's just that. I think authorities are definitely helping with LockBit and ALPHV specifically. But it sounds like, if you read Chainalysis's information, it's also that people are just not paying. More and more people are deciding not to pay, even when they get to negotiations and stuff, they decide not to, or they can recover much better. So I think it's a mix of both. And by the way, the only reason cryptocurrency is involved in this at all is that all of this comes from tracking wallets. I don't know if you mentioned it at the beginning, but the way Chainalysis gets this is they track known wallets as the funds bounce around all kinds of exchanges and other things that maybe we'll talk about later, so they can track where the money is going and how much payment they see coming into these wallets. I guess, theoretically, there may be new wallets they don't know of, but it sounds like they're also tracking new ransomware groups, and the new ransomware groups are often just making stuff up. But yeah, it sounds like, according to their analysis, it's a mix of authorities, which makes sense specifically with LockBit and ALPHV because of takedowns, but also just victims finally getting the message that if you keep paying, the attacks will keep coming.
Marc Laliberte 5:06
That was one of my first thoughts too: is ransomware down just because they aren't finding stuff? But I think you're right on that end, that maybe it is actually just people not paying. They actually showed some stats later in the report where they tracked the number of leaks that came out of ransomware operators compared to the number of payments. They assumed those would probably have some correlation, but they showed that the number of leaks was actually increasing over the course of the year, even though payments were dropping, which seems to indicate that ransomware attacks may still be trending up, but people just aren't paying the ransom demands.
Corey Nachreiner 5:46
Yeah, and to be fair, it seemed like at the beginning of the year maybe there was a little bit more correlation, but sometime during the year, maybe victims got fed up or something. Who knows. Or it could be when takedowns happened too. Yep.
Marc Laliberte 5:59
They had another interesting chart where they showed the spread of payment amounts. Historically, if you look back four or five years, the bulk of payments were in the $500 to $1,000 range, but that has slowly started skewing towards more expensive payment sums. Now there's a pretty large chunk of them in the hundreds of thousands to million-dollar range, which, they noted, was because of a few pretty major outliers that caused some of that skew. But just in general, ransom payment amounts are going up now too. The report has a lot of analysis, including a big section on how these threat actors launder their money, showing how they go through different wallets, split up payments, and go through different exchanges, like you mentioned, including ones that don't verify identities, using things like Tornado Cash, which is, or I guess was, an open-source way to launder cryptocurrency. I think the author of that one, who we talked about maybe a year ago, got arrested for developing that software because the US government decided it was specifically made for money laundering, which is illegal in this country. But either way, interesting report. I feel like we've said a few times in the last year that our hunch is the only way to end ransomware is to have people stop paying, and this seems to back up that theory: at least people are slowly stopping paying, and maybe that means we'll start seeing reduced ransomware attacks.
Corey Nachreiner 7:43
I don't know. I definitely hope so. It'd be interesting if they just changed to leak tactics, although how do they monetize that? Or just resale of the stolen data? Yeah, but I'm glad.
Marc Laliberte 7:54
I feel like we, collectively as an industry, have gotten better about data backups and data recovery, so organizations may not need to pay in order to recover their systems. And so it really is just that secondary extortion, where they say they're going to leak your data. My own opinion on that is, once they've stolen it, it's out there, and me paying or not paying them isn't going to stop it from being...
Corey Nachreiner 8:18
You trust them not to leak it later anyways, once you've paid?
Marc Laliberte 8:22
Exactly. A lot of this data does end up leaking no matter what, so why bother paying them money for it? And maybe, if that's the take that organizations start having, maybe we will finally see a big drop in ransomware in general. I don't know. Maybe.
Corey Nachreiner 8:39
It would be fantastic. I think it's possible. They had plenty of other techniques they used to monetize before those stopped working, and that got them to ransomware. But if ransomware dries up now, maybe they'll have to find other ways to monetize again. We'll see.
Marc Laliberte 8:57
What do you think they'll use for monetization next, or go back to using again?
Corey Nachreiner 9:02
I don't know. What data is going to become important now that AI is important?
Marc Laliberte 9:05
Just data in general, I feel like.
Corey Nachreiner 9:09
I don't know. Yeah, well, we'll see. We will see. Credit cards are definitely... maybe there'll be new flaws in EMV readers, and suddenly credit cards will become valuable again, if they find some way to fraudulently use them. Yeah, it evolves. But hey, if we get a decade of them not really doing ransomware and not figuring out a new way to monetize, I won't complain either. 100% agreed with that one. Then we just have to worry about the nation-states spying and knocking things offline. Yep.
Marc Laliberte 9:42
Good thing we are setting ourselves up for success as a country, at least on that.
Corey Nachreiner 9:47
Oh, yeah, sure. We're totally not making any mistakes.
Marc Laliberte 9:50
No, exactly. Moving on, though, there was an interesting post I stumbled across just the other day from Microsoft's threat intelligence research team. They posted a blog about an unknown threat actor's campaign that delivered the Godzilla post-exploitation framework to Microsoft IIS servers that were running ASP.NET applications. There are a couple of acronyms in there, so maybe we should talk about ASP.NET real quick for folks that didn't grow up making websites in the 1990s. ASP.NET is a server-side web framework used to create websites or APIs. You can think of it as a competitor to PHP or other server-side frameworks for delivering active content. But it's got this, in my opinion, kind of funky mechanism for how it maintains state for a user session as they're browsing across a website. Most other, or at least some other, popular website frameworks maintain the user state locally in a database, and as the user loads up a new page, they'll use the session cookie from that user to go retrieve the relevant data and then send it back to the user. ASP.NET does this, in my opinion, funky thing where it uses something called ViewState. It's basically a big blob of formatted data that is the user's state as they're browsing the website. It could be their account info, their shopping cart, the contents of the page that are dynamic for that specific session. All of that is actually transmitted to and saved locally by the user, and when they browse to another page or submit another request, they send that whole blob back to the server so the server knows what to work with. And as you might suspect, without any form of cryptographic protection on that, the user could theoretically just modify whatever's in there, send it back to the server, and the server wouldn't know. But thankfully, there is a mechanism for protecting it. They've got what's called a validation key, saved somewhere, hopefully securely, on the web server, that's used to cryptographically sign that blob of data with a message authentication code algorithm, so that when they send it to the user, if that user were to tamper with it, the MAC, the message authentication code, would no longer be valid, and the web server would know the user tampered with it. There's also a decryption key that the server can use to encrypt more sensitive values that it passes to the user. So basically, there are these two really important keys that, if an attacker were to compromise them, would let them forge a quote, unquote, valid ViewState blob and basically control whatever data the server was processing for the user. So, go ahead.
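To make the scheme Marc is describing concrete, here's a minimal Python sketch of a ViewState-style signed blob. It's an illustration under stated assumptions, not ASP.NET's actual serializer (real ViewState uses .NET's LosFormatter, and forged blobs typically lead to code execution through deserialization gadgets), and the names sign_state and verify_state are hypothetical. The point it shows is that anyone holding the validation key can mint state the server will trust.

    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical stand-in for the validationKey from web.config. Anyone
    # who obtains this secret can forge blobs the server will accept.
    VALIDATION_KEY = b"secret-from-web.config"

    def sign_state(state: dict) -> str:
        """Serialize session state and append an HMAC, ViewState-style."""
        payload = json.dumps(state).encode()
        mac = hmac.new(VALIDATION_KEY, payload, hashlib.sha256).digest()
        return base64.b64encode(payload + mac).decode()

    def verify_state(blob: str) -> dict:
        """Accept the blob only if the MAC proves the key holder produced it."""
        raw = base64.b64decode(blob)
        payload, mac = raw[:-32], raw[-32:]
        expected = hmac.new(VALIDATION_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            raise ValueError("state was tampered with or forged")
        return json.loads(payload)

    # The normal round trip the server expects:
    blob = sign_state({"user": "alice", "cart": ["ssd", "ram"]})
    print(verify_state(blob))
    # An attacker who knows VALIDATION_KEY can simply call sign_state()
    # themselves, which is exactly the forgery this campaign relied on.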
Corey Nachreiner 12:45
Well, I was just going to say, they do point out in their blog post that this was all covered in prior research about using stolen ViewState keys for malicious activity. But I found it interesting that Microsoft, to be fair, is pretty good about being transparent now, and in this case they're being very transparent. But if you go to the research from the Zero Day Initiative on this particular thing, at first, during a Patch Tuesday, Microsoft called this flaw, I think, just a normal memory corruption issue and didn't actually describe it correctly. And it took the Zero Day Initiative releasing how those ViewState decryption keys work and how you can actually use them to execute code. So they don't always get it right, or at least you've got to realize that in Patch Tuesday they may not completely describe the details. Yep.
Marc Laliberte 13:38
So anyways, Microsoft found that starting in December of last year, a threat actor was conducting a ViewState code injection attack using a, quote, publicly known machine key. Basically, they had access to both of those keys, the ones used to cryptographically sign and encrypt the data, so they could create their own custom ViewState object. When that gets sent back to the server, the server validates it and marks it as valid, because the keys used to sign it check out. It then loads it into the worker process memory and executes it, which basically gives the attacker code execution on the server. They found that the attackers were loading a tool called Godzilla. It's a toolkit that has functionality for executing malicious commands, injecting shellcode into processes, and other things you might expect from a stager or loader-type malware on a web server. But a couple of things stood out to me in Microsoft's report on this. They pointed out explicitly that this was a publicly known machine key, which in my head means people were using open-source example projects from GitHub that had keys baked into them, and they didn't bother changing the key before deploying to a production server. In their guidance, they also recommend securely generating keys, obviously avoiding default keys or ones listed in public repositories. So again, calling out that this seems to be a case of either a handful of leaked keys or ones that were part of public example repositories that people never knew to change or never bothered changing.
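As a rough sketch of what securely generating keys can look like, the snippet below pulls machineKey-sized values from a cryptographic random source. The key lengths are a common convention for HMAC validation and AES decryption keys rather than something taken from Microsoft's post, and the exact web.config attributes these values feed into aren't shown.

    import secrets

    # Generate fresh, per-deployment key material instead of copying the
    # values that ship with a sample project.
    validation_key = secrets.token_hex(64).upper()  # 64 random bytes for the HMAC validation key
    decryption_key = secrets.token_hex(32).upper()  # 32 random bytes for the AES decryption key
    print(f'validationKey="{validation_key}"')
    print(f'decryptionKey="{decryption_key}"')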
Corey Nachreiner 15:18
And I don't remember if you mentioned this number at the beginning when you went into what ASP.NET was, but part of this is that Microsoft has found over 3,000 of those publicly exposed keys, probably primarily in source repositories. So hopefully all those people and organizations are now being notified and changing that. But as you pointed out, it's another case of people accidentally leaking keys in public repositories.
Marc Laliberte 15:50
This is one of those where, like, I still like working on coding stuff in my free time, just to keep my hands dirty, and I often reference example projects when I'm working on something, either to get an idea of how something should work or, depending on what the project is, to use as a framework to build something. And more often than not, when you set up, let's say, a development build of something, the project on GitHub will come with an encryption key or a session secret, hard-coded secret material in there, just so it works while you're in a development state. And if you're not reading the docs and don't understand that you need to actually change that to something more secure, you could end up with that in production, and now everyone knows your shared secret for something.
Corey Nachreiner 16:37
So it's almost like some of these framework repositories' default state is insecure, and maybe they're not, what's the word, proactive enough to let you know you should have a more secure set of settings to start with, or that you need to go back and change this after you prototype.
Marc Laliberte 16:56
I kind of understand wanting to have a default config that just works without a user needing to do something, whether building an app or whatever. But I think there is absolutely merit to not including secret material and throwing an intuitive error when they try and run it, so they know they need to go generate a key, just to prevent this type of thing. And I think that falls in line with what CISA is recommending. Or is that still... are they still recommending it? I don't know. Did CISA kill the Secure by Design initiative yet? Has that been eaten up by...
Unknown Speaker 17:32
we don't know, but probably,
Corey Nachreiner 17:34
Like everything else. They're all fired, or maybe they're deciding whether or not to take their early-resignation payout.
Unknown Speaker 17:41
Exactly. Anyways, politics aside,
Marc Laliberte 17:45
it aligns with what they're saying: don't include default passwords anywhere. And maybe for web applications, even example projects having a mechanism for randomly generating secret material would probably be pretty beneficial to prevent this type of attack.
Corey Nachreiner 18:03
And if you are a framework that has those example projects to start with, at the very least be very vocal if you do ship a default password that's shared everywhere, so it actually gets changed.
Marc Laliberte 18:16
Exactly. And if you do have an IIS server, make sure that your validation key and decryption key are securely protected. Microsoft also recommends encrypting that sensitive information in the web.config file once you've deployed the application, to make sure those secrets are never in plain text on the file system. That can make it more difficult for an attacker to gain access to them and take over the whole system. But either way, it was an interesting campaign, and maybe at some point we will finally stop using default credentials all over the place in every application.
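For reference, the mechanism Microsoft points to for that last step is ASP.NET protected configuration: running something along the lines of aspnet_regiis.exe -pef "system.web/machineKey" "C:\path\to\app" on the server encrypts that section of web.config in place, so the keys no longer sit on disk in plain text. The section name and path here are illustrative, so check Microsoft's guidance for the exact invocation for your setup.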
Corey Nachreiner 18:54
But yeah, because in the history of time, people have always eventually made the right decision and never repeated security mistakes. Exactly. Sorry, maybe the news is making me overly cynical, not that that's unusual for us.
Marc Laliberte 19:09
Well, here's another happy story for us to talk about. This one's a report from Harmonic, an AI risk management company. I tried to go through their website to figure out exactly what they do, and it seems to be mostly about monitoring organizations' use of artificial intelligence. But anyways, they put out a report called "From Payrolls to Patents: The Spectrum of Data Leakage to GenAI in 2024," where they analyzed what they said were tens of thousands of prompts sent to ChatGPT, Copilot, Gemini, Claude, and Perplexity, looking for data leakage trends in the fourth quarter of 2024. And they had a couple of interesting stats that I think are worth talking about. The first one was that they found 8.5% of prompts sent to these GenAI tools contained what they called sensitive information. For that sensitive information, the categories were: customer data at around 46%, which includes billing info, customer authentication material, profiles, payment transactions, even credit cards in some cases; employee data at 29%, mostly payroll information and PII; legal and financial data at 15%, mostly financial projections, portfolio data, legal disclosures, and M&A activity; security at 7%, mostly access control policies, system configs, network configs, incident reports, and user access logs; and then what they called sensitive code at 6%, which was almost entirely access keys, but in some cases proprietary code. Still, eight and a half percent sensitive information sent to these chat agents. Corey, any hot take on that?
Corey Nachreiner 20:59
It's just an ouch. I mean, I do still think we have more training to do within our own organization, but I think you and me have pointed out many times that if you're using a public LLM that you're not paying for, you should not put any data in a prompt that's important to your company or yourself. And this is just proof that people are. And I mean, I get it from a usability standpoint. GenAI and LLMs become useful when they have all your company's data. For an internal AI assistant, you want it to know everything, because that's how it becomes efficient and does everything cool for you. But you should know that you don't want to do that in a public, free AI. That's why we talk about paying for the right license, not just to give you your own instance, but to ensure that the company providing the LLM is not taking data from you, or just doing the hard work of building your own internal LLM. So it's just an ouch that despite the fact that, I think, every security expert has talked about how dangerous it is to share information when you're prompting a public AI, people don't care.
Marc Laliberte 22:14
So, I use ChatGPT all the time. I've got the paid version. My latest use has been, well, one of my kids is at the point in their physics class where I don't remember how to do things anymore, and ChatGPT is a great way to get an explanation that I can turn around and, first, make sure is accurate, but then use to help them solve or understand whatever problem they're working on. It's been great for that. Thinking of my own use for it, the amount of times I use it for things like that and just stupid stuff, like creating images, although I guess creating the images is technically through DALL-E and not ChatGPT,
Corey Nachreiner 22:53
But just GenAI in general.
Marc Laliberte 22:55
Yeah, totally. Just benign things. Eight and a half percent seems really high, really high for sensitive data input into ChatGPT.
Corey Nachreiner 23:05
But I think people are starting to realize they can use it for work, right? I mean, I do use it for work, but I also pay for ChatGPT. If I'm using one that's not under our control, I will only do it where I'm taking public security threat research and trying to quickly get summaries of it, or for basic writing tasks. The second there's any data that my company doesn't want out there, that WatchGuard doesn't want out, I'm not going to use that with the external AI at all. So 8% is a lot, if you think about it. Yeah, people use it to write Valentine's Day cards and other things, but I do think people have now realized, if they're smart, that, you know, if you're a writer and you spend four hours of your day trying to come up with, to be nice, I'll call it the stinky first draft... I had a writer tell me the hardest part of writing is just crapping out that stinky first draft the first time, and once you have that, it may actually suck, but making it better once you have the basic outline is much easier. So a writer can just take a bulleted outline and get something in writing really quickly. And if you think about it, if they're trying to do that for work, they might have to use the latest corporate stats on something. You can see why they're learning: oh, I don't only need to use this for funny, creative things at home, I can actually cut down the amount of time it takes me to write a report for work. And I think once more and more employees do that, and they don't have a corporate policy to keep them away from doing this in public prompts, it's going to be dangerous. And I don't mind people drafting writing with it. You know, I have my own ego where I want to make it my own, but it's dangerous for a company if people are getting that efficiency using sensitive corporate data that the company doesn't want released publicly.
Marc Laliberte 25:05
And they actually had a stat they tracked for that too. In their report, they found that 64% of ChatGPT users used the free tier, and it was also 59% of Gemini users, 75% of Claude, and 50% of Perplexity. They noted specifically that the free tiers generally explicitly state that they will train on customer data. And we've seen that there are mechanisms for tricking these models into regurgitating their training data in some cases, so inputting sensitive information into something that trains the model is a pretty big risk for that ending up becoming a data leak event.
Corey Nachreiner 25:45
On the flip side, by the way, if you can find efficiency gains on non-sensitive things, I think you'd be not smart as an employee not to use this for non-sensitive things. But just think about it: are you giving it your finance numbers? If you're writing an email to a customer, does it include actual private customer information? Maybe then you shouldn't use it to write the email that contains the private information, or you can have it start the intro to the email and do the private information manually, separately. We're going to see more of this, and I think companies need to really solidify their how-to-use-AI policies and have more security awareness training. In fact, Marc, that should actually be one of our custom videos this year for our security awareness training. People are 100% going to use AI, but we need to show them how to do it without just killing everyone's security.
Marc Laliberte 26:46
Yeah, because I think you hit the nail on the head that you're going to be falling behind if you're not using these tools, either personally or as an organization. I hate that I'm on that bandwagon now, but I'm 100% on that bandwagon. It doesn't come without risk, though, and so managing that risk with a good policy and training for your employees is, I think, the most important thing.
Corey Nachreiner 27:12
Yep. The cat's out of the bag, we might as well play with it.
Marc Laliberte 27:16
Yeah, I, for one, love that Copilot can just write all of my code for me these days now too. Makes it really easy, just tab-complete everything. Yep.
Corey Nachreiner 27:27
And I hate to say it, by the way, we've talked about it before, but for something like code writing, where you can actually verify the results logically, it's actually one of the places AI shines. I kind of worry about architects and engineers nowadays, because they'll have to find the more architectural design parts of their job, because code writing is one of the first things I think AI is going to explode at. Yeah.
Marc Laliberte 27:51
But thankfully, it does still make mistakes, and so we will all still have jobs, at least for a little bit, before we move into AI wrangling instead.
Corey Nachreiner 28:00
Too bad it learns faster than human babies.
Marc Laliberte 28:04
Not ideal.
Corey Nachreiner 28:07
What's your prediction for ASI? Not AGI, that's probably going to be here in two years. ASI, what year? ASI, is that the new artificial superintelligence, beyond the human race? Oh, good. I think 2030.
Unknown Speaker 28:25
2030? That's...
Marc Laliberte 28:26
I don't want to think about it.
Corey Nachreiner 28:27
You might want to lighten up your prompts now and just say please and thank you.
Marc Laliberte 28:33
I just need to stop calling ChatGPT an idiot anytime it gets the wrong answer.
Corey Nachreiner 28:39
And I need to stop complaining about how much Siri sucks at dictation, because she's listening.
Marc Laliberte 28:43
Oh God, Alexa is going to murder me in my sleep. Nightmare fuel. That's okay, that's a problem for future us, though, right? Yeah, right.
Speaker 1 28:56
For now... well, happily, we have plenty of fun things to deal with now.
Marc Laliberte 29:00
Exactly. Oh man, another day, another new fear. Perfect. Thanks, Corey.
Speaker 1 29:08
That's what we bring to security. Gotta hedge, you know, gotta mitigate the risks. Perfect.
Marc Laliberte 29:16
Hey everyone, thanks again for listening. As always, if you enjoyed today's episode, don't forget to rate, review, and subscribe. If you have any questions on today's topics, or if you have suggestions for future episode topics, because sometimes this is hard, you can reach out to us on Bluesky. I'm @itsmarc.me, Corey is @secadept.bsky.social. Both of us are also @watchguard_technologies on Instagram if you want to reach us there. Maybe send some cool pictures of podcast material, I don't know.
Corey Nachreiner 29:45
Or just click on the QR code I send you in WhatsApp. Don't worry about it. It will get you to us.
Marc Laliberte 29:51
Perfect. Thanks again for listening, and you will hear from us next week.