In this episode of the Futurum Cybersecurity Shorts series, I’m again joined by my colleague and fellow analyst, Fred McClimans, for a conversation on cybersecurity issues in six quick vignettes. Today, we covered:
- Google’s rollout of mandatory 2FA
- The targeting of Passwordstate, an Australian-based enterprise password management app, by hackers
- The discovery of over 40 apps, with more than 100 million installs between them, found to be leaking API keys
- Peloton’s leaky API and what that means for user data privacy
- The massive DDoS attack targeting Belgian ISP Belnet, and the impact of that attack on government, public-service, scientific, and education agencies, including the Belgian Parliament and some law enforcement agencies.
You can watch the full episode here:
Or stream the audio on your favorite podcast app:
And if you’d like just the short vignettes on specific topics, you’ll find them on the same channels, so be sure to subscribe once you’re there.
Disclaimer: The Futurum Tech Webcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Shelly Kramer: Hello and welcome to this episode of the Futurum Tech Webcast. This is part of our Cybersecurity Shorts Series, and I’m joined today by my colleague here at Futurum Research, Fred McClimans. I’m Shelly Kramer. I might have forgotten to say that.
Fred McClimans: Yes, you are, Shelly. I’ll introduce you. This is Shelly Kramer, my cohost.
Shelly Kramer: Awesome. There you go. Today, we’re going to talk about some exciting things, some interesting developments in the cybersecurity space, and I think you’re going to kick us off, Fred.
Fred McClimans: I am indeed. With Google, everybody’s favorite search engine platform. They are going to do something that every organization and every individual out there should have done last year at the latest: they are going to slowly start to require users of their platform, and we would expect some of their software tools as well, to actually use two-factor authentication in their apps. Now, two-factor authentication is simply a way for an organization, when you try to log into a site or update something in your security settings or your profile, to ask you to verify that you are in fact who you say you are.
Two factors. Two factors go into verifying your identity. Usually this is in the form of a text message or an email, or with some services they actually call you up and say, “Hey, you’re trying to log into your bank account. Is this really you? Enter your PIN here.”
Now, the important thing about this is that in the world of cybersecurity, phishing attacks are an incredibly effective way for threat actors to gain information about an organization: they put a little bit of information out there, which is used in turn to gather a little bit more information, which is used in turn to gather a little bit more.
At a certain point in time, individuals on the threat side have pretty much figured out a number of ways that they can impersonate somebody in this process here. Two-factor authentication kind of bypasses that and eliminates the threat a bit. Because when you’re trying to log into a site, it will ping you only on a known number that you have chosen. For example, the other day I logged onto iCloud. And as soon as I did on my phone, I got a message saying, “Hey, look, we need two-factor authentication on this. Go find another one of your devices that’s already logged into iOS or into iCloud and we’ll verify you on that device.”
A nice form of two-factor authentication there. But what’s interesting, Shelly, is that so many people just don’t do that today. They just use a password and log in. We don’t know exactly when Google is going to be doing this. All they’ve said is somewhat soon, and they haven’t really released any of the details, but we know it’s coming. At this point, pretty much any organization that is concerned about security, and that should be every organization and every individual, shouldn’t wait. Even before Google gets there, go to all your sites and turn on two-factor authentication.
It can be a little bit of a hassle sometimes, but the security feeling you get when you log into an app and it pings you on another device saying, “Look, is this really you,” is very much worth it.
Shelly Kramer: I think the only people who really think this is too much of a hassle to do anything about are the people who’ve never had their identities stolen in some way or their systems breached. I haven’t, but we’re security freaks. Something I thought was interesting when I knew you were going to be talking about this: I came across a tidbit while doing a little research that “how strong is my password” searches went up by 300% in 2020. At least people are thinking about that. Another thing that’s kind of cool, I’m a Gmail user.
I mean, there’s millions and millions and millions of Gmail users, but you can go to Google’s Security Checkup page and it will show you any of your passwords that may have been compromised in some way. It’s just a good thing. Again, turning the two-factor authentication on… I was actually sitting on my couch the other night, and I was trying to do something. I have a number of different Gmail addresses, and I was trying to do something. Log in and Google said, “Go to another screen and tell us if this is you,” and it was like, “Yeah, I don’t need this badly enough. I’m not going to do that.”
But it forces you to take action, to check, to get another device. All good things.
Fred McClimans: There are some downsides. Some of the implementations, Shelly, are not that great. I can tell you firsthand, I was on a mobile device where I used a browser to log onto a site, and the two-factor authentication came as a text message to that very device that I had in my hand. It’s not always foolproof.
Shelly Kramer: That happens.
Fred McClimans: I will say, when you were talking about passwords, a lot of people don’t realize that the old password engines of the past, where it had to be exactly X number of characters and they could only be a certain type of character, those are pretty much gone. You can in many cases actually have passwords that have spaces. Instead of coming up with a random series of numbers, you can use a phrase where you’ve changed the words around, changed the letters around, something like that. Password creation is not as difficult as it used to be.
Shelly Kramer: Yeah, absolutely. Well, good stuff. All right. Now we’re going to move on. Speaking of passwords, I’m going to talk a little bit about supply chain dangers and why your password management app might be targeted by threat actors. In this story of the week, Passwordstate, an Australian-based enterprise password management app whose parent company is Click Studios, alerted customers last week of a breach that they said occurred over just a two-day period, between April 20th and April 22nd. A password management app is breached.
That seems a little ironic, right? What happened is that hackers inserted a malicious file alongside one of Passwordstate’s regular updates. This made its way into the system largely by way of automatic in-place updates onto Passwordstate users’ computers and devices. And then when customers performed just the regular updates, some of them again automatic, over the course of that two-day period, a malicious file was downloaded. That set off a process that extracted a bunch of information, including all of the data that was stored in Passwordstate.
Think about what you put in a password management app: URLs, usernames, passwords. It also included information about the computer itself. Click Studios reported that users’ passwords were only exposed for about 24 hours.
Fred McClimans: Only.
Shelly Kramer: Actually, 24 to 28 hours is what they said. I wanted to step back a minute and just think about the potential damage. Okay? Passwordstate’s parent, Click Studios, claims a Fortune 500 customer base of 370,000-ish security and IT pros. That’s a big customer base. And then a smaller customer base of 29,000, I would assume individuals.
Fred McClimans: Go back for a second, because that security base or that base of users you talked about, you mentioned those are security professionals.
Shelly Kramer: Yeah.
Fred McClimans: These are the people that… If you’re a devious mind out there, these are the people you want to get. Because when you get them, you recognize they control so much for everybody else.
Shelly Kramer: Right. They manage credentials across organizations for all of their devices and all of their services. When you think about it in that way, it’s really kind of impossible to know at this point what the damage here is. Again, this breach did occur over a fairly short period of time. But importantly, this is a risk at the supply chain level. There’s always a risk at the enterprise level, at the government level. But going back even to one of the earliest big, big breaches that I can remember: Target.
When Target’s system was breached, it was because of a vendor and a lapse of security in the vendor that provided some kind of service. Again, the supply chain. You can have all the best security practices and procedures in place, but you can have a vendor that you rely on for something like a password management system. And just like that, you’re in trouble. This is why threat actors target supply chains. They look at who this organization is and then who the vendors supplying it are. It’s really not all that hard to figure out. I thought it would be an interesting segue from your conversation about Google.
Fred McClimans: There was an interesting point there. The vector of attack? Automatic updates that were sent out to a group of people. What does that remind you of?
Shelly Kramer: SolarWinds.
Fred McClimans: SolarWinds.
Shelly Kramer: Exactly.
Fred McClimans: Same approach. They’re getting smart. They’re finding ways to use the systems themselves to perpetrate increased penetration into organizations.
Shelly Kramer: Absolutely.
Fred McClimans: Clever. Clever.
Shelly Kramer: Absolutely. We’re going to move on.
Fred McClimans: Let’s talk a little and expand on that a bit with the hardware side of things. We’ve been talking about passwords and some of the risks and challenges there, but within the applications themselves… On your smartphone, I don’t know how many apps you have, Shelly. I go up and down. I’m constantly realizing I have about 300 too many. I delete a whole bunch and still it creeps back up again. But we use these applications not just on the local device; most of the applications on our phone also connect into the cloud.
Shelly Kramer: Right.
Fred McClimans: They have some resident database somewhere. They have software. They have updates that are coming from the software provider. In one instance, that’s been shown to actually be a risk itself. When your mobile device has an app installed, there’s a particular way that it communicates back to the cloud, to its parent company, to the developers, to other users, to the common shared database in the cloud.
What happens is some organizations have actually hard-coded the connection credentials, the handshake, between the device and the cloud database. Some on the threat side are now realizing that they can figure out what that hard-coded key is within a device, and then use that to exploit and gain access to data. Why does all this matter? There’s a Bengaluru company called CloudSEK, and they recently set up a site, a little platform, where people could test their apps.
And coming out of that, they realized that just in their limited exploration, they found 40 apps, with I think somewhere over 100 million installs combined, that all had what they’re calling leaky AWS keys. In essence, these apps had hard-coded the permission handshake between the application and AWS’s system in the cloud. Who does this impact? Adobe Photoshop, Hootsuite, IBM’s Weather Channel, to name a few. It’s a major issue out there. Again, it’s something that users themselves don’t necessarily think of.
It’s not a password or something that they need to take care of. It’s just something that’s embedded into the hardware. That’s an important point, because we’re increasingly seeing hardware-based attacks, hardware-based flaws, zero-days that are simply hard-coded into a device, and those are a challenge to fix. It’s not as if you can just log onto your device, make a few changes in the application configuration, and be all set. These typically require firmware updates from the manufacturer, and that’s an increasing risk and an increasing challenge.
For enterprises out there, you’ve got to be putting this onto your dashboard. You’ve got to be looking at the hardware-level risks that are out there and not just scanning for software vulnerabilities or accidental user misconfigurations. You’ve really got to start thinking: how do we make sure that we have the right intel, the right data, to make sure that our hardware is secure?
How do we set up the process so that when we do discover an issue, separate from what the vendors might do on their hardware platform, the enterprise can actually mitigate the risk of that type of attack when it occurs?
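The kind of leak Fred describes is usually found by pattern-matching an app’s extracted strings: long-term AWS access key IDs start with the documented prefix AKIA (ASIA for temporary STS keys) followed by 16 uppercase alphanumeric characters. A hypothetical sketch of such a scan in Python; the regex and function name are illustrative, not CloudSEK’s actual tooling:

```python
import re

# AKIA = long-term access key ID, ASIA = temporary (STS) key ID.
# Both prefixes are documented by AWS and are followed by 16 characters.
ACCESS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_keys(blob: str) -> list[str]:
    """Return anything in a decompiled app's string table that looks like
    a hard-coded AWS access key ID."""
    return [m.group(0) for m in ACCESS_KEY_ID.finditer(blob)]
```

A matching key ID alone isn’t exploitable; scanners typically also look for the 40-character secret key nearby, which is why shipping either in an app binary is the real problem.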
Shelly Kramer: While you were talking, I was randomly thinking about the absolute dearth of tech talent, especially as it relates to not only IT in general, but cybersecurity in particular. You know how your brain just kind of fires, and I was just thinking about what kind of person wants to do this. You love solving problems. You love riddles. You love competing against hackers, the adrenaline rush. I mean, there are so many fun things, but then you also maybe don’t like sleeping or a stress-free life. I mean, it really is just like you’re never fully covered.
You know what I’m saying? Again, we’re going to be good because we’re going to use a password manager app. Oh wait, that was breached. It really is so challenging. I’ve got the software covered. Oh crap, I forgot about the hardware part of this. It really is ever growing. There’s just always a danger. Were you going to say something? I was waiting. No? You were good? You were good?
Fred McClimans: I’m good. Thinking about it though, I was thinking back to the Dell study that we did last year on hardware security. How many data breaches involve the physical BIOS itself, the firmware in these devices? Users, overwhelmingly, expect the providers to be secure and to provide that level of security. I don’t think that enough organizations, once they get the hardware in, actually sit down and say, “What’s our plan or our strategy for how to address that?” I would also throw into that same leaky bucket the idea of plain misconfiguration issues.
There are a lot of devices. You have an organization that has five, 10, 20, 50,000 employees. All those devices that come in, all the laptops that they’re using, all the phones, if they’re controlled by the organization, there’s a whole process for ordering and procuring those devices or validating the devices. You ordered it. You got it. It’s right. You do your security checks, and then you push it out to the users in the field. Well, there’s a lot of firmware in those devices and that firmware needs to be updated on an ongoing basis.
During those updates, if your organization isn’t set up properly to do that, you can actually put users in a situation or… I say users. Individual users, but perhaps remote IT groups that are having to do updates themselves. The more people you have doing updates in an organization, the more likelihood you have of an accidental error, a misconfiguration, something that’s not right in there, which is another significant area that we see threat actors exploiting. They find vulnerabilities in the configuration setups of organizations.
Shelly Kramer: And that finding vulnerabilities is a full-time job for threat actors.
Fred McClimans: It is. Not for me.
Shelly Kramer: Absolutely. All right. With that, we’re going to move on and we’re going to talk about Peloton. Peloton, I believe, has a bit of a hubris problem, at least when it comes to users’ personal safety and data privacy. If you’ve been paying attention at all this week, for the last couple of weeks, it has not been a great period for Peloton. A few weeks ago, the Consumer Product Safety Commission issued a warning about Peloton’s Tread and Tread+ treadmills. One child has died, and at this point there have been some 70 injuries as a result of this Peloton treadmill.
I actually considered buying one of these and instead bought a different treadmill. But from what I can see, one of the dangers is that this particular treadmill sits up a little bit higher than others, which allows a little one to get underneath it. By the way, all treadmills are dangerous. I remember when my twins were little. No matter how many thousands of times you tell them not to play on the treadmill and you keep the key away from it, sometimes toddlers or young people can find a way to get to it.
I remember one of my twins learned a valuable lesson that treadmills are dangerous. But anyway, when the CPSC came out with their warning about Peloton Tread and Tread+ treadmills, the company really just pooh-poohed it, and they really kind of arrogantly said, “You don’t have anything to worry about. This is not accurate. This is just a random warning. We’re all good.” What happened this week was that the company actually recalled all of its Peloton Tread and Tread+ treadmills and combined that with a statement: “We were wrong.”
I mean, there’s so much at stake from brand reputation. When you’ve had a death, it’s kind of an arrogant stance to take that the CPSC is wrong. But then in addition to that this week, there was news that the Peloton API was leaking private customer data. I’m sure many of our listeners are Peloton customers. You can see mine in the background. Peloton has done a great job of creating community. And when you set up your Peloton account, you can choose to have your account be public. You can have it be private.
But when you’re setting it up, the system asks you for details about yourself: your height, your weight, your age, your gender, your city. Some people put a lot of information in. Some of these things people just don’t think about when they put their information in. They just put accurate information in. The thing about having a public profile is that part of the beauty of Peloton for many people is that you can connect with your friends. I can see that Fred’s working out or that Daniel’s working out or whatever. I can be motivated by the fact that you’re working out every day.
I need to get off my butt, right? But anyway, this API vulnerability was discovered by a guy named Jan Masters. He’s a researcher at a company called Pen Test Partners. They’re a security company that actually researches breaches and vulnerabilities. What Jan found is that the bug allowed anyone to pull users’ private information directly from Peloton’s servers, even if a profile was set to private. Hi. I mean, that’s kind of a big deal. I will say my profile is set to private. I’m not really interested in my information being there.
What happened next, I thought, was equally interesting: they reported this issue directly to Peloton in January. Jan published a blog post about this, and he said he gave them what is standard in the industry, a 90-day notice, a 90-day deadline: I’m telling you about this bug. Here’s 90 days to fix it, and then I need to go public with it. He submitted that notification to Peloton. He got a confirmation that they’d received the notice, and then there was radio silence.
And then a couple weeks later, Pen Test noticed that Peloton had done what they thought was a partial fix of this problem, but they didn’t say anything about it. This partial fix meant fixing the API so that the data wasn’t any longer available to anyone on the planet, but it was available to anyone with a Peloton account. Okay. Hi, that’s not really very much of a fix. So then they tried again to connect… Pen Test Partners tried again to connect with Peloton and they were ignored.
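The flaw as described, both before and after the partial fix, is what the OWASP API Security list calls broken object level authorization: an endpoint returns profile data without checking whether the requester is allowed to see it. A minimal sketch of the missing check in Python, using hypothetical names rather than Peloton’s actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    user_id: int
    is_private: bool
    data: dict  # height, weight, age, city, etc.

def get_profile(profiles: dict[int, Profile],
                target_id: int,
                requester_id: Optional[int]) -> Optional[dict]:
    """Serve full profile data only when the profile is public or the
    requester owns it; private profiles return a bare public stub."""
    profile = profiles.get(target_id)
    if profile is None:
        return None
    if profile.is_private and requester_id != profile.user_id:
        return {"user_id": target_id}  # no weight, age, city, etc.
    return profile.data
```

The key point is that the check depends on who is asking, not merely on whether the caller holds any valid account, which is exactly the distinction the partial fix missed.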
It was only when Zack Whittaker, a TechCrunch reporter and one of the first to report on this leak, asked about it that the company decided it was probably a good idea to do something. Masters published a blog post on this issue and updated it just this week following a conversation with Peloton’s new CSO, who advised that the vulnerabilities had mostly been fixed within about seven days. Okay. It’s really hard to believe anything that Peloton says at this point. Again, I’m a customer, but it is interesting just kind of the attitude that Peloton has.
“This really doesn’t concern me. It shouldn’t concern you. We’ll deal with this in our way,” whatever. It just seems like a really arrogant posture on the part of the brand. Oh, and the company stock was down about 15% on Wednesday. Maybe that’s what happens when you act like a jerk. I don’t know. I like to err on the side of optimism. My hope is that Peloton will learn from this and realize that this is not the best way to treat customers’ data, certainly, or to treat injuries that result from its products.
I know that you have some thoughts on a famous Peloton user and really what that might be. Why don’t you share that?
Fred McClimans: Shelly, you’re the most famous Peloton user I know.
Shelly Kramer: Oh, by far not.
Fred McClimans: But there are others out there. And in fact, the issue of the API here… By the way, this kind of sounds a bit, minus some of the arrogance, like what Zoom went through when everybody just jumped online and started having Zoom calls for business, for personal use, for schools, the works. Zoom really hadn’t been set up at that point to handle that type of massive wave of user adoption and the security implications and everything that came with it.
I think Peloton probably got caught up a little bit in that type of situation, where they just hadn’t really thought about security and how important and critical it is in this type of environment. Now they realize, look, after a year of pumping out bikes as fast as we can and building our base and doing lots of marketing and saying, “Hey, let’s go, Peloton,” now they actually have to pay the price for ignoring security. This whole issue came up back in January, though, when Joe Biden was sworn in as the president of the United States.
President Biden is, I guess, the famous Peloton user out there. The Secret Service and everybody looks at this device and they go, “Hey, this is great. He’s got an exercise bike, but it’s got a camera. It’s got data. It’s got all sorts of things in there. It can listen to conversations. It can see what’s going on in a room.” Essentially you’re in a situation here where the tech that we’re using for fitness, in this case the president using it, which is a great thing, has so many potential vulnerability points in it, so many risk points.
That in order to really make it truly safe to put into the White House, you would literally have to disconnect everything that’s electronic on it and just simply have an exercise bike without all the two way communications and the sharing of data. I know that seems a little bit extreme, but there is something in the cybersecurity space here beyond even just the API issues here, the fact that these devices are all connecting wirelessly into some wifi device within your house, your apartment, wherever you happen to be.
There’s an attack strategy in cybersecurity called the man in the middle, where an organization puts a device or a person somewhere in between one person and another. They capture the data that goes back and forth between them. This is most famously used with devices like the Stingray cellular devices that were found floating around Washington DC a few years back, where a user comes into DC, they turn on their mobile device, and there’s a femtocell or a picocell, basically a small portable cell tower, that mimics the real cell tower.
It acts as that gateway. Your phone connects into it. You make a phone call, and everything you do, everything you transmit, is now captured by this picocell or femtocell, the Stingray type of attack. The same thing is theoretically possible with a wifi device. When you go into a Starbucks or a Cozy or wherever you happen to be and you connect to the local wifi, very often those devices themselves are fake devices.
It’s very easy for somebody to go into a shopping mall, for example, and just set up a fake wifi device, say, “Hey, look, it’s free. Dallas Town Center Mall. Free access.” People log into it.
Shelly Kramer: Which is why I never do that.
Fred McClimans: I know. In the wifi case though, theoretically it’s possible to do the same thing. Basically set up a very strong wifi signal that somebody within their own home could accidentally log into. The security risks here are very high. I would say just based on the fact that so many of the people that we engage with are now working in their homes, all the devices that they have there, all the electronics, everything, that’s a window into their home.
In particular, with something like the Peloton Bike, which you see a lot of in the background of offices, or other similar devices, you’ve got to be really careful. When they’re not being used, turn them off. When they are being used, make sure they’re properly configured and you’re at least making yourself as secure as you possibly can be. Just to kind of wrap this up, we don’t know if the Peloton Bike release… I don’t know if the bike actually made it into the White House, how many layers of duct tape it has, and how many wire connections were cut.
Definitely an issue here. I think you’re spot on, just the sheer arrogance. I mean, I know Peloton’s CEO said, “Hey, we admitted that we were wrong.” But the fact that they were so vehemently opposed to action on both the API issue and on the physical risk issue of their treadmills, there’s just no place for that. If you want to ruin your brand in 24 hours, tell your users you don’t care about safety in their devices.
Shelly Kramer: Right. I think sometimes people don’t think enough about the information that’s out there. But when my name and city and my gender and my… I think I read that not necessarily your date of birth, but if it happened to be your birth date, that information was available. I mean, the reality of it is there are all these little breadcrumbs out there on all of us, right? It’s not hard for somebody. By the way, again, these databases, this information is routinely leaked on cybersecurity forums, on hacker forums. Huge data dumps.
We talked about that last week. It’s not hard to go through these. By the way, now they’re also using AI machine learning to automate some of these processes and to pull information out of these databases. It really is important to protect your data and protect your privacy and to patronize brands that care as much about that as you do. I think to me that was what was important here. I am a Peloton fan. I am a customer, and there are many things I love about the experience that Peloton has created.
This experience, their attitude as it related to the consumer safety warning that led to an ultimate recall, and their attitude here… although the one thing I will give them a pass on is that in Jan Masters’ blog post about this incident, he indicated that the CSO at Peloton was new to the role. Did they have a CSO? You know what I’m saying? A lot of times in an organization, when somebody’s coming in and somebody’s going out, sometimes things drop through the cracks. I’ll give them one teeny tiny pass for that.
Fred McClimans: They had a CSO in place. I don’t know what happened or what that transition’s like. We’ll have to dig into that a little bit.
Shelly Kramer: Yeah. Yeah. Anyway, I thought it was an interesting… It wasn’t a great week or two for Peloton.
Fred McClimans: No. Let’s talk a little bit now about denial of service attacks. We’re going to shift from Peloton Bikes. And by the way, you plugged Peloton. I’m a Trek rider myself. My Trek 1200 triathlon bike, I’ve had for a long time and I love it. Best bike ever.
Shelly Kramer: There you go.
Fred McClimans: Denial of service attacks. Shifting gears completely here, a denial of service attack is essentially a way that threat actors out there can literally flood a network, a node within a network, so that it’s overwhelmed and the device simply cannot handle the massive amount of traffic that’s coming through. When this occurs, you’ll have pretty much a staged attack where an organization will have multiple different devices around the internet that are all targeting and flooding a particular device with requests.
The device is overwhelmed. Nobody can get access. All access through the device is denied, not just their access, but the access of all the users that are going through that device. Well, this happened last week in Belgium. Belnet, the country’s research and government network, which serves law enforcement, education, the scientific community, and public services, was hit by a massive DDoS attack. This attack was particularly challenging because, they said, when Belnet realized they were under attack, they tried to take countermeasures.
They tried to really stop the attack, but the threat actors were anticipating that. Throughout the period of this attack, they kept adapting their attack. It wasn’t as if it was just a static, “Hey, let’s turn the devices on. Let’s flood the Belnet network and let’s kick everybody out,” they were actually going back and forth in there. Strike, counterstrike, strike, counterstrike, back and forth. It just heightens the feeling that you get that denial of service attacks like this, they’re kind of crossing over into the realm…
It’s not quite cyber war yet, but it’s definitely above the typical criminal that’s out there. Sort of a cyber conflict state that we see going on here. They do believe the attack was politically motivated, because it directly targeted Belnet, which again is the government’s network that they use for all their research, for government activities. And in fact, Parliament in Belgium uses this as well and they were shut down for a while. They could not conduct all their virtual meetings or literally send an email while this attack was going on.
Now, what’s also kind of troubling about this particular attack, and they don’t yet know who’s behind it, is that Brussels is the headquarters of the European Union. That’s where everything in the EU goes through from a policy perspective, from a political perspective. This attack, if you think about it, it’s not just Belnet. It’s not just one network that’s isolated. With the internet itself and the way we access data, we transit through multiple virtual networks that belong to different companies. Maybe it’s AT&T. Maybe it’s Verizon.
Maybe it’s Rackspace. We’re going through some different organization’s network to actually get to our end destination. This attack here, though, in Belnet definitely impacted other organizations outside of the government within Belgium. You can see that kind of spilling over and the potential risk for an organization that says, “Hey, maybe we want to have our own little sort of insurrection around a particular event. Maybe we want to shut down the EU for a little bit. Give them a little bit to think about or slow down their electronic transmission, their decision-making capabilities.”
That’s where we start to see these denial of service attacks, which have been around for decades, evolving from the “turn it on and bring it down” approach to “let’s now use this as a political tool, let’s get into that cyber conflict state,” where again it’s not all-out war, but it’s definitely politically motivated and incredibly disruptive and dangerous.
Shelly Kramer: We know that government entities are very high on the list of targets for threat actors. It’s kind of like what we covered last week with the one-point-something billion email passwords that were leaked, where a huge percentage of them belonged to government employees, people with .gov and .gov.au and .gov.uk email addresses. Those are really big targets, and so is government at a lesser level. I happened to read just briefly today, not enough to talk coherently about it at length, that in a city in Alaska, the court system, maybe it was a municipal court system, was hacked.
It completely shut down every bit of operations within that particular realm of government. It happens at a high level, like Belgium, and then at the state level and the city level. Government really is very highly targeted. The other thing is that government systems are typically what we call laggards as it relates to digital transformation. Across the world, governments have been challenged fighting a global pandemic. Budget dollars to hire the best IT talent are limited, and a lot of times government work doesn’t pay as well as private sector work does, right?
You’re talking about attracting the right kind of talent, having the right budget for the right software and hardware, all of that. But it’s also what makes governments an attractive target.
Fred McClimans: Yeah, it does. If you think about a play on the supply chain attack strategy, you find the weakest link in the supply chain, work your way into an organization, and start to spread laterally. In a lot of situations, and the coronavirus pandemic here is a great example, you have this instant requirement for massive levels of government-private industry partnership and collaboration taking place. If you’re trying to get into those systems, where do you go? You look for the weakest link.
You look for the entities out there that may not be quite as secure, that all of a sudden today are storing data and accessing data and providing potential access points to a lot of private organizations that may have better security tools and processes in place. Because you’ve now connected them all together very quickly, there’s bound to be some vulnerability somewhere, some gaps, some misconfigured setting. Something out there that gives somebody an access point in.
It’s really to the point where you have to live in almost a zero trust world, where you assume that nothing is trusted until you’ve verified it every time. But then you also have to start to think, we need a much broader system in place. We need a change perhaps in our thinking around policy and what it means to be private and secure with your data.
Shelly Kramer: Right.
Fred McClimans: We’ve talked about this in the past. I think there’s a line where an organization has to ask, “Hey, we can offer these great services, but can we secure the data?” If you can’t guarantee the data is secure, then you shouldn’t offer the service.
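The “verify every time” idea behind zero trust that Fred describes can be sketched in a few lines. The names and shared secret below are hypothetical, but the pattern, re-checking a signed credential on every single request instead of trusting an established session, is the core of it:

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would come from a key store,
# and real deployments would use short-lived, scoped tokens (e.g. OAuth/JWT).
SECRET = b"demo-shared-secret"

def sign(user_id: str) -> str:
    """Issue a signed token for a user (illustrative only)."""
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def verify_request(user_id: str, token: str) -> bool:
    """Re-verify the caller on every request; nothing is trusted by default."""
    expected = sign(user_id)
    # Constant-time comparison avoids leaking the token through timing.
    return hmac.compare_digest(expected, token)

# Every call, even from a "known" client, gets checked again.
token = sign("alice")
print(verify_request("alice", token))    # True
print(verify_request("mallory", token))  # False
```

The point isn’t the specific crypto here; it’s that the check runs on every request, so a stolen session or a laterally moving attacker doesn’t get a free pass.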
Shelly Kramer: Absolutely. Absolutely. All right. We’re going to move on to the final topic that I wanted to hit on today and that is about, again, government. There’s a theme here. I wanted to talk about the fact that the United States and the UK governments issued a cybersecurity advisory today on Russian threat actor activity. This advisory was published by the Cybersecurity and Infrastructure Security Agency (CISA). We talk about that a lot, but sometimes I hate to throw acronyms out there and just assume everybody knows what an acronym stands for.
CISA is the US Cybersecurity and Infrastructure Security Agency, which is a mouthful. The advisory was issued jointly with the UK’s National Cyber Security Centre, the FBI, and the NSA. It focused on the Russian Foreign Intelligence Service, known as the SVR, and the tactics, techniques, and procedures they use to target victims, and really how their methods have evolved. Again, the cybersecurity landscape is a constantly evolving landscape. These organizations came together to publish this report and to provide some best practices to defend against these threats.
In the show notes, I’ll include a link to the full advisory. There was another alert published by CISA on April 26th. In that report, they outlined the Russian operations and trends and really how to think about working through these if you’re a network defender. We mentioned SolarWinds earlier. This report also provided some additional details on the SolarWinds attack, which was spearheaded by these same Russian SVR threat actors. What was interesting about the SolarWinds attack, what happened there is we saw malicious updates from compromised SolarWinds systems.
And that breached hundreds of organizations. We don’t yet know the full scope of the damage, and we won’t for a long time. Last year, we saw that same group of threat actors targeting vaccine R&D operations around the world. This involved malware that was tracked as WellMess and WellMail. One thing that struck me when I was looking at this information, something they highlighted in this report, is that, as we’ve talked about, threat actors are agile. They’re adaptable. They’re extremely adaptable.
For instance, with the WellMess and WellMail instances, as soon as they were detected, they pivoted and started doing something different. What happened here, which I thought was really fascinating and frightening, is that they started using Sliver, which is a security testing tool developed by Bishop Fox, an offensive security assessment firm. An offensive security assessment firm is just like it sounds, right? Their whole job is to provide tools, and probably services, that help organizations be on the offense rather than being defensive about security.
Sliver is a legitimate tool that is used for adversary simulation. You want to protect your network, you use Sliver, right? What’s scary is that part of this new report is about helping organizations detect Sliver, see if it’s in use, and then try to figure out whether it’s a legitimate use or a malicious use. It’s hard to wrap your head around: this is a good tool by a good company that’s doing good things, and these smart SVR threat actors pivoted to use that very tool to help them do what they want to do. To me, that’s really interesting.
But what I want to end with here in terms of my own comments is that it’s really important to understand that these threat actors are incredibly smart. I’ve used the word agile. They are constantly looking for vulnerabilities. They’re scanning the internet, and they’re using technology to help them spot those vulnerabilities.
Some of the biggest ones recently, ones that are still very active and that the government is warning organizations who haven’t yet patched about, include, of course, Microsoft Exchange Servers. Many of them remain unpatched. VMware’s vCenter Server product is on the list of the top five being focused on. Fortinet’s FortiGate VPN. The Pulse Connect Secure VPN, which we talked about last week, is on the list. Citrix Application Delivery Controller and Gateway. And one more, the Synacor Zimbra Collaboration Suite.
That’s five things plus Microsoft Exchange Servers that the government in this advisory is warning about: do not let your guard down if you use any of these products and you haven’t yet updated and patched. It’s a lot.
Fred McClimans: It is. I think there is a positive side to this, though. We’ve known for years that threat actors don’t exist in a vacuum. They learn the same way everybody else learns. They have access to the same tools. They may not have the budget for some of the tools and hardware, but the cost of these security tools, the tools we use every day to protect our assets, is coming down, and they’re becoming more available. There’s always been a market on the dark web for these types of code and tools and hardware resources.
It’s becoming more common today. But still, through all of that, enterprises fundamentally do have an advantage. I mean, yeah, there’s the argument that threat actors just need to find that one spot. That’s true, but it’s like finding a needle in a haystack. A good organization that has a strong philosophy around behavior risk, technology risk, and process risk is going to put in place at least enough buffers to make that haystack as large as possible, to make that needle incredibly hard to find.
Now, granted, the threat actors are coming at that haystack with lots of magnets, trying to find that one spot. But organizations can be effective in putting up at least enough hay to mask where that needle is. I think the real effort becomes how you detect as quickly as possible, then isolate the threat, identify where it is, figure out what’s going on, and then hopefully notify others. Share that information with other organizations: “Hey, we have a vulnerability here. We’ve been attacked here.”
I know we talked about the Peloton hack earlier. One of the interesting things in that article, Shelly, was at the point where they said, “Look, these are all the things that have been disclosed.” I’m reading this and I’m thinking, that’s a lot. That’s really a lot. And then they have that line at the bottom saying, “We didn’t publish all of it, because we’re still investigating.” You’re thinking, well, what else is out there? But again, we have the tools. You’re never going to be able to protect something 100%. We can make that haystack a lot larger and even camouflage the haystack.
Shelly Kramer: I will also give a nod here, and we’ve talked about it before, to the research that we’ve done with Dell for sure is top of mind. We’ve done some work with Splunk as well. You have to have a security operations center. You have to be using a dashboard. You have to understand that hardware can be as much of a threat as software. Our research showed that a considerable number of folks who didn’t think that they’d been breached were operating out of ignorance, because they weren’t using any kind of a dashboard.
Whereas people that we surveyed who are using dashboards, who do have real time insights at their fingertips know, of course, we’ve been breached and here’s how many we’ve detected and here’s how many we’ve prevented. It is a haystack and it is like trying to find a needle in a haystack, but you’re much better protected when you have a security operations center, when you understand the risks posed by both hardware and software, and you’re using a darn dashboard.
Fred McClimans: What’s at the core of all that though, Shelly? Data. You need the data. You need to know exactly what you’ve got. You’ve got to identify any shadow IT, any shadow data that exists out there. You’ve got to capture data. You were talking about Splunk. One of the things I really love about the Splunk model: HPE has been implementing Splunk in GreenLake in an as-a-service, consumption-based model. It’s phenomenal because you can scale to ingest as much data as you can possibly throw off.
Capturing that data and using the right analytics to identify anomalous behavior patterns, I mean, you’ve got to have that level of visibility within an organization if you want any chance of identifying very rapidly, and countering very rapidly, the threats that are out there. It’s a challenge.
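A minimal sketch of the kind of anomaly detection Fred describes: flag a spike in hourly login counts against a robust baseline. The data and threshold here are purely illustrative, not from Splunk or any particular product, and a real pipeline would run this continuously over ingested telemetry.

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Return indices whose modified z-score (median/MAD) exceeds threshold.

    Median and MAD are used instead of mean/stdev because they stay stable
    even when the sample contains the very outliers we want to detect.
    """
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts)
    if mad == 0:  # perfectly flat baseline, nothing stands out
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Illustrative hourly login counts with one suspicious spike at the end.
hourly_logins = [100, 102, 98, 101, 99, 400]
print(flag_anomalies(hourly_logins))  # prints [5]
```

The 0.6745 constant rescales the MAD so the score is comparable to a standard z-score; 3.5 is a commonly used cutoff for this statistic, but in practice the threshold would be tuned to the environment.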
Shelly Kramer: It’s a challenge, but it’s not a nice-to-have. It’s a must-have. It really is. Security has to be a foundational part of business strategy. I think we’re getting there. I mean, it’s always a slower process than folks like you and I, who are immersed in the space, would like. But we’re getting there. Unfortunately, I think a lot of what drives it is these breaches and these leaks along the way that are very eye-opening.
And again, when you look at something like SolarWinds, you think about the fact that we absolutely have no idea what the true depth and breadth of this attack is and how many companies are affected. Anyway, with that, we’re going to wrap up our show. Thank you as always for hanging out with me today, Fred, and talking about cybersecurity. And thanks to our viewers and our listeners, and we’ll see you again next week.