Beyond Shadow IT: Understanding the True Attack Surface of Your Software
Episode Summary
This week's episode dives deep into the concept of shadow exposure and how it relates to third-party software, often overlooked in discussions about shadow IT. We explore the historical context of shadow IT, its evolution, and the real risks associated with widely deployed enterprise software that organizations may not fully understand.
Join us as we discuss:
- The origins and implications of shadow IT
- The challenges of visibility and transparency with third-party vendors
- Real-world examples of vulnerabilities in critical software, including ServiceNow and IBM Aspera Faspex
- The limitations of security questionnaires and self-attestation processes
- The importance of proactive security measures and effective disclosure processes
We also share insights from our security research team and discuss how organizations can better manage their attack surfaces to mitigate risks associated with shadow exposure.
Fundamentally change how you secure your attack surface. Assetnote's industry-leading Attack Surface Management Platform gives security teams continuous insight and control over their ever-evolving exposure.
For more details about Assetnote's Attack Surface Management Platform, visit https://assetnote.io/
Transcript
MG: Alright, so today we're going to talk about a concept we've touched on in some of the other episodes we've done around recon and security research, and that's the concept of shadow exposure, or this idea that third-party software is the real shadow IT. So we're going to dive into that a little bit more. Maybe a good starting point is to talk about the concept of shadow IT, where that comes from, and why it's a thing. Historically, shadow IT has really been about unknown assets inside your attack surface: the marketing website that gets spun up by the marketing team without security knowing about it, or the IoT device that gets plugged into your network. That's really where it came to prominence, around the IoT security craze not too long ago. The idea being that anybody could plug some internet-connected device into your network, and a lot of the research coming out at that time showed these devices had vulnerabilities straight out of the 2000s. It was that kind of vibe. So there was a lot of talk at the time around shadow IT, and some of it was just vendor nonsense, I think, particularly around the IoT side, less so OT as it related to those styles of networks. From a corporate perspective, some of it was a little bit overblown, but some of it's genuine, right? Thinking about the sprawl of attack surface from cloud infrastructure, as an example, that's a very real issue. But there's a lot of process and tooling now that people have adopted, whether it's ASM products like ours or other processes, where they can start to get a handle on the true risks of traditional shadow IT. One of the areas we think is still a blind spot, though, is this idea of third-party software being the real shadow IT. It's kind of an interesting concept, because when we talk about third-party enterprise software being the real shadow IT, everybody says, what the hell do you mean by that? That's stupid. We spend millions of dollars on some of these tools, we go through an evaluation and RFP process and multiple layers of approval. Where's the shadow part there? To clarify that point, what we're really talking about here, and maybe a better way to express the idea, is shadow exposure more so than just shadow IT. It comes down to the fact that with this widely deployed third-party software, most organizations don't really have true visibility into the exposure it's bringing to their attack surface. We'll talk about some examples as part of this, but to give one recent one: the ServiceNow research we did. ServiceNow is basically everywhere because it's a gigantic company and everybody uses it, and that was a pre-auth RCE that had really wide-reaching security impact just by virtue of using the product. That's the sort of thing we're talking about, to give some idea of this concept.
It's really exposure that exists in this software that organizations don't have good visibility into, for a variety of reasons we can get into. But that's really the core of the concept, I would say. Do you have anything to add to that? Do you feel like that's a good explanation of what we mean when we talk about shadow exposure in this third-party software?
Shubs: Yeah, yeah, for sure. I mean, you touched on it. A lot of the shadow exposure is in locations that are right in front of everyone's eyes: things like ServiceNow and a lot of the vendor software being released out there. And certainly from a security research perspective, and in how we operate our security research team, we tend to focus on software that is very widely deployed but hasn't had the security due diligence. What are the things that you think cause shadow exposure? Are there any examples that come to mind?
MG: Yeah, I think there are a couple of core root causes when it comes to what's generating this shadow exposure. The first is really the opacity of the vendors, and this varies from vendor to vendor, obviously. Some vendors are better than others. But broadly speaking, vendors aren't necessarily incentivized to provide that visibility or transparency at the end of the day. Most of the time they're not necessarily trying to hide it, and in a lot of ways you can't really do that these days, but they're certainly incentivized to be a little more opaque when it comes to this sort of stuff. Examples: security advisories that give no detail, or things like trying to shut down security research and tie up researchers with NDAs and legal threats. That's a little more on the extreme end, but generally speaking, I think there's a culture of being opaque with customers when it comes to the security of this vendor software. The other side of it, in terms of root cause, is poor process and tooling from the customer's perspective to be able to understand this. There's a heavy focus on things like security questionnaires and self-attestation, whether it's SOC 2 or, less formally, stuff the vendors put out about the security of their products. And these aren't really effective for understanding true exposure. Take a security questionnaire: putting aside the obvious, which is that people can lie on those or at the very least exaggerate, they're often focused on broader ideas rather than actual exposure. What's your process for this? What's your process for that? You can have a good process by design; it doesn't mean you're going to follow it, and it doesn't mean it's effective in practice. So you're very limited in what you can do there. The other thing organizations commonly use are these security rating agencies, where it's a SaaS tool, they point it at a vendor, and it gives them a security rating score. I think the effectiveness of these ratings, or the actual correlation between these ratings and the real security of the products, or even the organization in general, is tenuous at best. It's usually nonexistent. They're looking at a very minimal slice of the vendor's own attack surface, not at the products that are deployed inside your attack surface, and they're looking at basic stuff: basic security hygiene on their main website and things like that. There's not necessarily a strong correlation. Getting an A or a D doesn't mean your products are more or less secure, and in a lot of cases it doesn't even mean you're relatively better or worse than anybody else. I don't think it provides security teams with enough insight into the actual security. And then there are the reactive processes, which aren't really great either, and we can touch on those in a bit more detail.
But, yeah, I mean, you've had a lot of experience with that first point around vendors being opaque, right? We do a lot of security research here at Assetnote, and even before we started Assetnote you were submitting a lot of stuff to vendors. What are some of your experiences with that? Not necessarily naming names, but let's talk about the gamut of experiences from a research perspective when it comes to how vendors treat research. Vendors on the top end of town, they're very transparent about it. They're very open. They won't have any restrictions on the researcher talking about it, and they'll be very proactive with their customers. I remember having a chat with some of the guys on the security team at Slack, and that was something they really strived for: their perspective was less about the presence of bugs being a problem and more about how they reacted to it. They were very open. They had no problems with researchers sharing their research, they were very open with their customers around what the root cause was and what the fix was, and they weren't trying to hide it. They would just do it immediately, with a very strong focus on remediation. To me, at least in my experience, that's the good side. But maybe let's talk about the other side of that spectrum, some of our experiences with vendors that haven't been as great.
Shubs: Yeah. I mean, I wish every company was like Slack, to be honest, where they actually did care about it in that way. But I find it interesting, because you said earlier that the companies on the top end of town are like that, and I've actually noticed that there are a lot of companies on the top end of town which are still very opaque. They'll have no security advisories for products they own that are deployed in customer environments. They'll often prioritize their cloud versions over their on-premise versions, and they'll always have the same line in their advisory: "cloud was never affected." No, cloud was affected; you're just telling us it's not affected because, at the time of publishing that advisory, you'd already gone ahead and fixed the issue in cloud. In many cases, for vendors that provide both a cloud version and an on-premise version, it's almost the same software with some minor modifications for it to run in the cloud. So if you find an issue in the on-premise version, you're going to find an issue in the cloud version as well. So yeah, the opacity in these vendors is in many cases quite extreme. And it's something we've dealt with internally from a security research process perspective, where we make sure we can attach a policy to our security research whenever we send it to vendors. This is also something that's becoming more and more difficult as time goes on, as vendors take on bug bounty programs not as a mechanism to reward researchers, but as a mechanism to stifle researchers from publishing their research. As soon as you submit something via a bug bounty platform, you are beholden to the terms and conditions of that platform, including the fact that you cannot disclose anything without explicit permission from the program you're submitting to. In many cases I've seen researchers unable to disclose their research because they've gone down this route of potentially trying to get a bug bounty out of it, or whatever it may be, and then it's too late. At that point they've already submitted the research, and they can't publish it at the end of the day. So as much as a lot of people glorify bug bounties as a really great innovation of the last 10 or 20 years, it's also taken us a step backwards from a disclosure perspective, in the sense that there are now some real ramifications if you report something via a program. And I think the big problem is a lot of people just aren't aware of what that actually means from a security disclosure perspective. Do you have any thoughts on that?
MG: Yeah, no, that makes a lot of sense. And this is something we've been doing, and understanding, for many years, right? But it does contribute to this idea of shadow exposure in a way. "Hey, submit to our bug bounty" — or even just security teams trying to route you to the bug bounty submission form when you reach out to them directly; they proactively try to route you there. In some cases it might simply be that that's a simpler input into whatever processes and workflows they've set up to deal with these submissions. But in a lot of cases, I think it's very intentional, to try and minimize and lock up disclosure. And when that's the only mechanism through which you've submitted the vulnerability, you don't really have a choice. To be clear, we're not saying the solution to opacity on the vendor side is just full disclosure. We have a very strict policy at Assetnote as it relates to the disclosure of our research and how we work with vendors, and we're broadly coming at this from the perspective of wanting to help what are ultimately our mutual customers in a lot of ways. We're not trying to just drop stuff; we want to work with the vendors, and we do work with the vendors. It's less about full disclosure and more about the ability to talk honestly about these issues with the organizations that are affected by them. And certainly from the other side, as an organization running this software, you want that level of information. You want that level of detail. You don't want it to be confused, because that just adds to a lack of understanding around not only what the exposure is, but also how to deal with it effectively. Maybe that's a good segue into some of the other processes. We spoke a little about security questionnaires and self-attestation as the most common mechanism for trying, at least on some level, to proactively understand the exposure of this vendor software as it comes on board. Maybe they refresh that on an annual basis or something, but it's not really focused on the actual underlying exposure in these products, it's point in time, and overall it's not super effective. It has its benefits, but not at covering off shadow exposure. Then the other side of it, if you think of that as the proactive side, is the reactive side, and that's also not great. We've spoken about this in other episodes, but at the end of the day companies are just drowning in terrible alerts from basic approaches like matching a patch level against thousands of CVEs in the CVE database. They're drowning in this noise, and they don't know what to fix, what to prioritize, what's important, what's actually exploitable.
If you've got a finding from some security tool that says, hey, you're running version X of this software and it's vulnerable to this critical CVE, but there are a bunch of preconditions required for it to be exploitable, what do you do? You go to the vendor advisory and it doesn't really tell you anything; maybe it vaguely mentions it. So what do you do? On some level you just patch, right? But, and I think there's been a lot of discourse on this recently, and maybe that's another topic, patch management is not as simple as "just patch". There's always more complexity in a large environment, in a complex attack surface. So how do you really understand how important an issue actually is when it's so opaque? The way that people, and security tooling, have historically approached this issue, and still do to this day, means there's just so much noise being thrown onto security teams to deal with themselves. I think there was a report not too long ago, done by Kenna maybe, that spoke about the fact that only 5% of CVEs actually present any real risk. And at least anecdotally, putting aside our own research, that's very true. In terms of how we monitor these attack surfaces and look at the output, there's only a handful of CVEs that truly present any kind of real risk. So teams are just drowning in this noise, and most of it doesn't relate to actual exposure they need to prioritize. But then there's the other side of it as well: the tooling is only reactive, based on what the vendors are putting out and what's already public. Security teams at organizations don't necessarily have the capability to dive deep and truly understand the exposure these products bring to their attack surface. They don't have security research teams like us. And that makes sense, right? They're not focused on this sort of stuff; they're focused on securing the things they're developing themselves, their own apps and so on. They have this expectation that the software they buy should have that same level of rigor and process, but they don't necessarily have the capability to verify that. And it's quite niche. We have a security research team, and obviously you do a lot of the security research as well, and it's a niche skill set to be able to get to those vulnerabilities, particularly in some of these larger enterprise systems, and get to the true exposure.
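To make the prioritization problem above a little more concrete, here is a minimal sketch in Python (not Assetnote's platform or methodology) of the kind of crude filter teams often reach for: take the pile of CVE matches a version-matching scanner produces and keep only those with evidence of real-world exploitation, using CISA's Known Exploited Vulnerabilities (KEV) catalog. The feed URL and the placeholder findings list are assumptions for illustration, and KEV membership is still a long way from knowing whether your specific deployment meets an exploit's preconditions, which is the deeper point being made here.

```python
import json
import urllib.request

# Assumed URL for CISA's KEV catalog JSON feed (current at time of writing).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")


def known_exploited(cve_ids):
    """Return the subset of cve_ids that appear in the CISA KEV catalog."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    kev = {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}
    return [cve for cve in cve_ids if cve in kev]


if __name__ == "__main__":
    # Hypothetical output of naive version-to-CVE matching: thousands of
    # entries in practice, a handful of placeholders here for illustration.
    findings = ["CVE-2022-47986", "CVE-2021-0000", "CVE-2020-1234"]
    print(known_exploited(findings))
```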
Shubs: Yeah, and I think that's honestly one of the big reasons why our customers really love us and what we do: we provide that due diligence on a lot of the vendor software they've got deployed on their attack surface. We're kind of like the APT before they get compromised by an APT, if you like. We do all of that work, and we've been doing it for a very long time; it's something that was really pioneered by us in the attack surface management space, and definitely something I'm really proud of in terms of what our security research team is capable of. But going back to the topic of CVEs, what I find really funny about CVEs is that there's just so much that isn't covered by a CVE, and that's a lot of what we find on the attack surfaces of our customers. There's no real CVE associated with that technique, or that specific attack vector, or that misconfiguration, but these are real security risks that organizations need to attend to. We can talk about it in another episode, because there's a lot to cover with CVEs and EPSS scores and things like that, but there is a huge amount of exposure that can't be captured through a CVE or doesn't have a CVE associated with it. But yeah, on the topic of security research, this is what we do. We provide that confidence to our customers that we are looking at all the vendor software. And it's really funny, because what we often see is customers putting a lot of stock in big brands, big software, widely deployed software, and they think, oh, we're fine because we've got this Citrix box or Cisco ASA or whatever it is. But when you actually dig under the hood even a tiny little bit and start seeing what the software is made up of: oh, it's running a distro from seven years ago, it's running these binaries that are 400 megabytes in size. And there are definitely limitations in security teams all over the world, in the sense that they may not have the expertise to assess the security of these products. But this is where we come in. This is where we say, okay, it's our job now to actually understand what the security is here. It's why we have people experienced in binary exploitation, people experienced in web application security and source code analysis: specifically so we can assess some of this very complex software and see if there are any security issues or shadow exposure.
MG: Yeah. And then ultimately that drives not only effective security outcomes for our customers, but also helps improve the security of that software, right? We do a lot of security research, but it's not like, say, the more shadowy world of exploit development for law enforcement or intelligence operations, where they're trying to have long-lived vulnerabilities so they can use them operationally. At the end of the day, our bugs get reported to the vendor before anybody else knows about them, and the idea is that they get fixed. All the research we do is short-lived by nature, because the focus from our perspective, whether it's with our customers or with us working with the vendors, is for it to be fixed, so it ultimately improves security. Another interesting point you raised reminded me of a LinkedIn post from someone at Canva; he's been doing a lot of stuff recently about security startups, how to think about that, and providing a perspective from a vendor. One of the topics he mentioned was: if you're a smaller vendor, a smaller security startup, that wants to get into a large company, think about your product and how deep the internal connections need to be, because larger companies don't necessarily trust that smaller teams have the level of maturity, when it comes to the security of their products, to be asking for deep internal integration and things like that. What I thought was really interesting about that, and I commented on it, is this base assumption that products built by larger, more established companies somehow have better security at the end of the day, because that assumption isn't necessarily based on anything real. And this is sort of what I'm getting at with the questionnaires: it's not based on anything tangible in a lot of ways, or at least not directly correlated with the actual exposure. The idea that a larger vendor has better security, and thus there's less of an issue with an organization deploying it internally in their attack surface, is based on things that don't correlate to security. Like, oh, well, they're big, they make a lot of money, so they must spend a lot of that money on security. Or, they're a very big company with a lot of other big companies as customers, so they have a real incentive to be proactive about security and to be transparent and upfront about it. They do have an incentive to a certain extent, but that doesn't mean they act on it in ways that are effective for their end customers. As we discussed earlier, they're incentivized not to have those issues, but they don't necessarily act on that by fixing all the issues and being open, transparent, and proactive about it with their customers.
They're often just very opaque about it and try to avoid customers understanding the true extent of some of the exposure in their products. So that assumption is one I would challenge as something people should rely on when they're making these decisions, and certainly that's borne out, I think, in the kinds of vulnerabilities and research we've done over the years and the things we've found. Maybe that's an interesting point to dive a bit deeper on: some examples of what we're talking about here when it comes to shadow exposure. There are a few key characteristics for us in terms of how we think about this and how we think about shining a light on it. One, we're looking at software that is widely deployed; for starters, it has to be prevalent across attack surfaces. Two, it needs to be something of critical importance to an organization, whether that's something like ServiceNow that's used by every team, or a VPN appliance that serves a critical security function. Those are the sorts of things we mean when we talk about critical infrastructure or critical IT for organizations. And then the other characteristic is that these products are usually closed source. It's usually not open source software, and it just doesn't seem to have had the scrutiny. Now, there's a lot of software that fits that bill, a lot of Microsoft software, a lot of companies like that, but many of those products also get a lot of scrutiny from a security research perspective. There's a whole vast array of products fitting that bill that clearly don't get much security research attention, for reasons that aren't correlated with the strength of their security. It's things like, well, we don't give anybody access to this unless they pay us a six or seven figure contract. It's things like that which mean it hasn't had security research scrutiny, more so than it being of a certain standard. So those are the characteristics, at a high level, of what we mean when we're talking about this sort of software and shadow exposure. But do you want to dive deeper into a couple of examples, to give a more tangible sense of the kinds of shadow exposure?
Shubs: Yeah, for sure. One of the things I'm most proud of with what we've done at Assetnote over the last six, six and a half years now is that we focus really heavily on impact in all of our research. Everything we've published has really direct impact, really actionable, really practical. Nothing is theoretical, nothing is blogging for the sake of blogging. We're straight shooters in that respect, and that's how we operate our security research team. That's our blueprint, that's how we do things. There are definitely other people in this space, in the attack surface management space, who are copying that blueprint and trying to mimic it, but one thing I've found is that that attention to detail, in terms of impact on the pre-authentication attack surface, is what we've done really well. When we first started, I think we were focusing on all sorts of different products, but as time has gone on we've really honed in on the most critical, most widely deployed software. So as you said, examples of that: we're looking at VPN appliances, things that are deployed quite readily, your Fortinets of the world, the Cisco ASAs. And look, there are a lot of things we don't always get to publish; there are a lot of vulnerabilities we've discovered from a security research perspective that are still unique to us and our platform and available to all of our customers. But yeah, VPN appliances, MDM software like Jamf, ticketing software, your Jiras of the world, the ServiceNows of the world, even wiki software, the Confluences of the world. I was speaking to someone in the last few months who's a full-time exploit developer, and sometimes they'll get tasked on Confluence for a whole year. That's their one job: find vulnerabilities in Confluence. That's it. One of the things that's also worth mentioning is how clean our disclosure process is. We're not looking over our shoulders, we're not really having to worry about that, because we follow a very strict and clean disclosure process with all of our vendors, which goes to the point that a lot of our vulnerabilities are short-lived, like you mentioned earlier. But yeah, the vendors we're looking at are the critical software on the internet. We're looking at things that are widely deployed and quite critical.
MG: Yeah, and what you've seen with a lot of our research is that once it does get published, it ends up on the CISA KEV list, and it's because of the nature of the impact of those issues that it gets there. Just to give some examples: you've mentioned ticketing systems and document storage, and VPN appliances are another one we've focused heavily on. Secure file transfer: we had the RCE in IBM Aspera Faspex, and that's a good example. It's not an open source product, it's used in large organizations in very critical ways, for very critical file sharing basically, and it hasn't had a lot of security scrutiny in general because it's so hard to get access to. But it's used by these organizations for very critical processes. So that's a source of shadow exposure, because it's not clear how much exposure is actually in these systems, both right now and, more generally, when they're making these purchasing decisions. Other examples: there were the Progress bugs that we had, and a lot of stuff in CMS platforms, Sitecore, WebSphere, that sort of thing.
Shubs: Network management, yeah. All sorts of things like that that are critical: you need to have it in your network at some point or another. And that's what we're looking at.
MG: Yeah, and there are different vendors, right? But a lot of these fall into those sorts of categories: somebody's got some product in this space, and they're using it for critical workflows or, from a security perspective, at critical points in the security architecture. So that's an example of the kinds of software and the kinds of issues we mean when we talk about shadow exposure. Open source tends to be a little bit better, just because they don't necessarily have as much incentive or capability to be as obfuscated. But what do we do about it? Ideally we'd live in a world where vendors are just more open and transparent about the security exposure in their products, less concerned about saving face and more focused on the process generally. But we don't live in a fantasy land, so what can we do about it? There are a couple of things I would say. One would be to focus on improving process and capability. The first part of that is focusing on exploitability, understanding what the real exposure is. Think about your process to get actionable outcomes: try to really understand the points where shadow exposure could exist in your attack surface, and either build or buy capability that gets you ahead of the curve if you can. That's one of the things, when we talk about how we do things, why we do the zero-day research, and why we focus on shining a light on this shadow exposure: when we find an issue like this, we report it to the vendor, then we put it in our platform with a mitigation for our customers. The benefit there is that they're ahead of the curve. I mentioned the Aspera Faspex vulnerability; that was a good example of this. We reported it to IBM and we put the check in our platform, so all the customers that were affected saw it, applied our mitigation, and were good. Eventually a patch was released, there was some uptake, and then we released our blog post. Once the blog post was out, everybody scrambled at that point to deal with it, especially once it started showing up on the KEV list and things like that. But the customers we'd worked with could rest easy; they'd been fixed for a long time. So we're not necessarily saying "just buy us", though we certainly wouldn't argue if you wanted to do that. The idea we're going for here, the concept to focus on, would be thinking about more proactive measures to identify this kind of exposure. For us, we focus on the research side and the exploitability side, but there are other ways as well. That would be something I'd recommend: look for mechanisms to get ahead of the curve. That could look different for different organizations. Larger organizations have more buying power, they can throw their weight around a little bit more, things like that.
There might be avenues they can explore there. I've heard of examples like large customers putting enterprise software that's very critical to them into their own bounty scope, and then trying to force those vendors to pay for the vulnerabilities through their bug bounty program. That's an example, I think, of throwing your weight around to get a more proactive view on this. So there are lots of different ways, but that's something I would focus on, whether that's building out capability internally as well; for large organizations that might be more possible, building out a capability that allows for deeper inspection of these products. And on the point of throwing your weight around: hold these people to account. One of the things that always frustrates me a little is organizations, not necessarily vendors, that criticize and push back on security research in general, because of this weird notion that if you disclose it, or report it, or do anything of that nature, you're basically telling the bad guys what to do. Well, mate, at the end of the day, a vulnerability doesn't exist because a security researcher found it. That's not when it suddenly springs to life. It exists in the code base, it exists in the application, from the moment it was actually introduced. So this idea that these things only exist because security researchers brought them to light is just naive, and if you take that perspective and you're responsible for security, you're doing yourself a disservice. You're burying your head in the sand. And the reality is, vendors who are maybe not on the good side of the spectrum we were talking about, they love that. It feeds that opacity that's ultimately detrimental for everyone at the end of the day. So I would suggest that if you're an organization, large or small, but particularly if you're large, throw your weight around. Hold vendors to account when it comes to the security of the products you're deploying that are critical and have critical impact if there is exposure there, particularly exposure that's exploitable. That would be another area I'd say to focus on, to get a little more proactive and a little more effective at identifying this shadow exposure and, more importantly, dealing with it. Cool. Any other thoughts on this? I mean, we could talk about this forever, we've got a lot of experience, but any other thoughts to wrap up on this idea of shadow exposure?
Shubs: Yeah, I think my last thought is just based on what I'm seeing come out of the research community, where there's a bit of a lack of focus on the external attack surface. There are a lot of research blogs, a lot of vulnerabilities coming out, and I'm not saying that isn't important or valuable research. But it's like, yeah, it only affects port 6345 or whatever, which is exposed internally and is vulnerable to an RCE or something. Fair enough, but the likelihood of that actually being exposed in a real environment on the external internet is quite low. And as I mentioned earlier, we have this really intense focus on externally exploitable issues. And when we say that, we actually mean exploitable across the majority of the internet, not just an edge case or two.
MG: So yeah, I guess I'm just reiterating: that's where the real impact is.
Shubs: And I'll see exposures come out for certain things where, realistically, there's no software on the internet exposing that port. It's not bound to 0.0.0.0, which means it's not exposed to the external internet, and that's something I see all the time. And it sucks sometimes, because a lot of people in the general community can't tell the difference and they freak out about it, when really I think these risks should be treated differently, is all I'm trying to say. That's just something I've noticed in the last couple of years, where people are trying to market their research in a certain way, where the impact is conveyed as greater than it actually is.
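As an aside on the bind-address point: a quick, rough way to see the distinction on a single host is to look at which listening sockets are bound to all interfaces (0.0.0.0 or ::) versus loopback only. The sketch below uses the third-party psutil package and is purely illustrative; it says nothing about firewalls, NAT, or how prevalent the exposure is across the wider internet, which is the part being emphasized here.

```python
# Rough illustration only: classify listening TCP sockets on this host by
# bind address. Requires the third-party psutil package (pip install psutil);
# on some platforms enumerating sockets needs elevated privileges.
import psutil

ALL_INTERFACES = {"0.0.0.0", "::"}

for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_LISTEN:
        continue
    if conn.laddr.ip in ALL_INTERFACES:
        label = "bound to all interfaces (candidate for external exposure)"
    else:
        label = "loopback/local only"
    print(f"port {conn.laddr.port:>5} on {conn.laddr.ip:<15} {label}")
```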
MG: I mean, that's basically the other side of the opacity for organizations, right? It's not just coming from the vendors; it's coming from the security vendors as well, in a lot of ways, or the security industry at large, where there's an exaggeration of impact in a lot of cases, a lack of understanding of true exposure, a lack of understanding of the true problems faced by organizations. This is a topic we can probably dive into a little deeper in a future episode, but there are a lot of solutions now that I'm seeing, EPSS is a good example, that are basically a solution developed by vendors for a problem they created in the first place. There's a lot of that kind of nonsense, and that doesn't help customers either; it doesn't help organizations. The opacity isn't just from the vendors of the software, it's also from the security community generating this kind of noise that makes it harder to see the true exposure. But that's probably a topic for another day. I think we can wrap it up there. Thanks. Great episode.
Shubs: All right. See you all later.