Deception Based Intrusion Detection & Prevention for SCADA Environments - Video Text Version
Below is the text version for the Deception Based Intrusion Detection & Prevention for SCADA Environments video.
>>Erfan Ibrahim: Good morning. This is Erfan Ibrahim of the National Renewable Energy Lab. Today, we are privileged to have Chad from Illusive Networks presenting on their cyber security technology. As you know, in the last several months we have had a variety of vendors come and present at this forum on the various cutting-edge cyber security technologies that are being brought into the energy sector. We are very pleased to have Chad here today to talk about another innovative technology from Illusive Networks. So Chad, please take it away.
>>Chad: Hi, my name is Chad Gasaway. I'm the senior solutions architect for Illusive Networks. So first, I'd like to thank you for the opportunity to speak today. What we're going to talk about is a little bit about deception technology and how that is applied to your various defensive strategies in an effort to kind of protect your environment. But first, just a quick introduction of myself.
I have over 22 years in the IT industry; 17 of those 22 years were focused on cyber security. Prior to joining Illusive, I was actually a solutions architect in sales engineering roles for companies like CloudPassage, which does cloud security; RSA Security, whom I'm sure many of you are familiar with; SilverTail Systems, which did behavior analytics; and HP ArcSight, which is also a pretty familiar name. They do a really popular SIEM solution out there. As well as Crossbeam Systems.
So I've touched a broad range of technologies and used a number of tools that are out there in an effort to kind of help protect organizations against all kinds of threats. In doing that, I've also participated in incident response and penetration testing. I've done a lot of work in anti-fraud and governance, risk, and compliance, and then obviously designed security architectures for lots of different organizations, quite a bit actually in the telecom space.
So I'm also the proud husband and father of a seven-year-old son and a five-year-old daughter, and we actually have a dog named Harper, which I'm not sure where that name came from. So again, thank you for the time today. Hopefully this presentation will be thought-provoking and give you some new ideas, techniques, and approaches for protecting your organization. So looking forward to moving into it.
So whenever you talk about warfare or engaging an adversary specifically, Sun Tzu oftentimes comes to mind. He wrote a book called The Art of War, and one of the famous quotes he made in that book is that all warfare is based on deception. Every engagement that we have, every adversarial conflict that we have, there's some element of deception involved. In fact, deception is actually used not just in warfare, but in our everyday lives. So oftentimes, we engage an environment and we don't know that something's deceiving us, or that we're using deception, but we actually are.
But the idea here is to create an appearance that's a little bit different than maybe what's actual reality. Engage a person or a place in a way that allows you to become comfortable, allows you to navigate a certain kind of way. Things like being vague on a particular subject are also a way of leveraging deception. So we actually use deception quite a bit, not only in warfare but in everyday engagement. Attackers are using active deception against us as well. We'll talk a little bit about that in this presentation.
So just really quickly, to kind of state the challenge that we actually face, and I think everybody here is probably familiar with some of the challenges that are out there, there seems to be this unending supply of investment in security products. Case in point, I'll be at RSA in February. There are over 2,600 vendors at RSA, all solving different problems. Those technologies and innovations are very important for the industry. But the problem is that even though there's a large number of capabilities out there, lots of talent and perspectives in terms of ways to actually protect organizations, at the end of the day, we still continue to see on the news various breaches taking place and information being leaked out.
So the idea that at some point, no matter what we spend on security, a breach will occur and an attacker will get into our environment is pretty much a safe assumption at this point for a lot of organizations. What's actually troubling, though, is that despite our significant investments in prevention, firewalling and things of that sort, the advanced attacks still persist, oftentimes for very long periods of time. We're not doing an effective job actually getting down to the bottom of it. So security's quickly becoming a big data problem. It's becoming an issue where we have so much information being generated about what's happening in our environment that it's oftentimes breaking down our processes. Attackers are just still making their way through our various defenses.
What actually gets interesting is that executive management now has to answer to the board of directors. This is becoming such a recognized issue that executive management now has to communicate to upper level management or the board of directors in terms of what's actually taking place. They're actually being held accountable as well. So this is a pretty significant challenge. It's one that we've faced for a very long period of time. So this is where deception comes in. It's an approach that hasn't been used pretty much at all, and now we're actually starting to use it as part of our strategy.
So some of these – these are troubling statistics. These are fairly recent here. The median number of days to detection, despite our investments, has been 146 days. Just a couple of years ago, it was actually over 210 days. So this is a problem. This is an eternity for anybody to reside in our network without being noticed. This has actually been happening in the most recent breaches that you guys may have heard of, one of which we're actually going to touch on here shortly. But an even more troubling number is the fact that 69 percent of organizations that detected that they were breached were notified by a third party.
Now just to kind of put this in context, what this means is that 69 percent of the organizations that determined they were breached were notified by law enforcement. Now if law enforcement didn't notify them of this breach, that 146 days, the median number of days of detection, would be much greater, be much larger. Probably well into 300 or 400 days. That just means that we just have inadequate mechanisms in place to help us identify the various attacks, techniques that take place. Yet still, 85 percent of our budgets go to prevention and very little of our budget goes into detection and response capabilities.
So we're starting to see a shift in this kind of thing. We're starting to see more budgets go to incident response capabilities because we've learned that no matter what our investments in prevention are, there is going to be a point when an attacker makes it into our environment. So now we're spending a lot more money in kind of the reactive part of security, in that we want to be effective at incident response and ultimately remediation, in an effort to kind of shrink the period of time that it takes to actually remediate the problem.
There's another issue with that equation: we're still not investing significantly in early detection. So the first two statistics that I shared would actually still be a problem. They would still be there. It's just that now, once we've been notified, we'll be able to respond quicker. So that needs to improve. Those need to balance out, and we need to get better at those kinds of things and at being able to really respond to attacks that are in motion in our environment.
So kind of here's the challenge. This is really an asymmetrical arena that we're dealing with here. It's not a fair ballgame at all. So what we have is kind of a tale of the tape, if you will. In the gray, we have the attacker. In the orange, we have the defender, we have us.
The problem with this picture is that the attacker only really needs to be right once for them to actually gain access to the environment, or perhaps move into a machine that actually contains sensitive information or access to sensitive environments, that they can then use for whatever their purposes are, whether that's to be disruptive, cause an outage, or maybe exfiltrate data out. It's actually getting a lot easier for the attacker, whereas for us, we're having a difficult time keeping pace. That plays really into this idea that we have so many different products that solve so many different problems, and they're all generating their own bits of information, that this is starting to become a big data problem. It's getting very difficult to actually keep up with the sheer amount of data that we actually have coming out of our various systems in an effort to really try to understand or identify the attacks that are taking place.
The attackers actually know this. So for them, it's very easy to modify their tactics or to create kind of decoys, if you will, to kind of get your attention in other places. Maybe generate more information and make a lot of noise in one place in an effort to mask the fact that they're moving to a different place. For them, it's almost no cost. They're using open source tools and custom-developed tools in order to engage your environment. For us, we continue to have to invest not only in defensive solutions, but also in security tools just to manage the information. With that comes infrastructure cost. With that also comes storage costs. It just gets more and more expensive on a day-to-day basis to effectively manage our environment.
Something critically important though that we should understand as well is that the attackers are very, very dynamic. They make decisions based on what they observe in the environment. So they don't actually have to follow the rules that we actually set in our environment. The difference is that we are very predictable and very static. We have preconceived notions of what we think the attacker will do or where the attacker might go and we create policies that are static that don't grow with the business, that don't change to environmental conditions, that are very predictable and the attacker has ways of really just looking at that environment and making a decision based on what he's presented with. So he's able to move dynamically through the environment without any issues.
Then lastly, we have a large amount of regulations that we have to abide by that limit our ability to a certain extent in terms of what we're able to do and how we're able to access information whereas the attacker actually has no rules. So the environmental conditions are absolutely in the attacker's favor. We actually need to get better in terms of how we actually address security in a more dynamic, less reactive way so that we can get down to the problem and make decisions on how to effectively remediate an issue or prevent an attacker from making it to something sensitive in the business.
So again, this is an asymmetric arena. It only takes once for the attacker to actually win the battle. He only has to get it right once, and we only have to be wrong once, to effectively lose the battle. We have to secure every potential entry point in our organization. We have to have appropriate controls in place to control authentication, or how people access certain resources in our network. We also have to make sure our employees understand how the organization perceives security and why it's important that they don't do certain things, like respond to certain emails, or if they see certain situations, how they actually communicate that to the rest of the business. We have a lot of things that we have to take into consideration to deal with both the technology side and the people side, whereas the attacker just basically has to find the kink in that armor, and that's all he needs to make it through into the business and ultimately start to move around in the organization.
So here's an example of something that happened recently, and probably many of you on the phone are pretty familiar with this particular event. The Department of Homeland Security issued a formal report back in February. It's called IR-ALERT-H-16-056-01. What it talks about is the recent cyber breach or cyberattack that occurred in Ukraine, where attackers went through a coordinated cyberattack against the distribution substations of a utilities organization. Basically, the result was that they impacted the power for 225,000 customers.
Now that's an extremely bad situation for a number of reasons. But what's really interesting is that when [audio skips] deeper into what actually took place here in this attack, it was identified that the sophistication level of the attacker was really consistent with a highly organized and well-resourced adversary. Basically, this was a nation state attack. That does mean something, because there are very significant differences between a nation state style of attack versus an organized crime style of attack versus just some random attacker out for political motivations or other reasons. In this particular case, it was identified that the attacker used tactics and techniques to match, and I highlighted match for a reason, the defenses in the environment of the impacted target.
The problem with this is that the word match means that the attacker engaged the defenses. He didn't try to move around it. He actually engaged the defenses in an intent to understand the defenses. Because this is a well-resourced attacker and a highly organized attacker and because he has the backing of a state sponsorship – for states, it's very easy just to open up a shell company and buy whatever tools that you need, whatever defensive tools that you would need to secure your environment. Right?
So what that means is that the attacker is already well aware of how to circumvent certain security controls, things like FireEye or the various types of firewalls that are out there. AV, for example, is pretty easy to bypass these days with toolkits that are built right into common tools; Metasploit, for example, is one of them, and Kali Linux or some of the other distributions that are out there provide more. So there are a lot of different ways to bypass these various mechanisms, but attackers can also just purchase the new products that come out and learn them.
So it's just a matter of time before the attacker has had time to actually study these environments, develop techniques to bypass certain technologies, and then ultimately employ those tactics and techniques in a real situation, allowing them to continue to go undetected, move laterally in the business, and ultimately make it to something sensitive to the organization like a SCADA network, for example. In this particular case, exfiltration wasn't the primary goal. They probably did exfiltrate data out during this incident, but in this particular case they were seeking to disrupt service, which they did.
Now to kind of peel back the onion a little bit and take a closer look at what actually took place, there's something interesting that actually happened here. Number one, they used spear phishing to bypass the automated controls or static policy. This is in itself significant because spear phishing is not new. Spear phishing has been around for a very, very long period of time. The fact that they used something that engages a [audio skips] is quite interesting. Number one, spear phishing allows you to basically craft an email or something and present it to a broad range of users in the hopes that a person will respond. Then once they respond, you're now on their machine. So all the walls and the moats that we normally create simply go away at that point. They've engaged a person.
The second thing that was done here was that for reconnaissance they used a variant of the BlackEnergy malware. They actually created version three, if you will. There are a couple different variants; they created their own custom version, but it was used basically for subversion of system resources, some data collection, and ultimately exfiltration of the information that they found sensitive. That information is then used externally to develop new tactics and new techniques and perhaps to change what the targets are. But also network monitoring, to kind of understand what kind of traffic is taking place in the environment.
The third thing that they did is they navigated the environment using stolen credentials. This is [audio skips] because every attacker has to enter the environment. They may know some things about the environment but they ultimately have to do reconnaissance and navigate through the environment. You don't just come from the outside and land directly on something sensitive. That takes time. It takes process. Depending on the level of sophistication of a security organization or the attacker, the attacker can make it through significant parts of the business within seconds or he can take a large period of time like days in an effort to go undetected, use low and slow techniques quite often. Then that's reflected obviously in the statistics where the attacker's in the environment for well over 100 days, in some cases 200 days.
But in this particular case, not only did the attacker use stolen credentials, but he also used those credentials to access the VPN that ultimately led them to the ICS network, right, the Industrial Control System network. So this is a common thing. This happens quite a bit. Many of the other breaches in the retail space, in the other verticals, are very similar situations in that a partner was compromised. Then they ultimately make it through the partner network and through a remote connection like a VPN onto a resource in the parent company network, and then eventually, as they engage the environment and move laterally through it, they stumble across information that's worth exfiltrating, and they do just that and monetize that information. So this actually is not uncommon. But in this particular case, it happened over a long period of time at this particular utilities organization. They were able to access critical systems to create that outage.
Now what's actually quite interesting here, though, is that they also decided to disrupt the service. So instead of just staying there for longer periods of time and continuing to collect information, they actually decided to attack the system. They used a modified version of a freeware program called KillDisk to erase master boot records, log files, and other system files for the various operating systems that were there. Then they also connected up to the UPS systems and scheduled service outages. They also went to the extent of denying service.
They actually attacked the telephone service, the call center, so that no customer calls could come through. Normally those calls would allow the customers to express their concerns, so that the utility could capture enough information to hopefully restore the service quicker. So the attackers, quite simply, accessed the environment and did the reconnaissance within the environment. This was not external to the environment, because they used phishing attacks. Then they navigated through the environment through lateral movement, using credential data and other connection information. Then they ultimately decided to disrupt the service, rendering service to 225,000 people inoperable.
So since phishing was used, I want to kind of call attention to phishing because, again, we've spent a lot of time building moats and walls and things of that sort. I wanted to spend some time on phishing because it's unique in that it bypasses all of those traditional controls that we normally have. One way that we can address phishing, beyond the tools that are out there to help with that, is through working with your end users and making sure that they're part of the security awareness program and know what to look for and what not to do. The problem is that it's not 100 percent foolproof. Some people might not make the meeting. Some people may not understand exactly what we're referring to.
But phishing, as a result of those weaknesses in programs and tools, still continues to be the number one way an attacker enters an environment. This is despite the various types of antispam solutions that we have and things of that sort. So the attacker will use malware and C2 activity. They'll be actively [inaudible] in your environment, manually moving through the environment, learning the environment before they create malware and C2 activity. They'll use stolen credentials. It's part of the information gathering process to help them move laterally.
They'll exfiltrate data. But again, if you look at all of these, most of the items that you see here on the right are things that happen after the attacker enters the environment, whereas phishing is the one thing there that's actually being used to penetrate the environment. So this is a big, big problem, and it's a problem that needs to be solved. But this is what lends itself to the idea that a breach will occur, and we may or may not have the means to actually identify the breach, number one, but also identify the attacker if one of these other items has not been introduced into the environment.
So what is phishing exactly? Why exactly does phishing work so well? It's really simple. Phishing is a form of deception. The attacker is actually using deception against us as a way to kind of deceive our employees. So it works off the same principle as social engineering.
You can talk to any pen tester out there; they'll say that 100 percent of the time, if they use social engineering, they will get into the environment. We've worked with pen testers in the past who have pretty robust social engineering capabilities. They would do things like have a female caller on the phone speaking with somebody, with, for example, a recording of a baby in the background, to get some sort of emotional response and ultimately get that person to grant access to an account where they didn't have access before. So it's an effort to impersonate or fool people into responding to a call to action. That includes things like, "Click on the link."
You're crafting an email, for example, with an embedded link. The email looks legitimate, but it has a link in there that perhaps is illegitimate. Or perhaps it's asking the recipient to provide personal information or requesting access to certain things. Maybe you're introducing yourself as the security administrator for the company and you're requesting that they provide you their credentials.
There are different types of phishing techniques out there. You have deceptive phishing, which is the tried and true type, where the attacker impersonates an entire business. So you'll receive emails that look like business emails but actually are not. Spear phishing is highly personalized. Spear phishing happens to be the one that's more often than not used in various types of breaches. So if you look at the breach reports that are out there for what has taken place, oftentimes spear phishing is how they made it into the environment.
We have whaling. Whaling specifically targets CEOs or high level employees. Then there's pharming, which is basically what you go to when phishing doesn't work. Pharming is just a different approach to a phish. Essentially, instead of engaging a human and trying to get the human to respond to something, perhaps because the organization the attacker's targeting is doing a good job with security awareness and other mechanisms that prevent that from happening, you go to pharming, where you actually poison the DNS cache and redirect the victims to a deceptive website that ultimately allows the phish to take place. So phishing is a form of deception. The attackers use it all the time. Deception works quite well for an attacker when attacking a defender.
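To make the deceptive-link trick concrete, here is a small, hedged sketch (not a tool mentioned in the talk; the function names and sample emails are made up for illustration) that flags one classic phishing indicator: an email link whose visible text shows one domain while the underlying href points somewhere else.

```python
# Hypothetical sketch: flag a common phishing indicator, a link whose
# visible text looks like one domain while the href goes to another.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []  # (visible text, actual href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            visible = "".join(self._text).strip()
            # Treat the visible text as a URL if it parses as one.
            shown = urlparse(visible).netloc or visible
            actual = urlparse(self._href).netloc
            # Flag when the text looks like a domain but differs from
            # where the link really goes.
            if "." in shown and shown != actual:
                self.mismatches.append((visible, self._href))
            self._href = None

def suspicious_links(email_html):
    checker = LinkChecker()
    checker.feed(email_html)
    return checker.mismatches

phish = ('<p>Please verify: <a href="http://evil.example.net/login">'
         'https://bank.example.com</a></p>')
print(suspicious_links(phish))
# -> [('https://bank.example.com', 'http://evil.example.net/login')]
```

This only catches one narrow pattern; real anti-phishing tooling combines many such signals with sender reputation and user training, as the talk notes.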
So now let's talk a little bit about how we can maybe use deceptive techniques against the attacker to kind of put the ball in our court and give us home field advantage. Here's how we do it. Number one, we have to kind of take a step back in security and really start to think about the people. At the end of the day, whether it's malware, whether it's phishing, no matter what the attack vector is, there's always a person behind that attack, and that person has motivations.
That person, if he's nation state sponsored, probably has pretty strong motivations. He has emotions. He has certain things that he has to kind of look through and process. People cannot just memorize everything about an environment. They're going to have to record that information.
They don't know everything prior to going into the environment. So they're going to have to study the environment and do reconnaissance. By the way, reconnaissance doesn't just mean using tools like Nmap where you're doing a port scan. Advanced attackers will very rarely do a port scan because it creates a lot of noise.
The attackers have other ways of identifying assets they can connect to, but at the end of the day, the attacker is a person and we have to understand that and we have to actually treat that with the proper amount of respect because static controls and policies actually don't work on people. When people see a wall, they walk around it. They try to find an entry way through that wall. So that's the idea with an attacker. This is a perfectly natural thing. So we have to flip the asymmetry a little bit and start to think about the attacker and think like an attacker and understand the attacker's perspective in terms of what he looks for when he enters into the environment.
So this is really just a matter of perspective. I have a couple of different pictures here to kind of provide an example of what I'm referring to. Normally in an organization, when we look at security, we look at it in terms of a Visio diagram: this is where we have our controls, this is how we have our infrastructure configured, this is how we have our authentication configured, this is how we have disaster recovery in place.
Our firewall policy is set up a certain kind of way. Our IPSs or our FireEye-like solutions for sandboxing are configured a certain kind of way off of SPAN ports and taps. IPSs are sitting inline. We've got a number of things in order to give us visibility, but also to give us some level of understanding of our environment. We represent that environment with a logical IT view like a Visio diagram.
Now the diagram to the right is slightly different. Well, it's actually a lot different. That's because it's a social sciences map that was actually created from LinkedIn, of all places. But really, it's a great example of how an attacker perceives the environment. You see, in social sciences they actually look at things in terms of relationships and degrees of separation. We start off with me, which as you can see there I've clicked on myself, and the site that I actually used to create this map is called SociLab. It communicates with LinkedIn and pulls back all your contact information.
So what happens is you start from me, and then you start to build your degrees of separation and your various connections around you. So you have a source, which is me, and then you have a destination. So when a recruiter engages that environment, they start from the people that they know, their first-degree connections, and they're looking for somebody with a certain set of skills to fit into a certain position. So number one, they're highly motivated. They have an objective.
Number two, they start with things that they know, information that they have; for an attacker, that might be intelligence that they may have gained previously about the environment. Then once they want to go into that environment, they start to look for people that are connected to their immediate connections to see if perhaps they have the skills, experience, and expertise that they need to fill this particular position. So your connections go from one to two to three to four, maybe even to five, and ultimately they find somebody that looks interesting. They engage that person. If they engage that person the right way, they may be able to pique that person's interest. If they engage that person the wrong way, they'll have to stop, start the process all over again, and move on to the next person.
So it's very difficult for recruiters to find great talent because, obviously, great talent, especially in security, is in high demand. But they have to go through this process. They have to look at their connections. They have to look at their connections' connections. They have to figure out where those relationships are, whether there are any recommendations, for example, on the profile. That information is going to guide how they actually approach the next person in an effort to fill the role and ultimately get paid for their recruitment efforts.
So let's compare and contrast that a little bit with an actual network map of an organization from an attacker's perspective. All right, so what you see here is something a little bit bigger, obviously, and it's hard to understand, but this is actually how the attacker views your environment. The blue balls there represent credentials. The red balls represent systems. So when an attacker enters your environment, he has to kind of look at things from the standpoint of systems and resources. Those systems and resources have a certain number of relationships, because their connections are based on the services that those resources provide for those systems and those end users.
Now the second part of this equation is the users, right? The user population has to engage and have relationships [inaudible] resources. So when those users log in, their credentials are stored in memory, and they can then use that application to complete their function, their daily jobs. So when an attacker enters this environment, he has to look at this information. He has to understand how this environment is being used.
He has to collect enough information to make decisions, and then he has to act on that information. So what ends up happening is that it's just a matter of time before he stumbles across a user, much in the same way a recruiter stumbles across somebody with the perfect set of skills. The attacker needs to stumble across a user or a system with either sensitive information or an administrative credential. He has to marry those two up in order to be successful.
So it starts to look a bit like this. Here, in this example, the [inaudible] balls represent admin accounts. Typically, what happens with admin accounts is that they show up in the memory of a lot of different systems in the organization, and in this particular case, that's what's represented. These admin accounts could be, for example, an IT admin account. It could be a backup account. It could be any number of accounts with administrative credentials. So you'll see, especially with backup accounts, that they'll access a broad range of systems in the environment.
Now the systems that you see in the middle are systems that are connected to multiple admin accounts. Maybe there are two different admin accounts, or two different accounts with administrative privileges used for various types of services, that have engaged these particular machines. There will always, inevitably, be a few machines in your environment that have had more than one administrative account used on them.
So if the attacker engages this environment, or if one of the users that owns one of these machines responds to a phishing email, the attacker has essentially landed on a machine that's an absolute gold mine. Because, number one, [inaudible] not only does he have his local connection information, but he also has credential data, privileged credential data, that he can then use to move laterally or make it to the next hop. That could provide something very beneficial for him that he can exfiltrate out of the business, or perhaps a connection to a remote resource in a critical environment like your industrial control systems.
Now what oftentimes happens, though, is that the attacker won't land immediately on a system with these kinds of credentials via a phishing attack. What will happen is that the users who respond to those phishing attacks are often a little bit less sophisticated. So it's further out – it's more in the endpoint space, the laptop or workstation space. It could be the receptionist, for example. But on average, it only takes three lateral moves before you make it to a system that actually contains a privileged credential. That's on average. Three lateral moves can actually happen in a matter of seconds.
So it actually doesn't take a whole lot of time, when you have 100 percent valid information, to move laterally through your business and make it to a machine with either sensitive information or a privileged credential. So assets to the organization are no longer just the assets that are important to the business. They're also any assets that just happen to hold a privileged credential. That could be a laptop. It could be a workstation.
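The "three lateral moves" idea can be sketched as a shortest-path problem over a toy graph of machines and the credentials cached in memory on them. Everything here – machine names, credential names, the topology – is invented for illustration; it is not Illusive's actual model:

```python
from collections import deque

# Toy model: which credentials sit in memory on each machine, and which
# machines each credential can log in to. All names are invented.
cached = {
    "reception-pc": ["helpdesk"],
    "hr-laptop":    ["helpdesk", "backup_svc"],
    "file-server":  ["backup_svc", "domain_admin"],
    "dc-01":        ["domain_admin"],
}
grants = {
    "helpdesk":     ["hr-laptop"],
    "backup_svc":   ["file-server", "dc-01"],
    "domain_admin": ["dc-01", "file-server", "hr-laptop", "reception-pc"],
}
ADMIN = {"domain_admin"}

def moves_to_admin(foothold):
    """BFS over (machine -> cached credential -> reachable machine) hops."""
    seen, queue = {foothold}, deque([(foothold, 0)])
    while queue:
        machine, hops = queue.popleft()
        if ADMIN & set(cached.get(machine, [])):
            return hops  # a privileged credential is sitting in memory here
        for cred in cached.get(machine, []):
            for nxt in grants.get(cred, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return None  # no path to a privileged credential

print(moves_to_admin("reception-pc"))  # → 2
```

Each hop consumes a credential harvested from the current machine's memory; the search stops the moment it reaches a machine holding a privileged credential, which is exactly why a low-value receptionist laptop can be only a couple of moves from the keys to the kingdom.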
It's just hard to determine that upfront, because we tend to look at things from the perspective of where we place certain controls and what our various authentication mechanisms are, rather than asking, "If an attacker lands here, how quickly can he get to something that hurts the business?" So it's important that we start to understand things from an attacker's perspective and then use this information to inform how we deploy or create security policy, and what kinds of tools and techniques we use in our environment to help protect the organization and detect the attacker early enough that he can't make it to the keys to the kingdom.
So here's the process. This is something that every human being has to go through. Remember, this is a people problem, not a problem with technology. This is a problem with people. If you're familiar with the OODA loop, it was originally developed by an Air Force colonel, John Boyd. Basically, it's a set of interacting loops that are continuously in motion. And again, this doesn't just apply to engagements with an adversary.
This is how we, as humans, interact with our environment in our everyday lives. We enter into an environment, like a restaurant, for example. The first thing we do when we enter into a restaurant is talk to the receptionist, let her know that we have a party of four, and then look around to see where we want to sit. Figure out where the bathrooms are. Where's the bar? Figure out maybe what kind of things we might be interested in eating.
Then we ultimately make a decision based on that information, and we act on that decision – go to our table, sit down, and order food. So it's a dynamic process. You're presented with information, you observe that information, you orient yourself within it, you decide based on it, and then you act. You act on your decision, right?
So if you think about your current security models, with your traditional moats and walls and various types of tools and techniques that we use today, how do we impact this process in a fundamental way that's unavoidable? Because this is a very human thing. How do we address the people with our traditional security controls? Because right now, we basically are just presenting an environment to the attacker. The attacker goes through this process and he's making decisions based on the information that he sees. So this works both ways. The attacker actually has to go through this process as well, because he's human as well. Remember, this is a people problem.
So here's where deception comes in. This is where it actually starts to get really, really interesting. What if, when the attacker enters this new environment, instead of just seeing 100 percent valid information that he would then use to move laterally, he could observe a different reality altogether? What if he sees not only real data but deceptive information as well? So we're engaging the attacker. We're feeding him false information.
We know that he has to use information when he enters into the environment to move laterally. We're going to feed him false information so that he has to pick from something, and mind you he's not going to know that he's looking at false information because the information looks the same. So he has to now make a decision or orient himself within the data that he's given. So now he's disoriented. He's not oriented properly. He's off-balance a little bit. He doesn't really know what the reality actually is. Of course, he's completely unaware. He really thinks that it's a perfectly normal situation for him. But now he has to make a decision.
Now what we've done is impacted the attacker's process, the human process, in a fundamental way. It turns out that despite our investments in security, the best way to disorient or engage an attacker is just to lie to the guy or the girl – to use old-fashioned deception as a way to get him to make a wrong decision that's in our favor as opposed to his, effectively making the attacker sort through false positives instead of us sorting through all the false positives. So the approach is fundamentally different. It addresses security in a fundamental way because it impacts the attacker based on his human nature, the things he has to do as a person, as opposed to introducing a new type of technology for him to try to move around or circumvent.
So now he decides incorrectly. Right? He decides incorrectly. He chooses the deception. Remember, in order to successfully move laterally, he has to marry a connection with a credential. So now he's going to decide incorrectly. And by deciding incorrectly, he triggers the notification.
We receive the alert. Because of the nature of the alert, it already has a severity level assigned to it, because it's highly accurate. Because it's highly accurate and there are virtually no false positives – it can only be triggered through the attacker's techniques – we can now use automation to issue a forensic response. Right? At the source, where the attacker resides, at the time he makes the decision.
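A minimal sketch of why a deception-triggered alert can safely carry a pre-assigned severity and drive automated response: planted credentials never appear in any legitimate workflow, so any attempt to use one is, by construction, attacker activity. The credential names and alert fields below are invented for illustration:

```python
# Invented honey-credential names; a real deployment would generate these
# to match the environment's naming conventions.
PLANTED = {"svc_backup_old", "it_admin2", "sql_ro_legacy"}

def on_auth_attempt(username, source_host):
    """Return a high-fidelity alert if a planted credential is used."""
    if username in PLANTED:
        return {
            "severity": "critical",        # pre-assigned: no triage needed
            "user": username,
            "source": source_host,
            "action": "collect_forensics", # automated response at the source
        }
    return None  # real credential: no deception alert

print(on_auth_attempt("it_admin2", "hr-laptop"))
print(on_auth_attempt("alice", "hr-laptop"))  # → None
```

Because the false-positive rate is effectively zero, the alert can skip the usual triage queue and immediately trigger forensic collection on the source host.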
So if you're familiar with the kill chain, there's an issue where there's a large period of dwell time, weeks to months. Then, once you do the detection, it's days to weeks to actually do the investigation. Then it ultimately takes days to weeks to effectively remediate the problem. This is the reason attackers are able to dwell in our environments for long periods of time: they're dynamic and they're making decisions based on the information they see. They're using real credentials and historical information.
So your various solutions, like your SIEMs and things of that sort, see successful logins, and maybe your behavior tools don't see a [inaudible] for certain types of connections. So your behavior tools are not going to come out and say, "Hey, there's an outlier here." The attacker is working through your environment quietly, without really causing any noise or triggering any other tools, because he's using real information. Now, if you introduce deceptive information, you create an opportunity to detect the attacker at an early stage, before he makes it to something critical to the business. Ultimately, because of the accuracy and fidelity of these events, you can now make a [audio skips] decision on how you effectively want to respond and ultimately remediate the problem.
So here's what it takes for the attacker to be successful and just to kind of touch on this. Number one, when he enters into the environment – spear phishing attack or maybe it's the USB key that you found in the parking lot – I use that reference from Mr. Robot. But now that I'm here, how do I, number one, establish persistence in this environment? I need to make sure that I can actually come back to this environment at a later time. I also need to determine the context of this environment as well.
So now that I'm here, I also have to determine how this particular machine is being used, how the user is engaging with this environment, and what kinds of connections are being made. I need to identify the user and maybe understand that user's role in this environment. That will help determine – give me an idea of – whether I need to find a credential with administrative privileges that will allow me to probe deeper into this environment, or look on this particular machine for data worth exfiltrating.
The next question, though, after all these things are taken care of in the reconnaissance phase, is one the attacker has to answer for himself: "Okay, there's nothing here. I have my beachhead now. What's available around me? What endpoints or systems has this person connected to in the past – maybe through the Web browser or FTP, for example? Is this user a business user or a DBA, and therefore has database connections that they've made? Does this person work at this substation? Perhaps they have a certain type of connection or VPN access into systems within that substation."
Once they collect all of that data, they use it to pivot through the environment and move laterally in the business. But the last thing the attacker has to figure out in order to do that is, "Now that I have connection information and connection history, I need to marry that information with credential data." That credential data is going to produce a successful login, because it's a real credential. So this is all information the attacker has to capture. Now what we're talking about with deception technology is poisoning this environment – making this environment look perfectly normal. The same questions get answered, but the data is not real. And the data can be alerted on once it's actually used.
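Conceptually, poisoning an endpoint just means mixing deceptive entries into the same artifact types an attacker would harvest – connection history, cached credentials, saved logins. A toy sketch, with invented artifact and host names; real deception platforms tailor these per machine:

```python
import random

def poison(real_artifacts, density=2):
    """Mix deceptive entries in with the real reconnaissance data an
    attacker would harvest from an endpoint. Names are invented."""
    fakes = [
        {"type": "rdp_history", "host": "scada-hmi-07", "deceptive": True},
        {"type": "cached_cred", "user": "ot_admin", "deceptive": True},
        {"type": "browser_saved_login", "host": "hist-db-02", "deceptive": True},
    ]
    planted = random.sample(fakes, k=min(density, len(fakes)))
    mixed = real_artifacts + planted
    random.shuffle(mixed)  # indistinguishable ordering: it's all just data
    return mixed

real = [{"type": "rdp_history", "host": "file-server", "deceptive": False}]
view = poison(real)
print(len(view))  # attacker now chooses among real and fake artifacts
```

From the attacker's side, `view` is simply the harvested data; the `deceptive` flag exists only on the defender's side of the ledger, which is the point Chad makes about there being no fingerprint to validate against.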
So when you use deceptive techniques as part of your defensive strategy, it enables a few things. Number one, it enables you to create an environment where detection is nearly unavoidable. This actually lasts over time. It's not one of these things where, for example, an attacker catches wind that deception technology is being used and then all of a sudden it's no longer valid or usable. Or maybe a nation state can purchase this deception technology – and there are a lot of different techniques out there – and then reverse-engineer it to the point where they understand or can identify the deceptions.
The problem with that is that deception is purely data. There's actually [audio skips] [inaudible] to actually validate – there is no fingerprint. It's purely data. So what actually has to happen is that the attacker is looking at just data, and there's actually no way for him to tell the difference between good data and bad data. So the delivery mechanism of that data doesn't matter. The attacker still has to make decisions based on data. That's the fundamental thing. It's just data alone.
Now what's interesting about the approach is that if you, say, have a policy in place that's very dense, and the attacker is wary of the environment – he says, "Something doesn't look right," or maybe one of the deceptions is really obvious and he understands that deceptions are there – he still has to make a couple of decisions. Number one: "Okay, do I assume the risk, since I can't really tell the difference between the data but I do get that deception technology is in play? Do I assume the risk and at that point make a decision based on what I see? Do I spend time here to study this information further, to see if I have a way of validating some of these things – which does happen from time to time? Or do I just go ahead and leave and try to find another target that's a little bit easier to deal with?"
All three of those situations are actually desirable outcomes. Because, number one, you create a situation where the attacker is aware that he's at risk. So if he decides to make a [inaudible] while being aware that he's at risk, number one, not only is he slowed [audio skips], but two, the chances of detection haven't changed. You've created a math problem for the attacker where the probabilities are in your favor of detection rather than his favor of moving successfully; even if he manages to make it to the next steps, his probability is decreased significantly.
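That "math problem" can be made concrete under a simplifying assumption that the attacker picks artifacts uniformly at random: if each machine presents some number of real artifacts and some number of planted ones, the chance of making several lateral moves without ever touching a deception shrinks geometrically. This is a back-of-the-envelope model, not a claim about any product's actual ratios:

```python
def evasion_probability(real, fake, moves):
    """Chance an attacker picks only real artifacts for `moves` lateral
    moves, assuming a uniform choice at each step (a toy assumption)."""
    p_clean = real / (real + fake)
    return p_clean ** moves

# 5 real artifacts, 10 planted ones, and the 3 lateral moves it takes
# on average to reach a privileged credential:
print(round(evasion_probability(5, 10, 3), 3))  # → 0.037
```

With a 2:1 fake-to-real density, the attacker has under a 4 percent chance of completing three clean moves – which is the sense in which detection becomes "nearly unavoidable" as the deception policy gets denser.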
Number two, the second part that's in your favor is the fact that if the attacker observes the environment and feels he needs to research it a bit further, that slows his [inaudible] through your environment. What that means is that the attacker is no longer going to move laterally through the business comfortably, because he's wary of the environment he's in. So not only has he gone through a process where he has to accept risk, he's actually slowed down in the process. And if he feels like he needs to speed things up and starts to employ something like a script, what happens is the script will churn through all the data, and now all of a sudden you trigger a lot of alerts, and you ultimately respond in kind and remediate the problem immediately.
The last desirable outcome is that he sees the environment, he senses that there are deceptions in place, and he leaves. A case in point: I actually spoke to a pest control organization, and they told me that that's their value proposition. That's because there are certain pests that you can't ultimately get out of your walls, like pharaoh ants, for example, but the idea is to get them to move somewhere else. So that's the same concept with attackers. That's less so with highly motivated, state-sponsored attacks that are specifically engaged with the goal of attacking particular organizations.
So they tend to be more motivated. But for a lot of attackers out there, they will actually ultimately just leave if they suspect something that's out of their control and that they won't be able to get to. So we want to have those three desirable outcomes. Those outcomes still come even if deception is recognized as being used in the environment.
Now the second benefit of deception as a defensive strategy is the reduction of false positives. One of the big issues we have in our organizations is alert fatigue. Right? Again, this is a big data problem that we're facing, because all the various tools that we're buying generate information related to what they're seeing, and they have any number of reports that are used to massage that information in ways that can actually be useful.
Much of that information is not valuable at all. A lot of it is just noise. Some of it is false positives – not real alerts at all. So it's important that we reduce false positives, not add something else that increases the work, breaks down your processes, and hurts the way you prioritize the various things that happen. You want something in place such that, when you deploy it, because of the way deceptions are deployed and how they're actually used, you have a degree of confidence in the alert that comes out of the use of the deception.
Then lastly, whenever you have a high-confidence alert, you can enable automation, which I think every security organization at some point would like to get to. We just can't, because of the amount of data and the current confidence level in a lot of the information that we receive. So it would be nice to automatically remediate the problem: detect an attack, understand that it's real, have forensic details right there while the attacker's there, be able to [inaudible] immediately, and then automatically quarantine or remediate the problem.
This is something we've talked about for years, but it's been very difficult to do because of the nature of the events coming into our SIEM solutions and other alerting mechanisms. So deceptions provide a great opportunity for us to not only detect early but ultimately investigate and respond instantaneously, and ultimately prevent the attacker from making it to things where he could disrupt the business or attack our power grids or SCADA systems – or, in the case of the Ukraine incident, the substations.
So here's the idea. Number one, understand the relationships between systems and resources, understand the types of connections that take place between those systems and resources, and then look at it from a deceptive standpoint, where everything here, as you can see, creates a math problem for the attacker where detection is almost unavoidable. Then we can expand this out. We go from the attacker seeing a network that looks like this – which is pretty straightforward, very understandable and easy to look at, where the connections are all real – to all of a sudden looking like this: an environment that's deceptive, an environment that's hard to navigate, and the attacker's none the wiser that [inaudible], and therefore he cannot make accurate decisions.
So with that, that concludes the presentation. I certainly appreciate your time. Let's please open it up to any questions you may have.
>>Erfan: Thank you very much, Chad. At this time, if people have questions, they can go ahead and post them online. I will read them. I think we have a couple of them to start with. The first one is more like a comment. It's from Edgar Casilis who says, "They messed with the breakers and the Ethernet to serial converters."
>>Chad: Yeah. That's actually one of the ways they gained access to it. But they have to get there first, right? That's the problem. So there's a combination of things that have to take place before they actually make it to that point. That's where defenses come in. We have to find new and unique ways to actually identify these attackers before they get to that point. So it's a big problem. It's an unfortunate event. But hopefully through what happened, we can learn and start to think about new and innovative ways we can address that problem.
>>Erfan: Yes. Could you go ahead and put your contact information on the screen so that people can see, because Michael Shay is asking for your contact info.
>>Chad: Sure. Let me actually move back to the very first slide, the introduction slide here.
>>Erfan: Okay. Michael – yeah. Yeah, so Michael, you can see at the bottom, Chad@IllusiveNetworks.com. So you sound like one of the original gangsters of Illusive Networks because it's just your first name because usually it's like – all right, very good. So Michael Shay asks another question. He says, "Would it be possible –" yeah, "How much artificial intelligence has been employed in the detection process?"
>>Chad: It's a great question. There's a lot of conversation about AI and whether it's a good thing or a bad thing. I think some folks are concerned about Skynet-type situations. With deception, in order to deploy an effective strategy, it's important, number one, that you understand the environment – not just what it looks like from a connection standpoint, but how the environment is actually being used. So when you develop a deceptive policy, you take into account the conventions and standards of the business.
Now, that can be done manually, or it can be done through a learning process using artificial intelligence. The quality and the craftsmanship of the deceptions is actually what makes them effective. So taking into consideration the actual naming conventions and standards of the business, what types of applications are deployed on various machines – all kinds of information come into play when deciding, number one, what types of deceptions you use and, number two, once you settle on a type and a deceptive policy, how that's actually deployed on a per-machine basis. So deceptions are most effective when you evaluate every single machine that you plant a deception on. That's how you have to approach it.
So AI is oftentimes used in that process. In our case, we actually employ some algorithms to determine what the environment looks like and ultimately build an effective policy. But you get the idea, right? In our case, artificial intelligence is not used as part of the detection, but we do use learning, or some level of intelligence, to craft an effective deceptive policy that can be deployed.
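As a rough illustration of what "learning the conventions of the business" might mean, here is a toy sketch that infers a hostname pattern from real machines and generates decoy names that blend in. The pattern-matching and the names are invented for illustration; it is not Illusive's algorithm:

```python
import re
from collections import Counter

def learn_convention(hostnames):
    """Extract the dominant prefix of prefix+number hostnames, a toy
    stand-in for the learning a deception platform performs."""
    prefixes = Counter(m.group(1)
                       for h in hostnames
                       if (m := re.match(r"([a-z\-]+)\d+$", h)))
    return prefixes.most_common(1)[0][0]

def decoy_names(hostnames, count=3):
    """Generate decoy hostnames that follow the learned convention and
    don't collide with numbers already in use."""
    prefix = learn_convention(hostnames)
    used = {int(m.group(1))
            for h in hostnames if (m := re.search(r"(\d+)$", h))}
    n, out = 1, []
    while len(out) < count:
        if n not in used:
            out.append(f"{prefix}{n:02d}")  # blends in with the real fleet
        n += 1
    return out

fleet = ["hmi-ws01", "hmi-ws02", "hmi-ws05"]
print(decoy_names(fleet))  # → ['hmi-ws03', 'hmi-ws04', 'hmi-ws06']
```

The point of per-machine craftsmanship is that a decoy named to the site's own convention gives the attacker no stylistic tell to separate it from the real fleet.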
>>Erfan: Okay, then Joe Price mentioned that the comment that I had read was for the Ukrainian attack. All right, then the next question comes from Loralee Vas Sudaven who says, "Is Cloud computing making hacking easier than before?"
>>Chad: That's a bit of a misconception. I think the perspective some people have on it is yes, but that's not entirely the case. With Cloud systems, you have one issue, which is shadow IT, where systems are spinning up in an unauthorized way, but generally access to those environments is fairly well controlled. You normally have access to those environments through the use of certificates, which is a great way of securing a credential, and at that point access to that environment is not spread out everywhere.
Now, at the application level, you will have users, for example, who interact with the application. The application could very well have its own vulnerabilities, but there's no differentiation there between the Cloud and a normal environment under that circumstance. The level of access is very controlled. The second thing to think about in a Cloud environment is the fact that not only is access [audio skips – inaudible], defined-lifespan techniques or technologies are being used. In the case of Amazon, for example, Elastic Beanstalk is one of them, where you have an environment that's spun up based on a gold master image – based on the needs of the business in terms of the influx of traffic that's coming in – and then it spins down and those systems go away.
Also, you have situations like Netflix, if you follow what they do. It's actually quite interesting. Netflix employs a set of scripts to attack their own environment. It's called the Simian Army. One of the scripts they use is called Chaos Monkey. That particular script basically just creates chaos in the environment at all times.
The idea is that if I can create chaos in my environment – predictable chaos, if you will, if that means anything – then I can cycle through gold master images that I've continuously patched, continuously hardened, and whose software I've continuously updated. Those gold master images are automatically introduced back into the environment because the demand is there for that service. So, for example, I have a large number of users that require ten machines servicing that user [inaudible]. Chaos Monkey comes in and starts to shut down machines in that environment.
But because of the automation that's involved, and because I need that specific number of machines to service my customers, what happens is those machines are freshly restored with a current gold master image that doesn't have the same problems. So you're automatically refreshing your environment with operating systems and applications at current patch levels. You can also use security events to impact that situation. For example, if I see something in this environment that's changed against policy, then maybe, instead of the normal rotation that Chaos Monkey goes through, I immediately kill off this resource and replace it with something that doesn't have that same problem.
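The Chaos Monkey pattern Chad describes can be sketched in a few lines: randomly terminate an instance, then let automation restore capacity from the latest hardened golden image. The image IDs and instance names here are invented; the real Netflix tooling operates against cloud APIs, not an in-memory dict:

```python
import random

GOLDEN_IMAGE = "ami-2024-06-patched"  # invented ID for the hardened image
DESIRED = 4                           # capacity the service must maintain

def replenish(fleet):
    """Restore the fleet to desired capacity from the golden image."""
    i = 0
    while len(fleet) < DESIRED:
        name = f"web-{i}"
        if name not in fleet:
            fleet[name] = GOLDEN_IMAGE  # fresh, fully patched instance
        i += 1
    return fleet

def chaos_step(fleet):
    """One Chaos Monkey cycle: kill a random instance, then replenish."""
    victim = random.choice(list(fleet))
    del fleet[victim]                   # the monkey strikes
    return replenish(fleet)

# Start with a stale fleet; each chaos cycle swaps one instance for a
# freshly imaged replacement, gradually refreshing the whole environment.
fleet = {f"web-{i}": "ami-2023-01-stale" for i in range(DESIRED)}
fleet = chaos_step(fleet)
print(sorted(fleet))
```

Capacity never drops below what customers need, yet every kill cycle replaces a stale instance with one built from the current patched image – which is the "predictable chaos" security benefit.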
So the Cloud can be less secure, just like anything else. But because of the level of controls, the expertise of the organization that provides the platform, and the various things you can do to properly secure your environment – or leverage things like defined-lifespan configurations – there are a lot of opportunities to do some amazing things and have a very secure environment that services your customers effectively.
>>Erfan: All right, so one thing I would like to say about Cloud computing is make sure that the wolf is not in charge of the hen house. That means that qualifying the Cloud computing service provider is critical. In this day of mergers and acquisitions, you never know when an advanced, persistent threat in the [inaudible] nation state or an organization that has some political or other agenda could be the actual owner of that Cloud computing service provider. So it's very important to do your sourcing before you send all the jewels of your kingdom to them.
The second thing is where it's actually a positive thing is it can actually help reduce the number of attack surfaces in your organization by moving your assets to the Cloud. Then by having a good Cloud computing service provider, as Chad was saying, you can get the best security controls that are out there in the industry because there are people that are dedicated for that purpose, whereas in your organization you may have one person that's multi-tasking and doing multiple things. So they may not have enough bandwidth to deal with the best security controls for your organization. So there's no black or white answer to this.
Very clearly: a) you have to qualify the service provider and, b) when you move your stuff over, have some oversight over what's going on. Don't just hand it over and say, "I know it's trusted and therefore I don't have to worry." You're always responsible for your business application. All right, the next question is from B. Sobra Munyum, who says, "Can you be more specific about what is done? I was unable to gain any new insights about what exactly Illusive does in terms of details."
>>Chad: Oh, okay, great question. We purposely talk about deceptions on a high-level, conceptual basis, but what Illusive does – we're a deception organization. We actually have a product. We're a startup based out of Israel, and our product is a true deception management platform that does exactly what I described. So, for example, you are able to create policy automatically at scale. You are able to visualize your environment automatically, understand essentially [inaudible] of the environment. We show you things from an attacker's perspective, which helps inform how you create policy, but also how you concentrate some of your security controls around areas that have a high number of connections or levels of access.
Then that information is used to create deceptive policy and distribute it across Windows, Linux and Mac platforms. Then lastly – and this is the most important part – not only are we improving the rate of detection, but your time to response is immediate and automatic, based on when the attacker uses a deception at the [inaudible] decision. We respond forensically to the endpoint in question, capturing all the information related to that endpoint, including desktop screen captures and copies of the attacker's tools and scripts.
Then we present that through an easy-to-use, highly functional UI, and we can also generate the appropriate alerts to your SIEM solutions and things of that sort as well. So we're a deception management solution, a true DMS platform. We have a huge breadth and depth of deception categories that cross various types of protocols and credential types, across three different operating systems. So hopefully that clarifies what we do.
>>Erfan: Thanks Chad. Okay, so the next question is, "Any advice for organizations embarking on Cloud journey?"
>>Chad: Lots of Cloud questions today. So that's great. Yeah, it's very important to understand with the Cloud that traditional security controls don't apply well there. That's what I ran into specifically at CloudPassage in my time there, and at Illusive we can actually deploy deceptions in the Cloud as well. So the Cloud is a very, very unique place. Number one, the Cloud has huge resource constraints.
Oftentimes organizations move to the Cloud for a specific cost benefit, but they realize very quickly that the cost benefit, quite frankly, is not there if they're not using it properly. What we run into is that an organization says, "Okay, we want to move to the Cloud," and they feel like they can just basically move an application out there. It requires some time. It requires some development and careful planning. But from a security perspective specifically, it's very important that you do things that use minimal resources, because when you provision an asset, the team that owns the asset in the Cloud is often not the security team – it's oftentimes the DevOps team or the applications team.
So when they provision those assets, they provision them based on the needs of the application, not the needs of the security controls that you may layer on top. The cost benefit is based on the needs of the application. You're paying for bandwidth. You're paying for processing time. You're paying for all those different things. So it's important to keep that in mind. Right?
So definitely investigate tools and solutions that don't consume a lot of resources but give you the information you need. But a lot of it is really leveraging the power of the Cloud in terms of the various capabilities the platform provides, like Elastic Beanstalk and elasticity, or maybe even leveraging tools like Docker, for example, for applications that require a very specific environment. That way you don't have to add too many additional security controls. It creates some optimizations in terms of how your applications are deployed to your customer environment and, at the same time, increases security because of the minimalist nature of the environment. So that would be my suggestion: take a look at those things, and make sure you balance security, resource consumption, and the capabilities of the platform you're interested in.
>>Erfan: So on a humorous note, I saw a person wearing a t-shirt that said, "There's no such thing as the Cloud. It's just someone else's computer."
>>Chad: Yeah, it's funny. I think the Cloud also kind of looks similar to the old ASP model, too. I also used to work for Qwest Communications and I specifically did security for their ASP. We basically provided Oracle and PeopleSoft applications at a remote location they connected to via a VPN. So back then, that was what we now consider to be the Cloud. So yeah. To a certain extent, that is correct.
>>Erfan: I think one of the key things is, if you are an asset owner and you're having difficulty keeping up with the pace of technology to support your business applications, that's usually a good sign that it's time to move to the Cloud. Because if you have stranded assets that you're not leveraging, or you don't have enough staff to support the IT infrastructure, you're actually doing yourself more harm than good by keeping it in house. But as I said, when you move to the Cloud, make sure that your business processes are in place so that the service level agreement that you set up with the Cloud computing company can actually be realized. If you are not organized, you will not appreciate the data that they are providing you. So you have to organize yourselves internally before you just put all of these information assets out into the Cloud.
All right, next is from Loralee – actually before that, Bruce Rosenthal says, "Would you see implementing the deception network into the control network and maybe even out into the substations, or a more controlled deployment just within the corporate enterprise networks?"
>>Chad: That is a fantastic question. It's one that is being explored. There are a lot of different potential strategies there that can be employed. The one part, though, that gives deception a unique benefit is the fact that – in the case of Illusive, for example – we don't actually persist data. We don't actually use an agent. This is an agentless technology.
So that's number one, because oftentimes in the substations – number one, you have bandwidth constraints. Sometimes it's kind of an old low-baud type connection into those environments. I actually did some work in the substations here in Georgia for ArcSight, and one of the big issues was getting log data out of that environment. So I am familiar with that. So deceptions can be deployed for that environment. Oftentimes, there are just not as many systems in [audio skips] for example. But there are high resource constraints and high bandwidth considerations to take into account. So you do have limitations in terms of what you can do in those environments.
Typically, what happens – and this is certainly the case with healthcare as well – is there are certain parts of the business that you just can't access from a traditional security control standpoint, or machines, for example, that you just cannot modify. They're too critical. But you do have to protect access to those machines. There's normally some sort of gateway point, for example, into that environment which subsequently would lead to those machines. So deception is oftentimes used as a means to diffuse an attacker's ability to gain access to that environment.
So on the enterprise side, for example, the attacker would have to navigate. Nobody's going to respond to a phishing attack inside of a substation or inside of a nuclear facility, but the enterprise side does have different tiers within the organization that ultimately lead to access to those types of environments. In which case, you can diffuse the attacker's ability to navigate those various tiers that would ultimately provide him the level of access he needs to create a [inaudible] situation, unfortunately. So deception is a great way of doing that. Absolutely. Detect them early.
In many cases, you're going to detect them on patient zero. In our case, we actually detect an attacker normally on the first attempt to move laterally, but we normally tell our customers within four lateral moves. Also, if you've ever gone through a red team exercise, you'll actually see that deception technology is a really effective way of stopping the attacker from making it to the flag. In our particular case, the blue team has won every time. So we're really proud of that kind of thing. So it is a great way.
I think the question kind of indicated to me [inaudible], there will be a point, though, where we'll start to look for ways, for example, of looking at the environment and maybe creating fake SCADA systems or fake smart grid systems. That's something that we're looking at now, and that's certainly on the roadmap in terms of looking at those kinds of controls and duplicating them, if you will, so the attacker lands on a machine and he thinks he sees a valid system that he can connect to, but it's just a detection for us. So those kinds of things are all techniques that are based and grounded in deception, and that's certainly where the market is actually going as it relates to deception. So great question. Thank you.
>>Erfan: Yes, so if we focus on the energy sector, specifically the electric sector, what we are finding is that most of the distributed energy resources being integrated on the grid are happening at the transmission and distribution level, which is turning the substation into what's called a virtual power plant. Therefore, any kind of deception technology that you're going to use in the enterprise, by necessity, has to come out into the field. Because if you don't, then as we move to higher penetration levels of renewable energy on the grid, we're going to lose the battle by just focusing on the corporate network. Most of the assets are going to be out in the field, and hackers will have physical access to these DER resources, and they can launch attacks from trusted systems up the chain into the substation and then back to the control center the other way. So it's really important that whatever you're doing in the corporate network needs to get out there in the field to protect both ways.
>>Chad: Yep. That's certainly the direction that we're looking at, yep.
>>Erfan: Excellent. Next, how much can training and encouraging secure defensive programming help?
>>Chad: Significantly. The problem is that there are inevitably those employees that don't make it to the training, or maybe they didn't digest what was said or were distracted. Training is a very human thing, and humans are flawed, as we all know. We talked about that throughout this presentation. The security problem is a human problem for a very real reason, and that's because it's very difficult to disseminate information accurately and effectively across a large number of people that could impact the organization, and number two, very difficult to retain that information.
Then, in some cases where organizations have done a great job of that, attackers will change their techniques, like in the cases of the other phishing techniques out there: a phishing attack's not working, so all of a sudden they go to a different technique altogether where they just basically poison the DNS cache. So now you're forcing somebody to go somewhere rather than asking them to go somewhere. So it's very different, and there are different ways to engage a person. The answer to your question, though, in short, is absolutely. You absolutely need a security awareness program, a properly executed [inaudible] awareness program, but you also need to take the appropriate precautions in the organization, from the people, process and technology standpoint, to have effective and robust security training.
>>Erfan: All right. Next question, we have Ethan Berry who says, "How does your technology work with other zero trust networking methods?"
>>Chad: Yeah, so that's a great question, right, because oftentimes the level of access that's afforded under zero trust networking methods is a little bit more restrictive, right. With deceptions, there are a couple of different ways. There's the decoy method, which is basically an attraction method, to basically attract the attacker to another system. Then there's the deception method, where you're engaging the attacker on the machine that he's on. So depending on the environment and the level of access, certain methods might be more appropriate. So you're going to employ a combination of both, to be honest with you.
The difference is in how you respond. That's the trick. How do you respond in an environment that has more restrictive access? The answer to that question becomes kind of tricky, right, because you're limited in the number of controls that you can use. You're limited in the number of approaches that you can use. You'd have to develop ways to either whitelist your approach or your deployment method, or get it to where you are trusted, or put enough decoys or enough detection capabilities around that environment to where you would effectively mitigate the ability of an attacker to even access it altogether.
So there are a number of different techniques that would have to be employed there, but there are some challenges, and it's something that you would have to kind of architect and figure out: leveraging policy, leveraging whitelisting, leveraging connection techniques or deployment techniques. Also leveraging decoy versus deception. There's a combination of things that you would have to employ to make that work.
>>Erfan: Next question from Michael Shay. He asks, "In light of the IoT devices deployed all over, would the scale of the detection effort be exponentially increased? If yes, have there been some ways to reduce the scale of the problem?"
>>Chad: Could you repeat the question one more time? Just make sure I have it all.
>>Erfan: Yes. "In light of the IoT devices deployed all over, would the scale of the detection effort be exponentially increased? If yes, have there been some ways to reduce the scale of the problem?" Because there are so many millions, if not billions, of devices. So how would you deploy a deception scheme over such a large infrastructure of so many nodes?
>>Chad: Yeah, so I'm working with an organization now with over a million assets, over a million systems. IoT has the opportunity to get much greater than that. Currently, we support Linux operating systems and Windows operating systems, as well as OS X, which I think is actually coming up here in the second quarter. So we're focused on those systems that we can deploy deceptions to.
Now there's a difference in approach, or a difference in the level [audio skips] deception has versus decoy. Decoys are actually emulations, or basically virtual machines, that are all consuming address space information, almost like the Internet of Things, if you will, with the attempt of mimicking a device in an effort to attract an attacker to that device. That method, by itself, is inefficient. I'll give you a small number as an example. If I have [inaudible] machines in my network – and IoT obviously gets much greater than that – if I have 1,000 machines in my environment and I want a 35 percent ratio in terms of coverage, or chances of detection – a 35 percent coverage model or protection scheme, if you will – then I have to deploy 350 new addresses, all responding to services on the network.
Those look like real machines. Those are machines that I have to manage. Those are machines where I have to cross my fingers and hope that an attacker can make it there. Those are machines that ultimately can be fingerprinted if the attacker looks for the right things. So that's a problem.
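The back-of-the-envelope scaling Chad describes can be sketched as follows. The function name and the ceiling rounding are my own assumptions; the talk simply multiplies the host count by the coverage ratio.

```python
import math

def decoys_needed(real_hosts: int, coverage_ratio: float) -> int:
    """Decoy addresses to deploy under the talk's rule of thumb:
    decoys scale linearly with the number of real machines."""
    return math.ceil(real_hosts * coverage_ratio)

# 1,000 real machines at a 35 percent coverage scheme -> 350 decoy
# addresses, each consuming an IP, answering services, and needing
# management -- which is exactly the scaling problem being described.
print(decoys_needed(1000, 0.35))   # 350
print(decoys_needed(1_000_000, 0.35))
```

The linear growth is the point: at IoT scale, a million devices at the same ratio means 350,000 decoys to deploy and maintain.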
The other method of detection would be deceptions, which is more focused – pretty much where our approach is. That is, engaging each and every system in the organization and basically poisoning the data on each one – effectively spreading honey across the entire organization. We're going to make every system have information that looks attractive. Now you're not attracted to that machine externally. Once you get on one, you're attracted to the data. You're attracted to certain bits of information.
That, at that point, creates a surface for the attacker to interact with in such a way that he can't avoid detection. In the case of devices where the Internet of Things is taken into consideration, some of those devices run on Linux, in which case there may be opportunities to deploy deceptions in those environments. Some run on legacy Windows operating systems. There are a lot of systems that are still running on Windows XP, for example, and even earlier in some cases.
So for those types of systems, there are opportunities to deploy deceptions in those environments. But for proprietary firmware and things of that sort, that's something I think the industry as a whole is still working on – trying to figure out how we can mimic those kinds of devices, or maybe create pathways to those kinds of things that look interesting to an attacker. As of right now, we can actually create deceptions that look like those devices, or at least connection data that looks like you can connect to one of those devices. But in terms of planting deceptions on a device like that, that's something that, as an industry, I think we're going to evolve into, taking those things into consideration. So great question. I certainly appreciate that.
>>Erfan: Okay, so we have seven questions left and like eight minutes. So if we can just keep the answers brief, then we can get to all of them. So Bruce Rosenthal says, "As a follow-up, would you implement the deception across critical information resources, like geographic information system datasets that are in part used to generate a distribution network state map, or does that start getting too risky in terms of impact on core grid operations and reliability?"
>>Chad: So those are extremely critical systems. We have [inaudible] that make decisions every day whether or not to deploy deceptions on those servers. Keep in mind that an effective deception management system only deploys data. It doesn't actually impact the underlying server or application. So the answer normally is, yes, deploy deceptions everywhere. But it's really determined by the customer and the criticality of that asset.
If that asset is so critical that it can impact things on a massive scale, the risk of deployment may not be worth actually planting deceptions. It just really depends on the organization. Oftentimes, it's determined through testing, though. So vigorous testing is certainly recommended for any deception solution, much less ours, to ensure that the impact to the system is not going to be there. But in our case, we don't really impact the system much at all.
Now there also may be certain responses that you can't do. Forensic responses, for example, against those machines. You may actually have to disable forensics against those machines, because forensic responses take up resources. You're creating a snapshot, and so there is some CPU consumption there. That's your potential impact point. That's just forensics in general. That's just what happens when you do forensics. So a great question. It's something that we run across quite often, and it's just something that we have to decide on a case by case basis.
>>Erfan: One thing to do with complex networks is use segmentation. One way of doing that is by having a merging unit with multiple devices connecting to it. The 255.255.255.252 subnet mask is really good at creating these individual subnets with pairs of IP addresses for each link. Then you can use access control lists to block traffic from moving from one segment to another. That way, you can have millions of nodes, but they're not all in one flat network where if you get to one, you can get to all. So creating hierarchy, segmenting, and using access control lists helps reduce the attack surface.
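The 255.255.255.252 mask Erfan mentions carves an address block into /30 point-to-point subnets, each with exactly two usable host addresses. A quick sketch with Python's standard `ipaddress` module (the 10.10.1.0/24 block is just an illustrative example):

```python
import ipaddress

# Carve a /24 into /30 point-to-point links (mask 255.255.255.252).
# Each /30 holds four addresses -- network, two usable hosts, and
# broadcast -- giving the "pairs of IP addresses for each link"
# described above.
block = ipaddress.ip_network("10.10.1.0/24")     # example address block
links = list(block.subnets(new_prefix=30))

print(links[0])                  # 10.10.1.0/30
print(list(links[0].hosts()))    # the two usable endpoint addresses
print(len(links))                # 64 point-to-point links per /24
```

With each link isolated in its own tiny subnet, ACLs on the router interfaces then control which segments can reach which, so compromising one node does not expose the rest.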
Okay, next question is Ravi Kumar Geli. He says, "Would an effective co-simulation of power and cyber system test bed design provide a way for simulating various cyberattacks and defense mechanisms? If yes, can you put some light on this?"
>>Chad: So you broke up a pinch in the question. Could you repeat that question one more time, please?
>>Erfan: Sure. "Would an effective co-simulation of power and cyber systems test bed design provide a way for simulating various cyberattacks and defense mechanisms? If yes, can you put some light on this?"
>>Chad: No. It will help, but the problem is that this [audio skips] – this is not a static thing at all. The moment you've tested every known scenario, there will be new scenarios that come out. So it's very difficult to account for every single thing that could happen now or in the near future. New techniques are being invented all the time – new approaches, new methods, whether it's physical, social, or however they want to access various environments. They come out every single day by the thousands. So it's not something that you could 100 percent rely on. It's going to have to be a combination of that as well as the normal due diligence that you would do in protecting your environment, and continuing to make sure that you understand what's out there – new approaches, threat intelligence, and defensive strategies to protect that environment. Thank you for the question.
>>Erfan: Next is Alex Amir Nova [inaudible]. He asks, "So are you deploying decoy honey pots or something else like an IDS?"
>>Chad: No. Deception and IDS are completely different. An IDS or IPS is an appliance. You would traditionally deploy it immediately behind your access points. They generally use signatures; some of them inspect ports, some have behavior capabilities. But they approach things from a network standpoint.
Deception plants data on the endpoint – literally data. You're going to plant data in memory. You're going to plant data on disk. You're going to plant data in the registry. Various types of applications, various types of credentials, various types of connection types – all types of information that the attacker would actually use to make decisions on where to go next in the business. So it's a fundamentally different approach.
Now in order to do that, some of those deceptions are network-facing. What I mean by that is, say, for example, you have an FTP connection that you've planted on the machine – a deception FTP connection. That FTP connection has to be [audio skips] to another machine, and the machine that responds to it ultimately has to interact with the attacker. The difference between deception and a decoy or honey pot is that the honey pot has to interact with that attacker or that connection in a way that looks realistic, in an effort to buy time. The reason why it needs to buy time is so that it can actually study all of the traffic coming through and determine which connections are real attacks and which are not – just somebody accidentally connecting to it. That information can then be used by your forensics team to investigate throughout the network where the attacker was prior to making it to that destination.
Deception is a little bit different. With a deception solution like ours, we actually engage the attacker by feeding the attacker deceptions on the endpoint that he's on. We use those deceptions to resolve to a trap server, which is like a honey pot but isn't relying on an attraction method. Then we respond forensically to the endpoint that the attacker is on, while he is there, so that you can automatically receive pertinent information related to the attack – what kinds of tools and techniques he's using on that machine – and ultimately remediate the problem while he's there, on patient zero, rather than having to rely on a reactive process of investigating things through a variety of tools to figure out where he came from. So there's a fundamental difference in approach. Because of the difference in approach, there's a difference in how we actually impact your time to detection as well as how you respond to that situation.
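The trap-server idea can be illustrated with a minimal sketch – this is an assumption-laden toy, not the vendor's implementation. The key property is that the listener's address exists only inside planted deception data, so any connection to it is, by definition, suspicious: no legitimate workflow ever points there.

```python
import socket
import threading

ready = threading.Event()
hits = []   # source hosts that touched the trap -- each one is an alert

def trap_server(host="127.0.0.1", port=2121):
    """Listen on an address known only to planted deceptions."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    ready.set()
    conn, addr = srv.accept()
    hits.append(addr[0])                  # this host is patient zero
    conn.sendall(b"220 FTP ready\r\n")    # keep the illusion up briefly
    conn.close()
    srv.close()

t = threading.Thread(target=trap_server)
t.start()
ready.wait()

# Simulate an attacker using the planted "FTP" deception connection:
attacker = socket.create_connection(("127.0.0.1", 2121))
banner = attacker.recv(64)
attacker.close()
t.join()
print("high-confidence alert, source host:", hits[0])
```

Because the trap never advertises itself, it produces essentially zero false positives – the opposite of a decoy that sits on the network hoping to attract traffic.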
>>Erfan: Great. Next question from Ethan Berry. "How does your technology complement or overlap with privileged account and session management like CyberArk?"
>>Chad: Yeah, that's a fantastic question. CyberArk is a wonderful tool. One of the things that's on our roadmap is actually a CyberArk integration. CyberArk is a way of managing credentials, for those who don't know, where you can basically generate an appropriate administrative credential for one-time use on a given asset. That's certainly important for what we do, because we want to engage the assets on an on-demand basis.
So our integration with CyberArk is going to allow us to leverage those one-time credentials or complex passwords to engage, on an on-demand basis, various environments that require that level of control. So that's a very important tool. I would certainly recommend any organization purchase or take a look at that solution. But yeah, we do need to interact with it and work with it well, and that's what we're actually working on now. Thank you for the question.
>>Erfan: Next question, a follow-up question from Alex Amir Nova, who says, "How do you make sure an attacker can't tell the difference between real system data and fake?"
>>Chad: That's a fantastic question and one that comes up quite a bit. The difference between deceptions and, say, breadcrumbs is that breadcrumbs are really easy to tell apart, because they don't have a lot of the details that are required for certain types of connections. So while deceptions are similar to breadcrumbs, the difference is in the craftsmanship. The craftsmanship of the deceptions includes pertinent information.
Credential deceptions, for example, would include Active Directory or directory services related fields that are fully populated as if it's a real credential. That credential can then be married with a perfectly valid credential in Active Directory that's not used or is in a disabled state. So the idea here is to create an environment where the attacker can't tell the difference. It really depends on the craftsmanship [audio skips] of the deception itself. So when an attacker goes and uses various tools and techniques to pull that information and he compares it, there's no way to tell the difference in the data.
Now one giveaway for an attacker – the part that creates false positives, where you start to get a lot of noise with deceptions – is how they're deployed. In our particular case, we've gone to great lengths to ensure that we deploy our deceptions in a way that can only be uncovered by the methods an attacker would use to collect that information. So for example, modifying browser databases as opposed to adding favorites through the browser itself. When you do it that way, it actually doesn't show up through the UI. So your traditional users wouldn't actually see that information, and therefore they wouldn't stumble across that data and inadvertently create a false positive. But the attacker would copy and dump that database – and that's across all the various browser types that are out there.
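The browser-database technique can be sketched in miniature. This is a toy illustration only: real browsers keep history in SQLite files with browser-specific schemas, so the table, fields, and URL below are all assumptions invented for the sketch, not Illusive's mechanism.

```python
import os
import sqlite3
import tempfile
import time

# A stand-in "browser history" store with a simplified, hypothetical
# schema (loosely modeled on the kind of SQLite table browsers use).
db_path = os.path.join(tempfile.mkdtemp(), "History")
con = sqlite3.connect(db_path)
con.execute("""CREATE TABLE urls (
                 id INTEGER PRIMARY KEY,
                 url TEXT, title TEXT,
                 visit_count INTEGER, last_visit_time INTEGER)""")

# Plant the decoy straight into the database file, never through the
# browser UI -- so an ordinary user browsing history or favorites never
# sees it, but an attacker who copies and dumps the file will.
con.execute("INSERT INTO urls (url, title, visit_count, last_visit_time) "
            "VALUES (?, ?, ?, ?)",
            ("https://finance-portal.corp.example/login",   # fictitious
             "Finance Portal - Sign In", 42, int(time.time())))
con.commit()

# What the attacker's dump of the file reveals:
for url, title in con.execute("SELECT url, title FROM urls"):
    print(url, "|", title)
con.close()
```

The asymmetry is the point: only someone harvesting the raw data store – attacker tradecraft, not normal use – ever encounters the decoy, which keeps false positives near zero.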
Other things that you would do are maybe put certain kinds of deceptions off the beaten path or, for example, plant a deception in memory with the appropriate information populated in the properties of that deception so that it looks similar to the corporate data. Now there are constraints in planting a deception and certain types of information. For example, we've worked with some organizations that have well over 30 fields in Active Directory populated for their accounts, whereas in creating a deception and putting it in memory, there are only so many fields that we can actually populate using that process. So the counter for that is to create a second validation point, because there's going to be a point where attackers develop techniques and tools to very quickly try to validate certain things that they see.
The first one is going to be credentials. It actually is credentials. If you've gone to DEF CON, they actually talked about it there last year. They'll actually enumerate Active Directory. They'll look at the credential itself. So if you create multiple validation points that actually solidify the fact that your deception is real, then that's how you fool the attacker. Deception is an imperfect thing, because you're only afforded a certain amount of access when you're doing things outside of the way a certain operating system is designed. The idea is to create multiple validation points so that the attacker will be none the wiser.
>>Erfan: All right, next question. Hasib Budani asked if you were showing other slides than the Who Am I one. So I guess he came on late and thought that you were not progressing. But we had just moved to that slide to show his contact info. We've already gone through all the slides. Yeah, that was interesting. Hasib, by the way, presented earlier. He's with Soha Networks, and Soha got purchased by Akamai Technologies recently.
>>Chad: Fantastic. Congratulations.
>>Erfan: They provide security to enterprise customers. Now next question we have is from Chris Potts who says, "As processing capability increases – HBT, Quantum, et cetera – are you concerned that bot detection algorithms will gain greater capability to detect deception faster [inaudible] establish data pulsification protocols in essence countering the counter?"
>>Chad: That's a deep question. Great question. That's something that we'll definitely have to think about. Remember, there's no way to tell the difference between the data, but there will be ways to increase the [inaudible] that can be done. Now the key thing is that if you're using those kinds of tools and technologies on the defensive side, you're fine.
But if the attacker is all of a sudden able to – maybe at some point they have a quantum laptop or something like that – engage an environment with the level of processing power to churn through information, you've got to keep in mind that oftentimes the only way to validate data is to use the data first. So that's why it's critically important that deceptions as a countermeasure have a broad set of capabilities. Because if an attacker is able to validate or identify, through whatever means, one deception, that doesn't necessarily mean that the risk posture of that asset is now compromised.
So if it gets to the point where an attacker has access to that level of capability to try to discern what is deceptive and what is not, you've also got to keep in mind that what he sees is purely on that one machine. If he moves on successfully to the next machine, he's going to be presented with an equally complex problem, and on the next machine he may not have those same processing capabilities. So there are a lot of things to take into account there, but hopefully we don't get to the day where everybody has a laptop where they can basically figure out your future just off of processing alone. But if it does come, then there'll be innovation out there to meet that challenge, I'm sure.
>>Erfan: So one thing to note here is that when you develop deception nodes, make sure that the data in those deception nodes is as close as possible to the actual system's data. The farther away you are from normal data, the easier it will be to identify those patterns. I'm talking about the Modbus TCP, OPC, and IEC 61850 type protocols that we have in the energy sector: voltage regulators, capacitor banks, and other types of DER equipment have very finite ranges of values that are acceptable in normal modes of operation.
So if the deception nodes are also staying in that range, it doesn't matter how good your computing capabilities are; it will be very hard to distinguish real from fake. But if you have static data that is not changing with the environment you're in, yes, then it will stick out like a sore thumb. A video comes to mind: right after Muhammad Ali wins a boxing match, the interviewer standing in front of him is told to keep the camera moving because Ali is so fast. Deception nodes need to be just as dynamic as Muhammad Ali to keep the other guy guessing.
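Erfan's point – decoy telemetry should drift like real telemetry, but stay inside the device's narrow operating band – can be sketched as follows. This is a toy illustration, not any vendor's implementation; the nominal value, band width, and drift size are assumptions chosen to resemble a distribution voltage regulator reading near 1.0 per-unit.

```python
import random

NOMINAL_PU = 1.00   # assumed nominal per-unit voltage for the decoy
BAND = 0.05         # stay within +/-5% of nominal, like a real feeder

def next_reading(prev: float) -> float:
    """Drift slightly from the last value, clamped to the valid band.

    Dynamic but bounded: the stream changes every sample (so it does
    not stick out as static), yet never leaves the range a real
    regulator would report in normal operation.
    """
    candidate = prev + random.gauss(0.0, 0.005)
    low, high = NOMINAL_PU - BAND, NOMINAL_PU + BAND
    return min(max(candidate, low), high)

reading = NOMINAL_PU
for _ in range(10):
    reading = next_reading(reading)
print(f"latest decoy reading: {reading:.4f} pu")
```

A static value or one that wanders outside 0.95–1.05 pu would be the "sore thumb" Erfan describes; the clamped random walk keeps the decoy consistent with what Modbus TCP or IEC 61850 polling of a real device would return.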
>>Chad: Yeah. It's not only the node itself, but it's actually the protocol. The common misconception is that the response has to be exactly the same as the real thing. Actually, it's the properties of the deception that have to be very close to the real thing. The response purely depends on how you engage the attacker at the moment they use that deception.
So for example, if I didn't engage the attacker immediately when they use a deception, then I would need a response that's extraordinarily accurate, just so the attacker continues to believe, over a longer period of time, that he's working in the environment he thinks he's working in. But if your response is automated and instantaneous at the moment he makes the decision to use that deception, then all of a sudden the level of interactivity and the time it takes to interact with the attacker goes down. You pair that with randomization throughout the environment, and now you're in a situation where the attacker absolutely cannot win. So even if he uncovers or identifies one deception and moves on to the next machine, he's presented with a different but equally complex problem, and the probabilities get more difficult as he goes along. So thank you for the feedback. It's great.
>>Erfan: All right, wonderful. We'll just wrap it up in the next couple of minutes. We have two more final questions left. Both are from Ethan Berry. One says, "Look forward to a follow-up discussion and demo of your technology and methods." That's good. That requires no response except applause.
>>Chad: Thank you.
>>Erfan: The next one is, Ethan Berry says, "What development tools and techniques have you used to try and circumvent your technology?"
>>Chad: Well, I can tell you, as a person that supports the technology and would like to see more and more people use it, I don't spend a lot of time developing ways to circumvent it. But I can tell you that people much smarter than I have, and they failed miserably. I'll give you an example. Red team exercises are a fantastic way to validate any deceptive type of solution. Actually, red team exercises are a great way to validate a lot of different solutions. Pen tests, not so much. A pen test really tests whether or not you can be effective at entering an environment, but we all know that phishing attacks work quite well, and social engineering too. But red team exercises actually test the efficacy of the [audio skips], and that's where it's critical.
So just to give you an example – and we've gone through several red team exercises; so far the blue team has won every time – in one particular case, we had a nation state level attacker who was very proud of his capabilities and argued quite a bit with us. He didn't feel like we would be able to do anything with him, and he felt like he would be able to identify deceptions quite fast. He was in an environment with three different segments. He rewrote tools like Mimikatz, for example, during that engagement. He also had access to tools and backend information about the operating system that quite frankly is not publicly available – techniques and tools that he's developed himself and hasn't distributed to the public.
So he engaged the environment. Full disclosure: he understood that deceptions were in the environment. He actually understood what deception types we deployed in the environment as well, and because of that, he took his time in trying to validate whether or not he was looking at a deception. In fact, he created a Notepad session that included everything he thought was a deception. Long story short, he generated five alerts, which is the goal. Right? You don't win a red team exercise while generating alerts. The goal of the red team exercise is to see if the blue team can actually detect the attack. He was not able to make it to the flag. He actually asked for additional time.
He was visibly frustrated because for attackers of that ilk, of that level, it's personal. They have a reputation to maintain, and this particular person had never lost a red team exercise. So he was not able to make it to the flag. The blue team won. It was the first time in history the blue team won against this particular gentleman.
In the meantime, we actually captured screenshots of his desktop that showed the list of deceptions he had identified. Actually, it was a Notepad session called "Deceptions". Out of 26 systems he had listed as deceptive, 19 were actually real systems and only 6 of them were deceptions. It just so happens that there's a strategy used in the deception space called deceptive deceptions, if you will. Actually, a counterpart of mine made that one up.
But basically what it entails is that, in order to instill confidence in your well-crafted deceptions, you actually create bad deceptions that are obvious, so that the attacker feels like he has you figured out. So it raises the confidence level and the impact of the real deceptions you have there, and it increases your odds of detection. So those techniques are all employed when we work with our customers, and it works quite well.
>>Erfan: Thank you very much, Chad. I have to applaud your stamina to present and answer questions for an hour and 45 minutes. So that's pretty incredible. I also want to thank all our participants today for their patience as we've gone through this presentation and sat through Q&A. At the top, we had 150 people online. So that's incredible.
Over 300 of you have registered to see this recording. Some logistical notes; we will have the presentation and the recording link sent to you by Monday. To all those who registered, no worries there. Chad, do you have any final comments or thoughts that you want to share with the audience?
>>Chad: No. I certainly appreciate the time, guys. It was great answering your questions. Thank you for the opportunity to talk about deception. Hopefully this conversation was productive and you felt it was time well spent. I hope to meet you at some point down the road. I will be at RSA. We have two booths at RSA. So if you are going to be at RSA, feel free to come by and visit. I will be there, and I would certainly like to speak with you further if you're interested in seeing the product and talking a little bit more. Thank you for your time.
>>Erfan: Wonderful. For those of you who have participated, you have Chad's contact information. Please reach out to Illusive Networks to learn more about their technology and invite them to your locations for product demos. All right, so our next webinar is going to be on February 17th, and it will be a company out of Fremont, California called Dataguise. They'll be presenting another aspect of cyber security that involves databases. So I hope that you will attend that. I will be sending out the notes for that next week. With that, thank you, Chad. At this time, I'm going to bring the recording to an end and end the webinar. Thank you very much.