Introducing AgilePQ DCM (Digital Conversion Module) — Video Text Version

Below is the text version for the Introducing AgilePQ DCM (Digital Conversion Module) video.

>>Erfan Ibrahim: Good morning. This is Erfan Ibrahim of the National Renewable Energy Lab. Happy New Year. Welcome to 2017 and our first webinar of the NREL smart grid educational series. Today we have three speakers affiliated with AgilePQ who are going to be presenting on the subject of cryptography and showing some innovative approaches that can help in deploying encryption technology on critical infrastructure where it typically has not been possible, because of the memory and processing resources that the more traditional approaches to encryption require.

Before we start, I just want to say a few words about just the general topic of cyber security and then get into the presentations. For those of you who don't have much of a background in cyber security, cyber security is not a monolith. There's not one thing you do to make things digitally secure.

We talk in terms of confidentiality. Confidentiality has to do with making data available only to those who are authorized to see it. Integrity of the data has to do with ensuring that from the time data is generated, through transmission and storage, its contents are not modified. If it is modified, there has to be a marker on it that says it got modified on this date by so-and-so.

That whole area has to do with the integrity of the data. The third area is availability, which has to do with what percentage of time when you're accessing a piece of data it is available to you so that you can make decisions on it.

There are two other nonfunctional attributes in cyber security that are also important in addition to these three. Those have to do with accountability, so that when someone does something it's attributed to them, and reliability, which is really availability in the long run: over weeks, months and years you check what percentage of time the application and the data are available.

These are called nonfunctional attributes, and the cyber security technologies that we purchase and implement in infrastructure are all meant to help enable them. There's not, as I said, one thing you can do to satisfy the whole subject of cyber security, and there's not one thing you can do to fully enable these nonfunctional attributes. It is a combination of technology, best business processes, training, and then policy at an organizational level. When all of those work together, then you can say that you have relatively secure systems and can make data and applications available to the authorized users who need them.

So with that background, we now are looking at confidentiality. Cryptography is directly connected to that because you want to make things available only to those people who are authorized. By various techniques you can obfuscate data from unauthorized users and only allow certain people to see it. With that background, I now give the floor to Doug, Bill and Greg to take it from here. So, Doug?

>>Doug: Thank you for handing it over, and thank you so much for inviting us to participate in the meeting today and share an overview of this company that Bill and I have come across, which seems to have established a pretty remarkable technology. I'm actually under the weather today, so I'm not going to do a lot of the talking. I'll pass it on to Bill and Greg.

But our goal is just to share some of the feedback that we've received from dozens of conversations we've had with folks in the energy sector related to the area of cyber security: what their fears, hopes, aspirations and dreams are. We want to give you an overview of the capabilities of this company. The goal for our session, just to set up where the company is at this stage, is that it's a technology that's been thoroughly tested at this point.

There is a pretty major rollout occurring with folks at the Microsoft Azure team as the first mechanism to actually secure an endpoint or sensor directly to the cloud. That's rolling out this month. We are very excited about that. It's been validated in some other test beds as well. But when it comes to actually determining the best ways for it to connect into the energy environments, what we've been doing is our homework: talking to the people who would ultimately be possible users and trying to learn as best as possible where we can have the most impact.

We've heard quite a diverse set of feedback in terms of where the best fits might be. That's a great thing from the vantage point that we can fit in a lot of places, but it's also a challenge, because we're looking to figure out where to go first. So with that, I'll let Bill introduce himself and how he got involved with the company.

>>Bill: Okay, thanks Doug. As Doug was saying, this is a company that we both encountered through other areas that we were working in related to energy policy. In my case, as you probably noticed, I was most recently senior advisor to Energy Secretary Ernie Moniz. One of the topics that came to the front of mind very frequently was cyber security and the importance to the nation to improve what we've got there.

It's amazing how much that has moved from, if you will, the technical discussions of specialists, probably before Thanksgiving, to now being a front-page issue almost every day. That hasn't shed a whole lot more light on the subject, but it's certainly brought a lot of public awareness, which I think will cut in both directions: it makes sure the subject gets addressed, and hopefully it gets addressed in a way that's helpful to the industry.

Now we know that in earlier NREL webinars the topic of cyber security has come up, and from the federal roadmap presented last year, at least as far as DOE is concerned, the focus has been on five factors. The culture of thinking about security. The risk assessment and monitoring of systems to make sure that we are maintaining integrity and so on. The development of new protective measures. The management of incidents, where one of the most important elements is quickly sharing information when there is evidence of an attack, along with communication with the public and the issues of controlling any cascading failures that might happen.

Then of course there's the sustainment of any security improvements, and as Erfan has said, there are no silver bullets here. This is basically a subject that there will always be room for improvement and on occasion there will be serious needs for improvements. I think one of the things we have seen recently is the threats from cyber have moved from annoyances to nation state level threats that are of importance not only in strategic jostling between us and some of our major adversaries, but also protecting our systems from attack from people who are actively hostile at all times towards our nation and our people.

As we provide this overview, we want you to take lots of notes about things that you'd like to give us feedback on, because there are lots of capabilities embedded in this advanced encryption, and one of the things we want to learn from you is where you would most wish you had advanced encryption, or an ability to encrypt elements of your system that you can't today.

Let's turn then to the current environment. As Doug mentioned, we've talked with dozens of leaders, both in the industry and on the policy side, and we know that the rates of attacks have been growing, both in numbers and in sophistication. As the industry looks at how to protect itself, it gets some guidance from the feds and NERC, but who is responsible for what is a rather vague concept at this point.

There's no clear delineation between the feds, who, if you will, know the most about the nature of the threats, and the state regulators, who have to authorize any expenditures and create the incentives around reliability and so forth. So how are a company and its technical experts supposed to address these issues and make decisions with the lack of clarity that we've got there today?

Certainly, boards are paying a great deal of attention to this. Many members of boards that I've spoken with say this is a major topic at almost every utility board meeting. They need to develop risk-management approaches and they will be turning to the cyber professionals and the operation engineers to figure out a path forward there.

Even just broadly, what's the public responsibility and the private responsibility? I've heard the somewhat stretched metaphor that if these threats were happening the way the responsibility is falling today to an electric company, it would be as if the airlines in the early Cold War of the 1950s had to provide fighter jet escorts for their passenger jets crossing the Pacific or Atlantic. Obviously, everyone would consider that absurd. So how do we work this through?

Of course, another very important element here is the Internet of Things, which is viewed as having so much promising potential for greater efficiency, for more services and so forth. But as we've already seen, there's been a denial of service attack that used many devices that have Internet connections but were easily penetrated. We've got to deal with all of that as we move forward here too.

The search is on for ways to provide effective protection at reasonable cost. I think also very important is to be able to do it in a reasonable timeframe. Because if we wait for the normal standard setting processes, I think everyone's going to have been so penetrated that it could be too late to deal with things in an effective protective way.

Broadly, within the challenge of the vulnerability, the key point here is that we all know computer power, thanks to Moore's law, continues its very rapid advancement. The standards that have been developed, mostly through NIST leadership, but referenced by NERC and therefore approved in several cases by FERC, apply the AES standards. As our picture shows, we have already developed computer power that can beat the protection afforded even by AES-256, while many of the standards still require only AES-128.

So we've got a situation where you can be in compliance but you may not be secure. What we are talking about is obviously an increasingly hostile environment, as illustrated by those exponential increases in hack attempts, and what we believe the AgilePQ DCM capability provides is a way through this.

In our conversations, we've had a number of areas come up as places where the DCM technology might be applicable. Here is a list that we would be delighted to discuss with the folks on the call: which of these are most important for you to have better solutions for? I think one of the things we've heard often is, "Well, I have put AES-128 in place." We would like to talk about that. Are there reasons you find it adequate? Or are you just comfortable because it does have you in compliance?

We would like to stimulate a conversation around that, because there are areas beyond energy where this technology is quite important. We want to focus within energy on the areas where it can be most helpful to our national infrastructure. So issues around sensors are one thing that comes up. We can provide an ability for encryption that is applicable to industrial control systems and can run on remote sensors.

That's an area where the DCM's ability to draw quite low levels of power and to be physically very small appears to allow the introduction of bump-in-the-wire solutions in places where it has not been possible to date. There are also issues around securing and authenticating commands used in SCADA or other ICS systems, and we'd like to understand what's most important there.

Maintaining secure operation in the real-time environment is something that's also been brought up to us. Our system is rather quick because of the way that the encryption occurs, and that allows you to minimize time delays. Cutting down those time delays makes the system work better. In fact, we've heard that there are often times when encryption is not in place because it takes too long.

Of course, customer data is an area that regulators bring up a great deal, and minimizing the power draw, we've talked about that. Another important element here is how you can make it clear to the feds or the state regulators that this is not gold plating. I've heard often from many a public utility commissioner, "I want to make sure our systems are secure. But frankly, I don't have the expertise on my staff to know when a level of security is appropriate and when it's not." We want to hear your feedback on all of those elements.

Let's move into the more technical discussion. I want to introduce Greg Ward who's the leader on technical product development at AgilePQ. Greg, please tell the audience about the company.

>>Greg: Okay, thanks Bill, appreciate it.

>>Bill: You're welcome.

>>Greg: As Bill mentioned, we'll dive into a little bit of detail that kind of backs up a bunch of the statements that we just made. Specifically, and there's kind of a method to this madness, talking about where it came from because as you can see the genesis has really nothing to do with securing data or cryptology. But it led us to an ability to introduce this very light footprint, very efficient solution.

So what happened? There was a gentleman, now retired, who was a colonel in the Air Force and worked on things such as the Strategic Defense Initiative, SDI. He has some pretty interesting credentials. One of the last things he was asked to do was to figure out how to improve communication links with unmanned aerial vehicles. Very specifically, they fly in environments that have a high degree of noise, and signals sometimes don't get through. It's very important to maintain control over your UAV. So how could he improve that throughput?

The answer was the use of what he called optimized code tables. Just a little bit of detail. Basically, he used a mapping scheme, if you will, to take the input and translate it to a slightly different array and would use these tables to continuously change the mapping based on the noise environment. As the noise increased, the table would change and you'd go to a slower code rate and still get your signal through. As the noise decreased, he'd change the table again and increase the throughput.
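The noise-adaptive table idea can be sketched in a few lines. This is a hypothetical illustration, not AgilePQ's actual tables or code rates: two made-up tables map 4-bit symbols to codewords, and the noisy-channel table halves the code rate by simple repetition.

```python
# Hypothetical sketch of noise-adaptive code tables (illustrative only).
# Each table maps 4-bit input symbols to output codewords; higher-noise
# tables use longer codewords, i.e. a lower code rate.
TABLES = {
    "low_noise":  {i: format(i, "04b") for i in range(16)},      # rate 1
    "high_noise": {i: format(i, "04b") * 2 for i in range(16)},  # rate 1/2
}

def select_table(noise_level: float) -> str:
    """Switch tables as noise crosses a threshold (0.3 is assumed)."""
    return "high_noise" if noise_level > 0.3 else "low_noise"

def encode(symbols, noise_level):
    table = TABLES[select_table(noise_level)]
    return "".join(table[s] for s in symbols)

print(len(encode([1, 2, 3], 0.1)))  # 12 bits on a quiet channel
print(len(encode([1, 2, 3], 0.9)))  # 24 bits on a noisy channel
```

The predictive capability Greg describes next would amount to changing the threshold logic to act on a noise trend rather than the instantaneous level.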

Part of this, though, was a predictive capability to actually see what the trend was and then anticipate table changes or code changes in advance of the requirements. The result was, and this is actually how I was introduced to the company, I was working for one of the world's leading test and measurement companies. It used to be called Hewlett-Packard; most recently, Agilent Technologies or Keysight, if anybody's familiar. I wouldn't expect so unless you use test equipment.

But they came to my organization and wanted us to validate the claims on this improved throughput. My organization performed that project and the results were so outstanding, that's what brought me over to this organization.

Now the result of using these code tables, it turned out, was that the data as it was being transmitted was being obfuscated or hidden. As the tables changed, the data format changed, and it was very, very difficult to track. Now in this world it's a finite set of tables, so we are not using that implementation as the security solution. It is simply a representation, however, of the methodology of using code tables as a way to obfuscate data. If you change the tables rapidly enough, often enough, in no apparent sequence, then you end up with extremely high levels of obfuscation. That's the origin of AgilePQ and how we went from signaling into security. Again, if someone would like to understand that a little bit better, I'd be happy to go into it further in the Q&A session.
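As a toy illustration of the obfuscation idea, and emphatically not the DCM algorithm itself, here is a byte stream mapped through a substitution table that is re-permuted before every byte, with a shared seed standing in for a key:

```python
# Toy illustration (not the DCM algorithm): map each byte through a
# substitution table that is re-permuted per byte, so identical
# plaintext bytes typically produce different ciphertext bytes.
import random

def make_tables(seed: int):
    rng = random.Random(seed)   # shared secret seed stands in for a key
    table = list(range(256))
    while True:
        rng.shuffle(table)      # a fresh permutation for every byte
        yield table

def obfuscate(data: bytes, seed: int) -> bytes:
    tables = make_tables(seed)
    return bytes(next(tables)[b] for b in data)

def deobfuscate(data: bytes, seed: int) -> bytes:
    tables = make_tables(seed)
    out = []
    for b in data:
        out.append(next(tables).index(b))  # invert the permutation
    return bytes(out)

msg = b"AAAA"                   # a run of identical bytes
enc = obfuscate(msg, seed=42)
assert deobfuscate(enc, seed=42) == msg
```

Real security would require far more (an unpredictable, keyed permutation sequence rather than a PRNG seed), but the sketch shows why rapidly changing tables hide patterns in the data.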

I won't spend a lot of time on this slide because it's a tagline. The result of our DCM, and that's our own acronym, we call it the Digital Conversion Module, we should have defined that previously, my apologies, is to systematically deliver a solution that provides all the aspects you need for a secure communication link. This very specifically is designed for data in transit. There are other things we can do for data at rest, but we haven't applied complete effort there yet. We've focused on data in transit for now.

But the application, as you can see, covers a number of environments, and of course the grid and energy is one of those high-target environments because there's a big need. There's a lot of legacy equipment out there that has not been able to be secured, as Bill mentioned earlier, because of the nature of the devices and their resource constraints: they just don't have the ability to host a computational stack.

Let me just spend one more background slide on the name itself: Agile Post Quantum. There's a gentleman in the world of cryptography named Dr. Taher Elgamal. He's widely known as the father of SSL and has been a leading figure in public key cryptography for decades. He has been working with us and following our company. As we shared with him how we are approaching cryptography, it was actually Dr. Elgamal who said, "When you talk about your company, there are only three words that come out of your mouth: agile and post quantum."

I'll get into a little bit more detail, but at a top level, agile is in reference to the key itself. AES, if people are familiar with it, is a fixed block cipher. We can work in a block cipher mode, but we do not have to stay with a fixed block; we can change that block depending on the environment. Post quantum, again, comes from Dr. Elgamal, who stipulated that, because of the way we are approaching this, traditional methods, even with high compute power, will not be able to break the algorithm. I've got another slide later that talks a little bit about some of the validation steps that corroborate that statement from Dr. Elgamal.

In terms of the highlights, what are we bringing with this Digital Conversion Module? I think, again, as Bill mentioned and I stated just a bit ago, the approach delivers a high level of security in a way that has previously been unavailable. Therefore, things that could not be protected previously now can be. Additionally, I think Bill also mentioned, if the desire is to continue to run something that is considered a standard and follow policy, yet you want to be more secure and have something that actually is quantum computer resistant, then you could run DCM on top of AES.

Now Bill also raised the issue of latency, and obviously if you add work you are going to add time. But if security is the most important thing, then you can add security. If speed is the most important thing, then we can dial up speed and still give you very, very high security. So at a net level it's much, much more secure than AES-256. I think there's a slide later that talks about what those numbers are, but it's a significantly higher search space for the key size. Yet the code itself, the actual algorithms that run on whatever processor, is actually under 2.5 kilobytes. That's for both the dynamic footprint and the static footprint, if anybody's thinking about that.

As I mentioned, the key size is tunable. A smaller key will give you slightly less security, but even with the smallest key we use, we are still much higher than AES-128 in terms of the key search space. There'll be another slide that comes up in a bit about the energy. Again, because of the efficiency and the speed at which we can perform the encryption, we use far less energy than current standards.

We have deployed on bare metal. We have deployed on Linux and Windows and Mac. Pretty much any environment. We are system agnostic to that and we'll work with the protocols that are available. It is available as software that can be flashed into firmware and there's also a hardware solution, which is what the bump-in-the-wire is, the initial step. But it could go into silicon or into an FPGA. That's down the road. Today we have software running and we flashed it into firmware as methods for delivery.

On this slide, we are trying to address some of the concerns that have been reported in terms of SCADA vulnerabilities. The way we've looked at this, evaluated it and discussed it with other people, our Digital Conversion Module can address the vast majority of those vulnerabilities that were pointed out. You can see the bullet points on the left there, communication endpoints and channels and so forth, whereas legacy cryptography, like AES, does not address those as well.

Really, the remaining 15% have to do with human behavior and are not things that you would be able to address with software or some kind of technology solution. That's always going to be the case; there's going to be a human factor involved. Even things like authentication and authorization often include a human factor. What we do is work with the implementation environment, and those who are familiar with that environment, and design a system that fits and therefore covers those vulnerabilities.

This is a slide trying to represent the strength. Legacy cryptography, like AES, is a number theory approach, meaning the level of security is based on equations, and those equations can actually be reverse calculated if you apply enough compute power. That's what we would call number theory. Ten to the 38, as is seen there, is actually the key search space for AES-128. If you take 2 to the 128th power, you end up with 10 to the 38 in the decimal system we're used to, describing the search space for the keys. But again, it's a number theory approach and can be reverse calculated.
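The arithmetic here is easy to check:

```python
# AES-128's key space is 2**128; in decimal that number has 39 digits,
# i.e. it is on the order of 10**38, matching the figure on the slide.
keyspace = 2 ** 128
print(keyspace)            # 340282366920938463463374607431768211456
print(len(str(keyspace)))  # 39 digits -> roughly 3.4 x 10**38
```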

Coding theory is a different approach, and that's why I mentioned the origin of AgilePQ and the use of these code tables, and the systematic and random permutation of those tables. The idea of coding theory is that it is not equation based. It is a one-direction, or unidirectional, process: as you go through the code and apply the algorithm, once you have the result, you cannot take that result and in any way reverse the algorithm. It's a one-way function.

The way we implement this, we actually end up with a factorial problem. We would have 256 factorial as opposed to 2 to the power of 256, as AES-256 is. That's where we end up with an astronomically large 10 to the 506. It's kind of hard to put that into perspective; it's a number that is almost meaningless because it's so large. Just consider it many, many orders of magnitude higher in the key search space. That's what coding theory will deliver to you.
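The two search spaces can be compared directly; 256 factorial is the number of distinct orderings of a 256-entry table, and it dwarfs 2 to the 256:

```python
import math

# 2**256 (the AES-256 key space) versus 256! (all permutations of a
# 256-entry table). Comparing decimal orders of magnitude:
exp_space = 2 ** 256
fact_space = math.factorial(256)
print(len(str(exp_space)) - 1)   # 77  -> 2**256 is about 10**77
print(len(str(fact_space)) - 1)  # 506 -> 256! is about 10**506
```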

Now in terms of power, this is interesting, because many times when we've been speaking with traditional cryptologists, the rule of thumb is that to deliver higher levels of security you must increase compute power. That has been the case with AES, which was formally introduced in 2001. Its standard key sizes are 128, 192 and 256 bits, and larger variants have been proposed. The key size can continue to increase, and yes, it does require more and more compute power.

If you look at AES-128, it's actually a 10-round process to achieve that level of security; AES-256 is a 14-round process. That means the CPU is going to be working harder for longer to achieve the level of security. With coding theory, it's a single pass. Again, because it's a factorial problem, it does not require additional compute power. It has to do with permutations and combinations, and we can achieve this very high level of security with a single pass through the code and be much, much more efficient. So less power.
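The AES round counts mentioned here follow a simple formula from the FIPS 197 standard: the number of rounds is the key length in 32-bit words plus six.

```python
# AES round counts per FIPS 197: Nr = Nk + 6, where Nk is the key
# length measured in 32-bit words. So 128-bit keys take 10 rounds,
# 192-bit keys 12 rounds, and 256-bit keys 14 rounds.
def aes_rounds(key_bits: int) -> int:
    return key_bits // 32 + 6

print(aes_rounds(128))  # 10
print(aes_rounds(192))  # 12
print(aes_rounds(256))  # 14
```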

What we did to validate that is we connected a number of different devices. For this particular one, I apologize, the font is very small; I was trying to get a lot of things on the slide. But if you can't read it, the part in yellow says it's an RFduino, which is one of the smaller devices you can get. It has a total of 8K of RAM. So very, very constrained resources.

We were able to force AES-128 onto this RFduino. It took us a little while to get it to fit but we did get it to work. We connected the RFduino to a precision battery emulator piece of test equipment from my old world and used that as the power supply. What we were able to do is then just measure exactly what the power consumption is through the process of encryption.

We asked the device to encode and decode 32 kilobytes of data using AES-128 and measured how long it took and how much energy was consumed. Then we did the same thing with our own AgilePQ technology, the Digital Conversion Module. The orange trace is the APQ Digital Conversion Module. You can see it uses about 68% less energy than AES-128 to perform the same function, and of course, it uses far less time, and that goes to the latency statement that Bill made earlier.

Again, on the left there are a couple of reiterated bullet points. I think one of the things that is most significant is the size of the code itself. It can fit on these very resource constrained devices like the RFduino quite easily. There is not a device we have not been able to fit our code on. Especially if you're running something like Wi-Fi or Bluetooth, which uses its own share of memory and part of the stack, we can run alongside that no problem. Also, if you consider devices that might be operating on battery power, the idea of efficiency and speed is very attractive.

What we've done here is essentially put a diagram together, very simplistic, of a control room and different areas where a DCM, or Digital Conversion Module, could be applied. Bill had mentioned bump-in-the-wire. The idea of a bump-in-the-wire is that the end device that you want to protect, while it can communicate on the network, either is proprietary, and you cannot load any additional software, or it simply does not have a traditional compute stack. Yet it is connected to the network.

So with the bump-in-the-wire, you insert a small, inexpensive piece of hardware that has the DCM on it, plug the device into one side and the rest of the network into the other side, and that is the bump-in-the-wire. Think of a cobra eating a goat: you've got a bump in the snake. Add a little bump-in-the-wire and the device can then protect all of the data that is transmitted from that endpoint up through whatever the next step is.

So for areas, maybe PLCs that are proprietary or don't have traditional compute stacks, or other devices, you can use a bump-in-the-wire. Where you do have a traditional compute stack, and again, it could be very resource constrained such as the RFduino, you could then load directly on. Or if you choose to use a bump-in-the-wire at that point you can as well. For example, with an RTU.

In the control center, we would load directly onto whatever machine and that would be the last hop, or at least endpoint, of the communication link. Again, the key size is something that we can flex to the environment. Very specifically, if you have small blocks or even serial data, we can work in those environments and still deliver the exceptionally high levels of security.

One of the things that we've done with the bump-in-the-wire, and this is actually some IP that we've patented, is the ability to, as we call it, self-realize; auto-configure would be the more common term. When you plug it in, it automatically discovers the addressing on both sides, the client side and the network side, then announces itself to the system and can communicate with any other bump-in-the-wire or any other DCM that is already in the system.

Then, very specifically on the key size itself, one level of detail down: AES is a block cipher. With AES-128, you must send 16 bytes at a time, every time. If you have more to send, you break it up. If you have less to send, you pad. If you have a block plus a fragment, you do both. (AES-256 uses a longer key, though the block is still 16 bytes.) Either way, you are fixed to that block size.
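The fixed-block behavior described here can be made concrete with PKCS#7, a padding scheme commonly used with AES (shown purely as an illustration; the transcript doesn't specify which padding scheme is meant):

```python
# Illustration of the fixed-block constraint: PKCS#7 padding, commonly
# used with AES, rounds every message up to a multiple of the 16-byte
# block, appending 1..16 bytes that each encode the pad length.
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

print(len(pkcs7_pad(b"hi")))      # 16: 2 data bytes + 14 pad bytes
print(len(pkcs7_pad(b"x" * 20)))  # 32: a block plus a padded fragment
```

A 2-byte sensor reading thus costs a full 16-byte block on the wire, which is exactly the overhead the flexible block sizes described next avoid.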

We can work with a range of block sizes. I don't know the maximum offhand, but we've gone up to 1,500 bytes, no problem. Two hundred fifty-six bytes can be common. But we would go as small as a 4-byte block and deliver extremely high security along with that. So that allows you much more flexibility.

If you want even further flexibility, and as I understand it, with some PLCs you don't know what the transmission size is going to be. It could be a few bytes, it could be a few bits. It could be a complete register dump where you're sending hundreds of bytes. We have an implementation where we use essentially a serial method.

The data is introduced to the encoding algorithm. The size is determined. It is obfuscated or encoded and sent on in exactly that size. The key itself would flex to the size of the data for that serial data stream, be received and decoded accordingly, and stay in sequence. So you can introduce different size packets sequentially and we will work with that without having to pad or without having to fragment.

Now in each of these, when we are sending the data, we use the key once and then we throw it away. Permute it is a better term: we modify it to a different key and send the next packet. So there is only one key used per packet, and it is not used again, statistically, for a very, very long time. Also, and that goes to the last statement there, the keys are essentially self-generated in the moment, used and then discarded.
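A one-key-per-packet ratchet can be sketched as follows. This is a hypothetical illustration of the idea, not AgilePQ's actual permutation or cipher: each key is a one-way function of the previous one, used for a single packet and then discarded, with XOR standing in as a placeholder cipher.

```python
import hashlib

# Hypothetical one-key-per-packet ratchet (illustrative only): each
# packet key is a one-way permutation of the previous key, so keys are
# never reused and a captured key cannot be run backwards.
def next_key(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()  # one-way step

def process_stream(packets, key):
    out = []
    for p in packets:
        # XOR with the current key is a stand-in for the real cipher.
        out.append(bytes(a ^ b for a, b in zip(p, key)))
        key = next_key(key)              # discard key after one use
    return out

key0 = b"\x01" * 32                      # shared initial key (assumed)
enc = process_stream([b"cmd-open", b"cmd-read"], key0)
dec = process_stream(enc, key0)          # XOR is its own inverse
print(dec)  # [b'cmd-open', b'cmd-read']
```

Both sides stay in sequence by starting from the same initial key, which mirrors the "stay in sequence" requirement of the serial method described above.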

Because this is a new approach, and it is a different way of thinking, coding theory versus number theory, we have spent a great deal of time validating this solution. For example, that first one, Keysight Technologies: that's where I came from, and we did some validation steps there.

We worked with the Ponemon Institute, who actually was fronting one of the national laboratories, and that national lab wanted to remain nameless. They were actually a little bit frustrated with us because they could get nowhere with our data. They actually accused us of doing unnatural things to the data. Suffice it to say, they were not able to break in and discover anything.

We contracted with the University of New South Wales. Last year they were the world champions of some hackathon hack attempts and _____ and they tried a chosen plaintext attack and got nowhere. _______32 different _____ and the outputs were entirely different. So there were no collisions, no repetitions in that.

We also worked with another world-renowned cryptographer, Russell Impagliazzo; his name is not listed there, but he's at the University of California, San Diego. What we did with him is we gave him both the algorithms as well as a mathematical expression of the algorithms. He and a colleague spent about four months evaluating the system, the approach and the cryptography. What they were trying to do was determine where the leak of information would be, or how you would approach an attack to try to discern the key. Their response, in quotes, was "no known attack surface."

Additionally, we've done some other proof of concept. One of the labs from a large networking company we've worked with has been evaluating the code and they actually wrote some papers on how they would integrate it with their own management software. The University of Nebraska Omaha actually was one of the early evaluators of the technology and there is a professor of cryptography at UNO who is now our chief technology officer, Dr. Ken Dick. We took it to him for evaluation. He was skeptical at first and then discovered the abilities and joined the company, much like I did.

Incidentally, on an ongoing basis he will provide the encrypted text from a DCM encoding scheme to his PhD candidate students and allow them to try to decipher it. He said he'll give them 100 extra credit points for anybody that gets anywhere. Of course, no one has yet. We anticipate no one will. But that's an ongoing evaluation that's happened over the last three years.

We are working with another one of the national labs right now, very specifically in the energy world, and we'll be working with them on getting a statement of fact.

Then as Doug mentioned at the very beginning, we have a very close relationship with Microsoft where they have actually integrated our code into their Azure cloud. Specifically, in the protocol gateway. Their interest is all these devices that people want to deploy for IoT but were not deploying because they were not comfortable without a secure link. We've introduced something that allows them to ingest data from these resource-constrained devices directly from the endpoint into Azure, with no requirement of gateways or anything in between. That's something that they are very interested in. We're working with some of their customers and doing workshops with their customers and with their technical groups rolling this out, as Doug mentioned, this month.

There's quite a bit of activity that we've done to try to validate the solution. Without having been in the world for three years, what else can we do? As time goes forward people will use it and it will get its own additional validation. But in the meantime, we've worked with industry experts and universities and so forth to demonstrate that yes, this is actually, we actually do what we say we can do. At this point I'm going to pass it back to Bill.

>>Bill: Yeah, thanks Greg. My cleanup job here now is, I told you what Greg would be telling you. He told you it in excellent detail. I'll just try to wrap it up here. One point I'd make, I started my career as a member of the tech staff at Bell Labs and so I always thought in ones and zeros and so forth. I think we understand that AES-128 has two to the 128 combinations, and that equals, as Greg told us, about 10 to the 38.

Basically, what that would be is brute force trying every combination if an attack was underway. Then with the magic of the factorial approach with the DCM system, you reach 10 to the 506 power. That is an unimaginably large number and as these tests have shown, you don't even bump into things. It's almost like going into the universe and trying to hit something. There's so much empty space that that's where you end up most of the time. That's the concept of no collisions.
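Bill's arithmetic here can be checked directly: 2^128 has about 39 digits (roughly 10^38), and 10^506 is what you get from 256 factorial, which is one plausible reading of the factorial approach (the exact DCM construction is not public, so the 256-entry table below is an illustrative assumption):

```python
import math

aes_keys = 2 ** 128                # AES-128 key space
table_perms = math.factorial(256)  # permutations of a 256-entry table

# Order of magnitude = number of decimal digits minus one
print(len(str(aes_keys)) - 1)      # 38  -> about 10**38
print(len(str(table_perms)) - 1)   # 506 -> about 10**506
```

Permutation counts grow factorially, which is why a table-permutation key space dwarfs a fixed-length bit-string key space.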

That plus the potential to tune to a small footprint, especially out at endpoints and sensors from legacy systems and so forth, and then the tunability that comes with that, is a really neat feature. The kind that says you can use a Cadillac where you need a Cadillac and a MINI Cooper where you need a MINI Cooper.

The power consumption, this is not going to affect the world's consumption of energy but it's real important for where you need that energy. So having a low draw in some of the places where we should be adding protection is a big addition here.

Then of course the vetting is well underway. I think the Microsoft Azure breakthrough is public evidence there. There is always a dilemma in this area: you can't be too public about a lot of the elements here. I hope we can turn now to a discussion.

One other really interesting coincidence here is today is the day that the Department of Energy issued its second installment of the Quadrennial Energy Review. That is a document you probably all want to take a look at. It focuses on transformation of the nation's electric system and a key part of that deals with security and the interdependency of so many systems with the electric power system. That becomes important not only about the Internet of Things elements that we spoke to briefly, but also then how we keep our infrastructure secure at this time.

 Erfan, do you have any questions that people have sent already? Or let's encourage people to get the conversation going.

>>Erfan: Thank you very much. Yes, we have several questions that have been posted while the presentation was going. Let me start by asking a question about key management because I didn't see any mention of that in the presentation. How are keys managed for this scheme, if there are keys? Especially when we are looking at field equipment, lots of field equipment, and you want to give access to different pieces of information to different types of people out in the field. What happens?

I know that you said a key cannot be compromised computationally. But through social engineering and phishing schemes you could have unauthorized people have access to keys. Talk to us a little bit about key management and how you'd address this challenge of advanced persistent threat through social engineering and phishing that can get access to keys.

>>Greg: I'll be happy to answer that and it's a good question. There was only kind of a brief mention, and as you said, it was more along the computational lines. Really, there's a separation between the keys themselves and initialization, or authentication and authorization. From the keys' perspective, you are just not going to get them because it's just too hard. There's too big of a search space.

The question then becomes how do we get to the point where we are actually exchanging data using these keys, and the keys are these optimized code tables? The focus shifts to, all right, the initial secret exchange. People traditionally use asymmetric cryptography for initial secret exchange, and we can work with any form of traditional asymmetric cryptography, like PKI, public key/private key, Diffie–Hellman and things like that.
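As a point of reference for the traditional approach Greg mentions, a Diffie–Hellman exchange can be sketched in a few lines (the prime below is a toy choice for illustration; real deployments use much larger standardized groups):

```python
import secrets

p = 2 ** 127 - 1   # a Mersenne prime; far too small for real use
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice transmits A
B = pow(g, b, p)   # Bob transmits B

# Each side combines its own secret with the other's public value
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True: same secret, never transmitted
```

The modular exponentiations here are exactly the kind of computation that is expensive on a resource-constrained endpoint.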

The challenge is though on, again, these resource constrained devices that don't have a lot of the computational power. Asking them to perform some of these calculations is a multi-day process, quite literally, for the processor. It's just infeasible.

Thankfully, just in the last year a couple of algorithms were introduced: one of them is called NewHope, and the other one is called Frodo. Both of them are quantum-secure, lattice-based, ring learning with errors schemes. That's a whole lot of words, but that's the description of the algorithms. If it's akin to anything, it would be akin to Diffie–Hellman, but it's not; it's much smaller, it requires much less computational power, and is a quantum approximation as opposed to a numerical calculation.

These are algorithms that were tailored for small devices. We actually trimmed them down further and have made them available for these resource constrained devices. We have the smallest devices working with Frodo and NewHope as the initial secret exchange. That gets us the secure channel. You do initial secret exchange. We actually go through a multistep process before we get to the actual data tables that are used for steady-state communication.
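As a rough illustration of the "learning with errors" idea behind schemes like NewHope and Frodo (this toy sketch is neither algorithm, uses made-up tiny parameters, and omits the reconciliation step that turns approximate agreement into exact shared bits):

```python
import random

random.seed(1)
q, n = 12289, 8   # toy modulus and dimension
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]  # public matrix

def small():
    # Small secret/error vectors; the "errors" hide the secret
    return [random.randint(-2, 2) for _ in range(n)]

s, e = small(), small()     # Alice's secret and error
s2, e2 = small(), small()   # Bob's secret and error

# Alice publishes b = A s + e; Bob publishes b2 = A^T s2 + e2
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]
b2 = [(sum(A[j][i] * s2[j] for j in range(n)) + e2[i]) % q for i in range(n)]

# Each side computes s2^T A s plus only a small error term
k_alice = sum(b2[i] * s[i] for i in range(n)) % q
k_bob = sum(b[i] * s2[i] for i in range(n)) % q

# The two values agree up to a small additive error, which a real
# scheme's reconciliation step would round away into identical key bits
diff = min((k_alice - k_bob) % q, (k_bob - k_alice) % q)
print(diff < q // 8)  # True
```

Notice the work is just small-integer multiply-adds, which is why this family suits constrained devices better than big-number exponentiation.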

Which then brings us back to some of the things that you were just mentioning with phishing campaigns and so forth, and that's where the human factor comes into play. That's where we would really need to work with an implementation environment and what are the appropriate methods for authentication. Now as a client am I really talking to my bank? Or to the server that I want to be talking to? Or is someone spoofing that and phishing.

Or the other side. If I am the server and someone introduces a device, how do I determine if that's an authentic device before I authorize them? Now the authorization typically is something that is embedded within the system, and again we would work with the implementation to make sure that that link is there so that those, I'm missing the word right now in my head but basically the matrix or the database that says okay, this person or this device has the authority to perform these actions. We would simply interact with that.

We are not going to create our own. We are going to work with industries that already exist for that. In general, we will use industry standards where it makes sense and then we'll diverge from that where we have something that's much more helpful to offer and where improvement is needed. Such as the security itself or the size or the efficiency.

But the authentication piece is something that is going to be perennial. It's always there. The human factor is always there. So we would need to work with the implementation environment to determine, okay, are you going to use a three-factor or two-factor authentication that actually requires human intervention? We actually were talking with a company the other day that has a fingerprinting capability for devices. You would have a database that recognizes it; it would be akin to a serial number but it goes beyond that. You wouldn't be able to spoof it. Does that answer the question sufficiently?

>>Erfan: Well I think you said what needed to be said for me to invite you to NREL to deploy your technology here on some real equipment and run some real use cases.

>>Greg: Okay.

>>Erfan: I'm inviting you to do so. Because you are right for it now. I know you have mentioned about a national lab and the national SCADA testbed and I think that's great that you are pursuing that direction. But here at NREL with our energy systems integration directorate, we have all the pieces of a utility. All the way from the ADMS down to the SCADA, down to DER, actual equipment. We have power hardware-in-the-loop capability over here.

So if you want to deploy your stuff, you can show it in all its incarnations that you showed in your presentation. From bump-in-the-wire to embedded in the servers themselves. We could do some very interesting use cases. I think we should have a discussion off-line about that and I think that there could be some very good case studies that we could jointly publish in this area.

Because what you have done by talking about the realities of AES, which is an industry standard, and then showing that you can do it in one third of the time and a fraction of the energy, is really disruptive. So what needs to happen in order to develop credibility, in addition to having scientists and engineers out in the industry saying "this is really cool," is running real practical use cases. So I would invite you to do that.

>>Greg: That is an absolutely fabulous invitation and we accept.

>>Erfan: Okay, wonderful.

>>Bill: Thank you very much, Erfan. That's fantastic.

>>Erfan: Yeah. The next, so the question's from online. The first one is a clarification. In one place you talked about patents and international ECP applications. What does ECP mean?

>>Greg: Pause, because I know I have the acronym in the back of my head somewhere but I'm not coming up with it. Basically, the US patents we are all familiar with, and they have numbers and so forth; the international patents are what fall into that umbrella of ECP, and I do not recall what the acronym stands for. Basically it's allowing the patents to expand beyond the US borders. Coverage for the patent.

>>Erfan: All right, very good. The next one is, Joseph Bryce asks, "So is this encryption or merely encoding?"

>>Greg: Well that's an interesting question. Let me answer it with this. The purpose of secure communications is actually obfuscation. It is, how do I protect my conversation? Encryption is simply one technique that is used for obfuscation. It's not the other way around actually. The purpose is obfuscation. What are the techniques that can be used to achieve that? Encryption being kind of a general term.

We are calling it encoding, and I've said that several times simply because when we actually went to the US government to talk about export control they said, "Hmm, this doesn't fit in any of the current categories." So they called it nonstandard encryption. Now I'm not offended if someone calls it encryption because you could. I mean you're taking data – the purpose – let's see, let me back up one more level.

There's diffusion and confusion. You want to spread out the randomness to something that would look like a Gaussian distribution. You also want to modify, I think the goal is, 50% of the bits: for any single-bit change on the input you should have a 50% change on the output. We have done the measurements and that's exactly what we do.
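The 50% avalanche criterion Greg describes is easy to measure empirically. This sketch uses SHA-256 purely as a stand-in (the DCM encoding itself is not public): flip one input bit at a time and count how many of the 256 output bits change.

```python
import hashlib

def bit_flips(data: bytes, bit: int) -> int:
    """Flip one input bit and count differing output bits of SHA-256."""
    flipped = bytearray(data)
    flipped[bit // 8] ^= 1 << (bit % 8)
    h1 = int.from_bytes(hashlib.sha256(data).digest(), "big")
    h2 = int.from_bytes(hashlib.sha256(bytes(flipped)).digest(), "big")
    return bin(h1 ^ h2).count("1")   # Hamming distance, out of 256 bits

msg = b"sensor reading: 42.7 kW"
changes = [bit_flips(msg, i) for i in range(len(msg) * 8)]
avg = sum(changes) / len(changes)
print(f"average output bits changed: {avg:.1f} of 256")  # close to 128, ~50%
```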

I hope that answered the question. The purpose is to protect the data. The methodology is what we call a code theory or algorithmic approach. If you want to call it encryption, I'm fine with that.

>>Erfan: Yes. I think the key thing that you mentioned was obfuscation. Because the essence of confidentiality is that you provide data to those who are authorized users of that data. Obfuscation is the way to protect that data from access by unauthorized people. The techniques are agnostic to the purpose. The purpose is obfuscation. I think it's really good that you don't want to fit into any kind of bin, but just talk generally about obfuscation. I think that's the more strategic approach.

>>Greg: Good, thank you.

>>Erfan: Okay. Next question is, "Would optimized code table for UAV application be something off the limit for the general public?"

>>Greg: Ah, well thanks for asking that question. We actually didn't talk anymore about that. The short answer is no, it's not off-limits. We are very interested in the public use of what we call our Signal Conditioning Module. The UAV was the origin of that. As we look at the world of IoT we are seeing quite a bit of interest, especially a manufacturing floor that has a bunch of devices. As IoT devices proliferate, especially those that are going to be operating wirelessly, the noise level is going to increase substantially.

So the idea of applying a signal conditioning module in any signaling world is completely open and we are very interested. Interestingly, the original development was on the Signal Conditioning Module. That's what I evaluated at Keysight, or HP. We actually deployed a solution in Australia, VicTrack, and then discovered that the security market was ripe and more insistent, so we kind of shelved the signal conditioning development for a bit, focused on the DCM, which is now ready for deployment, and we are actually developing the SCM again.

In a month, I believe, maybe six weeks, we'll have some of our first prototypes that could be available in a wireless world. For example, we did have a presentation to one of the large heavy haul rail organizations in the United States that expressed high interest in a signal conditioning solution. We've had Microsoft's IoT lab express interest in the SCM solution. So absolutely open. Very interesting, that discussion.

>>Erfan: That question came from Michael Shay of UC Berkeley extension. Next question is from Joe Price. He says, "Do you have any issues of key distribution for your implementation?" That's echoing my initial question.

>>Greg: Yeah, and I'll just reiterate. We actually don't distribute keys. Some of the early solutions we looked at did have that as a component and we found that too fraught with peril, and came up with this solution where there is a quantum secure initial secret exchange and then a very quick transition into key generation for that particular packet, and then discard it. So we don't have to distribute keys so we avoid that issue altogether. So key maintenance, key management, things like that.

In some enterprise environments, there are requirements to store historical use of keys. If that is required that can be achieved. It's an additional server, a key storage server if you will, that can be set up, we actually have designed that once and have that sitting in our archive. But for our general implementation we try to avoid that because of the challenges it poses.

>>Erfan: I think it would be very helpful in your collateral if you can create some cartoon diagrams for those initial steps of establishing the way by which the endpoints can decrypt or de-encode, your way of doing it. I think it's important because everybody knows standard key management. So what you just said in words, it would be really nice if there were some cartoon diagrams that showed things being transmitted out of band, if you may, through the secretive method, then validation occurs and from then on it's not needed. It would be really nice to see that.

>>Greg: Okay, that's your job.

>>Bill: Very good suggestion, thanks.

>>Erfan: Sure. Next question is from Michael Shay again. He asks, "What is the bandwidth and throughput of DCM in the bump-in-the-wire?"

>>Greg: Interesting. That's going to be entirely dependent on the hardware that you throw at it. What we have done, you can get pretty darn good throughput on some of these very inexpensive devices. We are not going for the gigabit world per se. We actually have transmitted gigabit speeds without a problem. But the opportunity really has been in the resource constrained devices. The limitation is going to be on the hardware itself, not on our algorithm. We can keep up with whatever data rates the hardware can generate.

>>Erfan: Yeah, I think a key thing is that if you use MPLS or some of the traditional ATM technologies that are still out there, you can assign a certain amount of bandwidth by application so that it is not an issue from a bandwidth perspective. Because in SCADA applications you need to be able to, for some other protection stuff, be able to go down to the 40-millisecond kind of level in response time.

I think the next question from Michael Shay talks about exactly that. Latency and bandwidth are just reciprocals of each other. The question is, "What is the latency introduced by DCM in a site SCADA loop?"

>>Greg: Great question. I'd love to discover the answer. We would need to implement on whatever solution and then measure. Suffice it to say that it's been less than any other legacy cryptographic solution.

>>Erfan: Yeah.

>>Greg: Because of the speed which we can operate the encoding.

>>Erfan: We have in our testbed here an architecture where we have created a main site which resembles a control center in a utility and then is connected via a Cisco router network to two substations. There is DNP3 running between the distribution management system and the enterprise information services in the control center, and out at the edge, between the advanced substation platform and the DER through the grid simulator, we are communicating via Modbus TCP.

We have a true SCADA system end to end and it would be really good to run, as I mentioned earlier, use cases to see what effect it has on throughput. What I find is that the farther you get to the edge, the less bandwidth you really need because the functions are very primitive. But you need that bandwidth. You can't just have a noisy network where, even though it's gigabit, at times there's just no bandwidth available. There has to be guaranteed throughput, and that is more of a network design issue than an encryption or encoding issue.

>>Greg: Yeah, and there are also things you can do if you have, what do we call them, environments where you don't have high predictability on the sequential nature of packets.

>>Erfan: Yeah.

>>Greg: Like UDP versus TCP. We have implemented a solution that does not require that high quality, if you will, of connectivity. That if you drop packets or have no idea what order they are going to be received in, that's fine. You can implement in that environment as well.

>>Erfan: Next question is, "From an independent audit or regulatory perspective, how might the security of AgilePQ be tested and proven?" This is by Mike Lanigan.

>>Greg: This might require a little clarification. My answer to that would be that's what we are trying to achieve with, for example, the national lab. To say, "Okay, here's the system. Here's what we say we do. Do you validate that we do what we say?" Also with the evaluations by some of the universities and the professors that we've spoken with.

Like I said, short of the public world hammering on this for years on end, that's, I think, the best that we can do. Get experts to look at it and validate that yes, we are doing what we say we are doing and we've tried all different kinds of approaches for compromising it and they've been unsuccessful.

>>Doug: I guess I would –

>>Bill: Let me jump in on that too. This is Bill. So having been the top cop at FERC and having FERC's audit team report to me, what I would think that question is getting at is an independent auditor might be trying to help a company determine that they are in compliance with the standards. I would think that if an auditor can do the same testing that it applied for NERC standards, like the AES-128, if you tried that attack mode and you didn't get in, that would at least be a strong indication that you are in compliance.

I think that's what it's getting at. That's not going to tell you what is the new level but it would, at least as an auditor, tell you whether the system you are testing is in compliance with the mandatory standards or not.

>>Doug: Yeah. I mean we are in the process of trying to get through standardization. We've joined Trusted Computing Group, for example, and IEC, and I have some other good conversations going on with some of the government agencies to try to go through that process. That's a three-year process, we understand. So what we are saying is, where there are requirements for security and there is no standard – for example, NIST several months ago issued a request stating that legacy cryptography is not a good solution for many of these resource constrained endpoints and we need something new.

They are acknowledging that even if you can get it to fit in some cases, like we crammed AES-128 in an RFduino, it's not a good fit. Not a practical fit someone could deploy. So we are trying to bring the ability to secure links, I guess, ahead of the standards because there's a demand and there's a need. There is a threat and if we can protect against that today, let's do it, and then simultaneously let's work through the standardization bodies.

>>Doug: Let me just jump in and add one last point about this. I think there is also, within a system, you have places where there may be standards set and places where there aren't yet really standards set, because things have not been capable previously. I think where there aren't yet standards, we are certainly offering an incremental benefit.

One of the things that a couple of utilities we spoke to liked, at least as a temporary solution, was the idea of being able to run DCM on top of the AES solution. It adds very little, almost no, latency on top of it, with the incremental benefit. So you kind of can have the best of both worlds for a time period until standards might catch up, or until comfort is at least established more.

>>Erfan: I think one additional graphic might be very helpful for the industry. We have in the electric sector, as you are familiar, four or five standards that pertain to cyber security. One of them is IEC 62351, a compendium of standards that applies to 61850. It applies to secure ICCP. Then of course there's the IEEE 1815 standard that's connected to DNP3.

Now we have the DNP3 secure authentication version of it. Then in addition to that, higher level things, like the NIST Cybersecurity Framework and then the Cybersecurity Capability Maturity Model from DOE, and finally the NERC CIP guidelines. If you want to be a little more detailed about it, the NIST 800-53 standard.

What would be really helpful is a graphic that is almost like putting a flashlight on the parts of these standards that are relevant to this technology. A lot of it would be AES, where AES is fitting today. But because of some enhanced capabilities of AgilePQ, you could maybe show compliance to additional specifications in all of these documents that AES cannot.

That will allow traditional people who are in compliance, or in maintaining infrastructure in the electric sector, to quickly understand how this will map. The proof of course is always in the lab, and you have shown plenty of candidates who are helping you do that, both theoretically as well as empirically. But these types of graphics will quickly make AgilePQ speak the same language as our industry.

>>Doug: Perfect. That's good feedback, thank you.

>>Erfan: Sure. Yeah, they are very tedious. We've spent a year and a half here at our Center in Cybersecurity mapping these standards in detail. It's like ultimate in job security. You could just say I'm a consultant for one of these standards and it's like trying to pick needles out of a haystack. But once it's mapped, it makes great sense to start seeing where the different parts of the standard apply. That's why these graphics are so valuable.

>>Doug: Mm hmm, perfect, perfect.

>>Erfan: Okay. Next question is, "Once DCMs are deployed in SCADA loops for instance, how would the user team debug the system in case of some functional breakdown?"

>>Greg: That's interesting, I'm trying to conceptualize that a little bit.

>>Bill: Greg, let me, I've heard this question, I think the same question, and it's like hey, if you put something on my system and then I'm having problems, I'm going to figure it was caused by what you did. So how do we determine if it was by what you did or not? And what do we do about it?

>>Greg: Yeah, we actually have done some implementations where we've had a toggle switch that will allow you to use AES, or toggle that off and use DCM. I'm trying to think at this kind of a multilayer, you know, what the stack looks like and diagnosing.

If you can communicate something, then your link is there, and if that link is there then you can encrypt the link and that's not going to change it, unless at that point that is what changed it. I guess I'd go back to: you can turn it off. I mean you can disengage the encryption piece and see if the link reestablishes as a diagnostic step.

>>Erfan: Right. Let me help you here a little bit. From an operational perspective, what would be really helpful is if you could create some additional syslog objects that tell what is happening in the different logical layers. So that if debugging is needed, those syslog events can go into a SIEM, S-I-E-M, which would allow a Splunk-like product to help diagnose it and do what's called root cause analysis.
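A minimal version of what Erfan describes, structured events a SIEM such as Splunk can ingest, might look like this (the field names and device identifiers are illustrative, not an AgilePQ or Splunk schema):

```python
import logging

def encoder_event(device: str, layer: str, status: str, detail: str) -> str:
    """Format a key=value event line in a style SIEM tools parse easily."""
    return f'device="{device}" layer="{layer}" status="{status}" detail="{detail}"'

logger = logging.getLogger("dcm")
logger.setLevel(logging.INFO)
# A StreamHandler keeps this example standalone; in the field you would
# also attach logging.handlers.SysLogHandler pointed at the SIEM's
# collector so the same records ship over syslog.
logger.addHandler(logging.StreamHandler())

logger.warning(encoder_event("rtu-07", "encode", "degraded",
                             "key table resync took 3 retries"))
```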

>>Greg: Okay.

>>Erfan: Think about that. Because all devices that are on the infrastructure have some kind of alarming capability and the more detailed the alarms are, the easier it is to troubleshoot. We see this from an operational perspective, that it's not sufficient for something either to be on or off. If it's on, there are many levels of degradation. So these alarms are the telltale way of telling what exactly is wrong and that's what Michael Shay is trying to get at.

>>Bill: Okay.

>>Greg: Okay. Yeah, I'm sure we have some learning to do on that, and try to work with the tools that are available so we can present information to those tools. That would be very helpful.

>>Erfan: Yeah. It's interesting, Brett Olson is coming up with an interesting pun. He says the S in IoT stands for security.

>>Doug: I like that.

>>Erfan: I just remember from the good old days of networking in the ‘90s. We used to see SNMP stood for security, not my problem.

>>Greg: Yeah.

>>Erfan: All right. Next question is from Paul Bazandec, who says, "Have you considered any relevance to block chain?"

>>Greg: That's interesting. Yes. There's a couple of things. Block chaining is really good for non-repudiation. In other words, putting an indelible record of a transaction. I think it's a fantastic solution for that. I don't know that much about it. I mean I haven't dug really deeply into it. But for everything I hear it's a very good solution for that.

In terms of a security solution, it's a little bit cumbersome. Block chaining requires a certain critical mass to be significant, you know, statistically significant. But if you get too many elements in the block then it becomes unwieldy and very slow. From that perspective, it hasn't proven to be something that's useful from a security element.

Incidentally, we were working with some of the technical people at Microsoft, and they do have blockchain as a service as one of the capabilities inside of Azure. Again, the application is very good for the record, the non-repudiation. But it is not something that is put forth as a security mechanism.

Now on the other side of that same set of words, AES: its most secure mode is called, I think, cipher block chaining, and in a sense the keys that we are permuting follow a kind of a cipher block chain mode. But from a blockchain perspective, which I think was probably the original question, it's a different use, if you will.
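For readers untangling the two terms: cipher block chaining (CBC) is a mode of operation, not a ledger. Each plaintext block is XORed with the previous ciphertext block before the block transform, so repeated plaintext never repeats in the ciphertext. This sketch shows only that chaining pattern, with a one-way hash standing in for the real (invertible) AES block cipher:

```python
import hashlib

BLOCK = 16

def e(key: bytes, block: bytes) -> bytes:
    # Toy block transform standing in for AES. It is one-way, so this
    # sketch illustrates the chaining structure only, not decryption.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes):
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    prev, out = iv, []
    for p in blocks:
        c = e(key, bytes(a ^ b for a, b in zip(p.ljust(BLOCK, b"\0"), prev)))
        out.append(c)
        prev = c   # each ciphertext block feeds the next: the "chain"
    return out

key, iv = b"k" * 16, b"\0" * 16
ct = cbc_encrypt(key, iv, b"AAAA" * 8)   # two identical plaintext blocks
print(ct[0] != ct[1])                    # True: chaining hides repetition
```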

>>Erfan: Okay. We have several more questions and very little time left. I think we may have to go five or 10 minutes over just to capture all of them. But if you guys could just keep your answers as brief as possible that would be really helpful. The next question is, "Is this AgilePQ protection inherently point-to-point? Or can it handle multicast environments, one sensor sharing data with many subscribers?"

>>Greg: Both. I mean point-to-point is very simple, very conceptual but we have devices that can handle communication with multiple downstream devices. Yeah.

>>Erfan: The next question came from John Brittonbach. Paul Duffy of Cisco asks, "AgilePQ seems focused on encryption. You also mentioned authorization. How is AgilePQ helping authorization?"

>>Greg: Inherently and directly the algorithms, that's not what they are implemented for. It's something that we would do working with the customer in the environment to implement an authorization methodology.

>>Erfan: So like a NIS or Kerberos or LDAP kind of?

>>Greg: Exactly, yeah. Work with, yeah.

>>Erfan: Okay. Michael Lanigan asks, "Where else for advanced encryption, healthcare records, national defense communications, nuclear power plant?" Is the answer all of the above?

>>Bill: All of the above.

>>Erfan: All right.

>>Bill: That's been one of the things about this technology is, it has so many wonderful applications. It's how do you prioritize? That's been something that's been a very important question we've been working through.

>>Greg: Yeah.

>>Doug: Yeah, and again if any of the listeners want to send you suggestions on priorities, Erfan, we don't need to talk about it here but we do invite people to give us that guidance.

>>Erfan: Yeah. I think one of the key things to remember is that all the standard stuff is based on best practices. But you can't have best practice if you don't have a practice. So the AgilePQ, this aspect that you are now bringing, you're in the phase of practice right now. When it becomes a best practice then it can be incorporated into a standard. And that takes 3 to 5 years.

>>Doug: Mm hmm, and that's why we are pursuing nonstandard areas first, where there's just nothing, and getting our initial inroads there. In particular, we are right on top of places where standards exist and we can just piggyback.

>>Erfan: Next question is from Paul Duffy again. He says, "Is this helping asymmetric crypto in any way?"

>>Greg: I'm going to short answer that by saying no. Asymmetric crypto is used traditionally for a single transaction. We use it for initial secret exchange. Symmetric cryptography is used typically for steady-state or data in transit, you know, continual communication. Our solution would be considered symmetric cryptography, if you will. So it's not helping it per se. We use asymmetric where it makes sense and our solution is, for all intents and purposes, symmetric.

>>Erfan: Question from Yvonne Marcotte says, "Security by obscurity is not an option. Will the algorithm's code implementation be open to community?"

>>Greg: The algorithms will essentially be public. I mean you can't hide them. As soon as they are deployed, someone's going to figure them out and reverse them, and that's fine. The algorithms are not that complex. The security lies in the keys themselves and the permutation of the keys, the fact that the key search space is so large, and the fact that you can't follow the initialization, because that first step is also quantum secure.

I've heard that statement before and okay, that's fine. The end goal, as we discussed before, is obfuscation, and cryptography is one way to do it. Coding theory, which basically expands the search space beyond what traditional cryptography can do, is certainly quite viable. But the key, no pun intended, the critical piece, is to hide that initial exchange.

>>Erfan: Very good. Next question from _____ asks, "Also, could you please comment on how the sender and receiver use the same key in an Agile environment? You mentioned each message in the exchange could use a one-time key as a one-time pad, or path I guess. Does a key exchange happen every time?"

>>Greg: The short answer to that last question is no, in terms of an exchange. There is the initial secret exchange, very specifically we call it the key exchange or the secret exchange. Then we go to a step we call the key distribution table, which is then used to wrap, if you will, the data table, which is actually the set of elements that is the key, a table being one of the elements along with some other vectors that are used. Then that data table is permuted with each subsequent send and receive.

Both sides know where they started and can follow the sequence. It is sequential from that point, per se. But again, the ability to know where you are and to follow the sequence without knowing where you started is impossible. You just can't. As the professors from UCSD said, "No known attack surfaces." There's just no way to get a toehold on that. So there's only one initial secret exchange and after that it's a known permutation in a very, very large search space.
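The flow Greg outlines, one initial secret and then a deterministic per-message permutation that both endpoints follow in lockstep, can be sketched roughly as below. The hash-based stepping is my own stand-in for illustration; it is not AgilePQ's actual key or data-table construction.

```python
import hashlib

class RollingKeyState:
    """Illustrative sketch (hash-based stand-in, NOT AgilePQ's actual
    construction): both endpoints seed the same state from the one
    initial secret exchange, then permute it identically after every
    message, so each packet gets a fresh key while nothing further
    ever crosses the wire."""

    def __init__(self, initial_secret: bytes):
        self.state = hashlib.sha256(b"seed" + initial_secret).digest()

    def next_key(self) -> bytes:
        """Return the key for the next message, then advance the state."""
        key = hashlib.sha256(self.state + b"key").digest()
        # Advance the state. Without knowing the starting point, an
        # observer cannot join or rewind the sequence.
        self.state = hashlib.sha256(self.state + b"step").digest()
        return key
```

Two endpoints constructed from the same initial secret will emit identical key sequences, which is the property that lets both sides stay in sync with no further exchange.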

>>Erfan: But since both sides know what they're doing, they can do it but nobody else is privy to information.

>>Greg: Yes. If you lose sync then you have to reestablish. So if you're operating in, say, a TCP environment and you lose the connection, then you would need to reestablish and you would go back to the initialization process. We can store the key, that initial secret, in volatile memory, very clearly volatile memory; we would not put it in nonvolatile memory unless it were in something like an Intel HSM.

So you can be quicker on re-initialization. Or if you're in a UDP type environment then we put additional data into the header, that is obviously also protected, that will tell the receiving side where to be, if you will.
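The UDP recovery Greg describes, a protected counter in the datagram header that tells a lagging receiver "where to be," might look roughly like this. The stepping function is a hypothetical hash-based stand-in for the real table permutation, used only to show the fast-forward idea.

```python
import hashlib

def step(state: bytes) -> bytes:
    """One deterministic permutation step of the shared state
    (hash-based stand-in for the real table permutation)."""
    return hashlib.sha256(state + b"step").digest()

def resync(state: bytes, local_counter: int, header_counter: int) -> bytes:
    """A datagram arrives carrying a (protected) sequence counter in its
    header. A receiver that missed packets fast-forwards its local state
    to match, instead of redoing the full initial secret exchange."""
    if header_counter < local_counter:
        # The state only permutes forward; going back means starting over
        # with the full initialization process.
        raise ValueError("cannot rewind; re-run the initialization")
    for _ in range(header_counter - local_counter):
        state = step(state)
    return state
```

This is why keeping the initial secret in (volatile) memory helps: re-initialization after a hard loss of sync is the expensive path, while the header counter makes ordinary packet loss cheap to recover from.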

>>Erfan: Okay. The next question from Paul Duffy. It's, "Has NIST performed any analysis on AgilePQ?"

>>Greg: Not yet.

>>Erfan: All right. Next question from Joe Price. He says, "Patent Cooperative Treaty. How long before computational power requirements on these products become affordable in the mainstream marketplace, including extreme temperature constraints? I think tighten up a company's vulnerability footprint before using encryption could be a focus. But I do agree that we have to work now on this type of cryptography. That's from Brent Olson."

>>Greg: That sounded more like an informative and helpful statement than a question.

>>Erfan: Yeah. Actually, it was two different parts but it looked like continuous writing. Joe Price says, "How long before computational power requirements on these products become affordable in the mainstream marketplace, including extreme temperature constraints?" Then the next one was, "I think to tighten up a company's vulnerability footprint before using encryption could be a focus. But I do agree that we have to work now on this type of cryptography. Thanks, Brent Olson." There were two different entries.

>>Greg: Got it.

>>Erfan: We'll move on. Next question is two questions for the speaker from Urshad Miller. "Since your slides seem to indicate that DCM functions like a communication gateway between two endpoints, how would you compare DCM to a similar gateway device that uses one-time pads?" That's the first question. Second question is, "Let's assume both gateways don't have resource constraints or key management issues. How do you compare DCM to what QuintessenceLabs Australia does?"

>>Greg: I have to go back and look. At one point I do believe I looked up QuintessenceLabs and I can't remember what it was. Could you read the first question though again? I didn't quite follow.

>>Erfan: Sure. Since your slides seem to indicate that DCM functions like a communications gateway between two endpoints, how would you compare DCM to a similar gateway device that uses one-time pads?

>>Greg: I guess I wouldn't really agree with the gateway. It's code that sits on the devices and operates at the socket level, and so any protocol above that, that you want to use, that's fine. Inherent within that code is the ability to have this very dynamic key that changes with each packet.

We use the term one time pad as a reference for people to understand okay, this is a key that's used once and then discarded. I'm not familiar with a gateway. I guess I'll have to go look at QuintessenceLabs before I can answer the rest of the question.

>>Erfan: Well I think the model that you have is closer to the way TLS is implemented. Rather than the gateway thing –

>>Greg: Exactly.

>>Erfan: – like we saw on the LIMB nodes project in DOE where the VPN tunnels were created between devices at the front doors.

>>Greg: Exactly. Very specifically, with Microsoft, they asked us to replace TLS because TLS, you can't get it to work on these small devices. We sit exactly parallel to TLS and what you are doing there is you're establishing a secure socket.

>>Erfan: Yes, and that was Ershod Noiur asking from Strongoff. The next question from Michael Shea is, "A-M-A-N for the cartoon diagram for the initial key management process. Aman." Okay.

>>Greg: Oh, you meant amen.

>>Erfan: Amen? Oh, I get it, it meant amen. He just wrote – because AM, A-M in Arabic means pain.

>>Greg: Oh, well it probably wasn't –

>>Erfan: I don't know whether this was deliberate. But there will be some pain involved. Next question is from Michael Shea also: "There have been unidirectional communication devices, data diodes, that could minimize invasion by malware. Would the lack of attack surface of DCM accomplish the same objective?"

>>Greg: I'm not familiar with what he described but the way, the words he used, to me –

>>Erfan: I can –

>>Greg: Go ahead.

>>Erfan: I can address this.

>>Greg: Okay.

>>Erfan: It does better than that. The reason is because you are not limiting the communication from the two directions. The purpose of data diodes is that when you are in a very highly secure environment and you want to communicate to less secure environments, they put data diodes so that nobody can respond back from the less secure to the more secure.

But you don't have any such restrictions because once you've authenticated two endpoints as being legit, talking to each other, you don't get involved in what they are saying to each other. You're just obfuscating the communication from third-party intruders, correct?

>>Greg: Yes. Very well said.

>>Erfan: Yeah, so the key thing here, no pun intended, is that what you're doing is reducing the attack surfaces, but you're not addressing the issue of more secure and less secure environments talking to each other, as in the case of IT/OT integration in the electric sector.

For those kinds of things there are hardware layer filters available, Michael, that are not necessarily data diodes. Because data diodes are very limited. What they do is while yeah, they are protecting the more secure from the less secure, sometimes the more secure needs acknowledgments back from the less secure to know that the data that they delivered was received successfully. When you don't get that acknowledgment back the liability issue is still there.

Data diodes work in very, very narrow applications. But in today's smart grid world a lot of times they will be unnecessary barriers. This is a different space; AgilePQ is not addressing that subject. But it is saying that if third-party people try to come in and intrude, the traffic is not in clear text, and it's not putting much weight on the communication. The latency is not so disruptive that the application starts timing out. It has the lightest footprint possible for the maximum obfuscation capability.

>>Bill: Yes.

>>Erfan: All right. The next question is – oh, just a disclaimer. My knowledge of AgilePQ is limited to this presentation. I just wanted to let the audience know that it is very important to keep an open mind about disruptive technologies and begin to see how they can change the way you do business. If you come with mental models on disruptive technology it won't serve this purpose. That's why I deliberately didn't read up on AgilePQ, because I didn't want to bias my mind with mental models.

Okay, next question. Michael Simmons says, "Is it unrealistic to expect a paper to be published dealing with the DCM sometime in the future?" And this is Nick Roan –

>>Greg: Oh, absolutely.

>>Erfan: Yeah.

>>Greg: Yeah. No, we would welcome that. We actually are, I'd like to say we've done it, but we haven't. There's a couple papers we'd like to write and publish, and that's something you'll see in the near future for sure. That's already something we've wanted to do, just haven't set the time aside to do it. We would like other people to do the same thing.

>>Erfan: Good. Michael Simmons is from Iprimus. Then Fallut Masreed asks, "Would the technology benefit from participating in the smart grid interoperability testing and certification manual being developed by SGIP and NIST/NEMA?" Oh, very nice. Good question.

>>Greg: That sounds like a yes.

>>Bill: Well yes. My two cents on that, Erfan, would be yes, but one of the things we are trying to do is avoid getting in logjams too. We think we are ready to take some parts of this out into the real world and we don't want the whole thing to get tied up by standards. But the smart grid interoperability efforts are certainly a good place to talk with folks. But that is part of our decision-making. I'm sure you appreciate how the standard-setting process, as you've said, is a minimum of three years.

>>Erfan: Yeah. That's why I think one leapfrog method, which I proposed a short while ago for you, is to quickly map your technology to the relevant portions of those standards and just let it be. Then start developing case studies with potential customers in different verticals. That goes a much longer way.

Because the subject matter experts that are sitting on this call are actual implementers, so rather than showing them that you've gone through detailed certification process, they would find it much more helpful if you could map it to things they know and then show through case studies how it was practically implemented so a lot of their operational questions can be answered.

>>Greg: Okay.

>>Bill: Excellent.

>>Erfan: That's a much better use of your limited resources as a small company. Now we are at the end of the Q&A. I just want to give you a minute or two to make some concluding remarks and then I'll have some future notes for the webinars that are coming up and then we'll call it a day. Go ahead.

>>Bill: Doug, do you feel up to wrapping this up?

>>Doug: Sure. Well we are incredibly grateful for the opportunity to share an overview of AgilePQ with the folks who've been so kind to join us on the webinar. We will absolutely follow up on your very kind offer to see how we might be able to collaborate in the future. I've taken a bunch of notes that will be very useful as we continue to shape how we go to market with this, and we certainly will welcome any and all feedback. The questions were just outstanding and we are very, very grateful, so thank you.

>>Erfan: And Bill and others?

>>Bill: Well yeah, I would just say we are delighted that we can pursue working with NREL on this step. That is something that was on our list to explore and we think that would be a great test, a reality check here. We'll look forward to working with you on that. Again, if anybody has a particular application and they're interested in exploring a case study implementation, we'd be delighted to talk about that. Just reach out to Erfan or go to our website at AgilePQ.com.

>>Erfan: Wonderful. Well thank you very much for your presentation. A couple of logistical things. The slide presentation will be provided to everybody. What I would request of you is if you could change the "AgilePQ proprietary and confidential" marking to something a little less restrictive. That will allow the audience to view and use the slides for their work. So you may want to put something a little less legally restrictive on that.

>>Doug: I will do that and send that over to you.

>>Erfan: Wonderful. Then the other thing –

>>Bill: Yeah, never mind the watermark.

>>Erfan: Whatever you want to do. But this particular term gets people a little nervous.

>>Bill: No, that's a good observation.

>>Erfan: Yes. The webinar recording will also be made available over the next couple of days to everybody. Our next webinar is going to be on the 27th of January. It was supposed to be on the 20th but you know what else is happening on the 20th so I don't think anyone is going to be paying much attention to a webinar on the 20th. So we'll be having our next one in three weeks.

One thing I would like to say in conclusion about this whole obfuscation. This is a very important part of data communication and as you are seeing with all of the hacks and the challenges we have for national security, that securing the data and making sure it's in the hands of only legitimate people at certain times of the day, and not all the time, is very important. Also, the granular authorization of data, as obfuscation allows, that certain groups can see certain things and not others. That is a need. It's a requirement.

Now it also has connections with data privacy. A lot of the consumer data that's coming, whether it's in healthcare with patient records, whether it is smart meter data or home energy management data, where lifestyle and medical conditions and things like that could be revealed, needs to be obfuscated.

The challenge has been that while the need is there, every implementation of it seems to tax the infrastructure so much, whether in the form of memory or processing or bandwidth. We have an opportunity here through AgilePQ with this disruptive approach, where they are using coding as opposed to traditional encryption. That single-pass kind of thing is much better than the multiple-pass methods we have from the traditional encryption technology.

I think it's really worthwhile to investigate this disruptive technology, see where it fits and what tweaks we need to make in order for it to work in our operational environment. I think that this could be a dialogue that could be ongoing and I'm welcoming AgilePQ to come and work with us here at NREL so that we can have practical demos that we can share with the industry and accelerate the adoption of this disruptive technology in the market.

Thanks again Bill, Greg and Doug for a wonderful and informative presentation. I thank all of you who stayed on overtime to listen to the presentation and the Q&A and I look forward to your participation. I will be sending an announcement about what we will be presenting on the 27th. Enjoy and have a wonderful day. At this time I'm ending the recording.

>>Bill: Thank you. Thank you so much.

>>Doug: So long everyone.

[End of Audio]