Here is a copy of my introductory statement from the May 22, 2018 briefing where L0pht revisited its historic Senate testimony of twenty years earlier. (Supporting links are at the end.)

Good afternoon, I’m Space Rogue. Twenty years ago, out of fear of corporate retaliation through lawsuits, Space Rogue was the only name I used. Today I also use the name Cris Thomas, although not as frequently, and I work as the Global Strategy Lead for IBM’s X-Force Red, the offensive security services arm of IBM Security.

We are here today to talk about how things have changed in information security over the last twenty years. When we were here twenty years ago, a lot of people said we were a voice of reason, attempting to warn people about just how much risk was inherent in our critical systems. A lot of people in information security, or I guess we call it cyber security now (that’s one change right there), will tell you that nothing has changed: we still have issues with passwords, from password reuse to weak passwords to no passwords. We still have organizations that ignore the problems, whether through ignorance, indifference, or just greed. And we still have people who try to blame users for technological failures.

When I was here twenty years ago I touched on a lot of topics. I talked about the weaknesses in access control cards, I talked about software liability, I talked about the rise of nation state attackers, and I talked about the lack of authentication in GPS signals and the ease of jamming them. I also said that the goal of information security should not be to make something 100% secure, because I still don’t believe such a system is possible.

Twenty years ago, the possibility of cloning an access control card to gain entry into a building was a known risk, but it was considered so difficult that no one would do it, and so it was treated as acceptable. Cloning access cards today is much easier, but attackers will often just tailgate, sneaking in behind someone else, because that’s easier still.

Software liability was a concept that was seldom even thought about twenty years ago. Today, with software in all sorts of devices, from self-driving cars to medical devices, the question of liability, of who is ultimately responsible for bad code, code that can kill people, is being weighed in our courts.

Over the last twenty years nation state attackers have become the predominant threat for many organizations, something that was almost unheard of twenty years ago. The nation state threat is about more than compromising networks and endpoints; today it focuses much more on information gathering, disinformation, and just plain old propaganda. Look no further than the evening news for examples of that.

GPS has become critical to our way of life, and yet the risks remain the same. We now depend on GPS not only for aircraft navigation but for personal car navigation. GPS is also used by emergency services to locate the phone of someone who may be in distress. Military applications of GPS have seen some improvements, especially with the recent introduction of M-Code to the GPS architecture, but risks from signal spoofing and weak authentication remain.

However, it is not all doom and gloom. Some things have changed for the better.

Today we have a lot more information available to us, if we want it. We have greater ability to inventory not only our endpoints but what exactly is running on our networks. We can know with certainty what our critical data is and where it lives. We can analyze traffic with more speed and precision, at much lower cost, than we could just a few years ago. This visibility into our infrastructure is critical in identifying and eliminating risks.

Once we know what systems we have in place and where our data lives, we can prioritize our remediation efforts. We can then deploy our scarce resources effectively to make our systems more resilient. Instead of applying fixes or new defenses in some random fashion, we have the information available to us today, if we choose to gather it, to make educated decisions about how best to apply our limited resources.

And once those resources have been deployed to shore up our defenses, our ability to test our implementations, with both manual and automated tools, is continuously improving. Testing our defenses, whether through code review, penetration testing, red teaming, or other methods, is just as important as deploying those defenses in the first place. Our testing abilities, knowing what to test and how to replicate real-world attacks, get better every day.

We have learned a lot of lessons over the last twenty years. For example, we know that flat networks are not a very good idea. We know that segmentation and compartmentalization in network design, when deployed correctly, make things much more difficult for an attacker.

We have learned that using strong encryption whenever feasible is a really good idea. And while strong encryption still isn’t always easy, it’s a lot better and more prevalent than it was even ten years ago.
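As an illustration of how much easier strong encryption has become, here is a minimal sketch of authenticated symmetric encryption in Python, assuming the widely used third-party cryptography package is installed (pip install cryptography); a real deployment would load the key from a key-management system rather than generating it inline:

```python
# Authenticated symmetric encryption with Fernet (AES-128-CBC plus HMAC-SHA256).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                  # illustrative only; store keys securely
f = Fernet(key)

token = f.encrypt(b"patient record 12345")   # hypothetical sensitive data

try:
    plaintext = f.decrypt(token)             # verifies integrity before decrypting
    print(plaintext)
except InvalidToken:
    print("ciphertext was tampered with or the key is wrong")
```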

And we have learned that using multi-factor authentication whenever possible makes an attacker’s life that much more difficult. Even when the implementations of those other factors have weaknesses of their own, anything in addition to a simple password makes the attacker’s job just that much harder.
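To illustrate one common second factor: the six-digit codes produced by authenticator apps are time-based one-time passwords (RFC 6238), and a sketch of the idea fits in a few lines of standard-library Python. The secret below is a made-up demo value; real secrets are shared out-of-band, typically via a QR code:

```python
import base64, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, the common default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # moving factor: current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical demo secret; both the server and the user's device derive
# the same short-lived code from it, so a stolen password alone is not enough.
print(totp("JBSWY3DPEHPK3PXP"))
```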

While network design, encryption, and multi-factor authentication are not necessarily new things (they have all existed for a lot longer than twenty years), they have become far more widespread in today’s environment and have gone a long way toward making the world a more secure place.

The government has learned as well. Look at the NIST Cybersecurity Framework, a policy framework developed by a wide range of people involved in cyber security, from both the public and private sectors, working together. The framework helps organizations of all types assess their environment and use that knowledge to be proactive about risk management. Such advanced thinking from government about information security was pretty hard to find twenty years ago.

Our problem today isn’t so much that we don’t know how to make things more secure; it’s that we are not applying that knowledge evenly. For every organization that implements multi-factor authentication, there is another one running old, outdated, and unpatched operating systems. For every organization that is properly encrypting all its data, there is another one that isn’t and has a web-facing database vulnerable to SQL injection.
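The SQL injection half of that example has had a well-known fix for decades: parameterized queries, which ensure user input is bound as data rather than interpreted as SQL. Here is a minimal sketch using Python’s built-in sqlite3 module, with a hypothetical table and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway demo database
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"             # a classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
#   conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

# Safe pattern: the placeholder binds the input as a value, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())                      # [] -- the payload matches no row
```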

Every organization has limited resources, but today we have a lot more information available to us, information that allows us to deploy those resources appropriately. We now have greater ability to test those deployments to ensure they are installed correctly. We now have technologies and policies that make an attacker’s job much more difficult.

And so, while we can never make something one hundred percent secure, hopefully over the next twenty years we can use the knowledge we already have, and the knowledge we will gain, to create a much more secure world for everyone.

Facebook video recording of the briefing

L0pht returns to D.C., two decades after first testimony

The Cybersecurity 202: These hackers warned Congress the internet was not secure. 20 years later, their message is the same.

Time, and the Løpht, March On

Famed hacker collective reunites on Hill today

20 years on, L0pht hackers return to D.C. with dire warnings