In A Security Future for IoT, Part 1, I discussed how IoT changes the nature of the security game; in part 2 of this series, I share insights on how we can make that radical change occur. I believe our current computing model is lacking in four key foundational areas: we lack identity and the trust it brings; everything (wanted and unwanted) shares the same communication space; we are so fixated on what we have that we can never dispose of it; and we build fragile environments, creating a fear of change.
Let’s start with the most important area for IoT – the concept of trust. With 50 billion devices in the near future, and maybe as many as 10 trillion in ten years, trust is essential. Building that trust starts with a foundation of identity, which is sorely lacking in today’s computing space. Unfortunately, the current model for identity and trust on the Internet is fundamentally flawed, and this presents the first IoT opportunity. We have designed much of our trust around the idea that we trust data sources (e.g. servers), but we place little trust in end points (e.g. browsers, desktops, mobile devices, etc.). Part of the reason is the complexity of our trust system – certificate authorities, embedded browser certificates, and cross authentications. Our current trust system was built around the desire to use anonymous end points, so we never worried about the end point – hence, a universal, required mechanism for end point trust was never built.
Imagine trust in the modern era, where our devices can self-define, self-distribute, and self-manage – which sounds a lot like the IoT world – while maintaining the same level of confidence we have in our “trusted” servers. Good distributed models already exist in places like blockchains, BitTorrent, and Bitcoin. Using those models, distributed trust becomes viable no matter how complex your system, be it 2-way, 10-way, or 10-million-way. If every device is required to be part of a common distributed trust, and its identity, security, and validity can be reviewed hundreds of different ways by millions of other sources, “being a fake” at the end point becomes many orders of magnitude more difficult – unlike our current model, where it requires little effort. We see this already in the concept of contract blockchains.
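To make the idea concrete, here is a minimal sketch of quorum-based distributed trust, where a device is believed only if enough independent peers hold a matching identity record. The device names, key material, and ledger structure are invented for illustration; a real system would use signed records and a consensus protocol rather than plain dictionaries.

```python
import hashlib

# Toy model of distributed identity verification (illustrative only):
# each peer keeps its own copy of a ledger mapping device IDs to
# identity fingerprints; a device is trusted only if a quorum of
# independent peers agree on its fingerprint.

def fingerprint(identity_material: str) -> str:
    """Derive a stable fingerprint from a device's identity material."""
    return hashlib.sha256(identity_material.encode()).hexdigest()

def quorum_trusts(device_id: str, claimed: str,
                  peer_ledgers: list[dict], quorum: int) -> bool:
    """Trust the claim only if at least `quorum` peers hold a matching record."""
    matches = sum(1 for ledger in peer_ledgers
                  if ledger.get(device_id) == claimed)
    return matches >= quorum

# Three peers independently recorded the same identity for "sensor-42"
# (hypothetical device and key names).
record = fingerprint("sensor-42:factory-key-material")
peers = [{"sensor-42": record} for _ in range(3)]

assert quorum_trusts("sensor-42", record, peers, quorum=2)   # honest device
assert not quorum_trusts("sensor-42", fingerprint("fake"),
                         peers, quorum=2)                    # impostor fails
```

The point of the sketch is the asymmetry: an impostor must now forge matching records across many independent peers, not just present a plausible face to one.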
The second area of IoT opportunity for security is mutability. At the desktop and on mobile, we have the concept of “personal ownership.” When we “own” a device, we are more concerned with preserving our customizations than with keeping operations (and security) simple and straightforward. Hence, the closer we get to personal usage of a device, the harder it is to allow that device to be mutable. The same is true of our applications and data centers – they become fragile. We put a lot of effort into making them work just so, and we fear change.
How often does rebooting an application require 100 steps done in a very specific order to bring it back after a power failure? How long does a restore from backup actually take? How little does it take to undo our infrastructure? From an operations standpoint, this is a common nightmare. From a security point of view, it is even worse. How do we enforce good practices when we are handcuffed by operational fragility? The answer is mutability. Instead of a static fear of change in our applications and architectures, let’s build architectures that assume change and motion. Can we even do this? Yes, because IoT is still simple – and simplicity directly supports mutability.
While we are still at the beginning stages of IoT, let’s build in the idea that anything connected to our IoT environments can be wiped and replaced at a moment’s notice. Let’s do this from the beginning of building our applications, and operationalize it. In fact, why don’t we deliberately, every so often – monthly, weekly, or daily – blow away half of the devices, their software, the firmware, the backend, and everything but the data, and reinitialize from secured original images? Can we do this? Yes, but first we need to learn our lessons, and it’s all possible while IoT is still simple. Once we have a universal ability to dispose of IoT devices and applications, think of the implications. We are currently so tied to keeping the past running that we build not for obsolescence but to run forever. When we change that model, operational complexity gets simplified, adding and removing functions becomes straightforward, upgrades are safe, and security changes are easy because we no longer fear change. We can install patches within minutes of their release because, if a patch doesn’t work, we simply wipe and fall back as standard operating procedure. Applications and operating systems can be upgraded continuously, technologies can be swapped out, we trust our backups, and for IoT we will have worked out how to manage a cast of tens of thousands at a distance.
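The “blow away half the fleet, keep only the data” routine above can be sketched in a few lines. Everything here is hypothetical – the golden-image tag, device records, and field names are invented – but it shows the shape of an operationalized rotation: selection is random, the reimage preserves only the data, and the rest is replaced wholesale from a secured source.

```python
import random

# Hypothetical sketch of operationalized disposal: on a schedule,
# reimage a random half of the fleet from a secured "golden" image,
# keeping only the data partition.

GOLDEN_IMAGE = "registry.example.com/iot/base:signed-v7"  # assumed image tag

def reimage(device: dict) -> dict:
    """Replace everything except the data with a fresh golden image."""
    return {"id": device["id"], "image": GOLDEN_IMAGE, "data": device["data"]}

def rotate_half(fleet: list[dict], rng: random.Random) -> list[dict]:
    """Wipe and reinitialize a randomly chosen half of the fleet."""
    chosen = set(rng.sample(range(len(fleet)), k=len(fleet) // 2))
    return [reimage(d) if i in chosen else d for i, d in enumerate(fleet)]

fleet = [{"id": f"dev-{i}", "image": "unknown-drift", "data": f"readings-{i}"}
         for i in range(10)]
fleet = rotate_half(fleet, random.Random(0))

assert sum(d["image"] == GOLDEN_IMAGE for d in fleet) == 5   # half refreshed
assert all(d["data"].startswith("readings-") for d in fleet) # data preserved
```

Run monthly, weekly, or daily, a routine like this turns “restore from backup” from a feared one-off event into a rehearsed, boring operation.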
Mutability is linked to disposability, which leads us to our third opportunity in IoT security. If we have infrastructures whose failure we do not fear, then we can freely dispose of what we have built – yes, even the hardware. In security, remediation is probably the hardest task. How do we get back to where we were? How do we fix our problems? How do we do it quickly? The difference is that “things” are not desktops or servers or SaaS or any of those traditional, human-oriented systems. We don’t have the attachment of personalization – or at least we should not. If we build our “things” with the assumption that they are indeed disposable, how many of those issues disappear? What is the response when something happens? “Rip and replace.”
We already do this on our mobile devices. If we don’t like an app, we toss it for another. And upgrades? They just happen without our involvement, and suddenly we have new capabilities. Again, as with mutability, if we operationalize our ability to replace (something we can do transparently in the mobile environment), old software, vulnerabilities, and security issues become short-lived. This flips the economics on hackers. Today, hackers can expend enormous resources to create their toolkits and techniques once and deploy them millions of times, so they have the advantage. When we can wipe and replace just as cheaply and just as often, we have the advantage.
How do we get there? Look at the Apple iOS ecosystem, where more than 95% of the infrastructure (one billion active distributed devices) is no more than 18 months old, and over 65% of devices have every security patch released in the last 3 months. That represents 200 million or more OS upgrades per month on a continuous basis. Now, imagine a world where every IoT device was running current software. Getting there would cost more upfront, but across development, operations, and security, we need to ask ourselves: how cost effective could we become?
Finally, let’s consider the Internet itself. It was designed to be a barrier-free highway connecting different networks (an inter-network, hence “Internet”). And that’s great, except we now have universal reachability. Simply put, everyone has the ability to reach out and touch your systems, maybe in ways you don’t even use yourself. Any knowledgeable security person will tell you that remote connectivity is almost always a prerequisite for an attack. Yes, other attack types are possible, but in sheer volume of bad things happening, remote connectivity is employed millions of times more frequently than all other vectors combined.
Considering that connectivity is a baseline requirement for cyber-nastiness, our fourth opportunity is to remove most, if not all, of the ability to remotely connect. For example, what if we could put IoT devices onto their own self-defined networks, living inside our public networks but talking only to themselves? What if they could talk only within policy-defined networks of their own? What if those networks were cryptographically enforced, built on well-defined identities in an opt-in, self-defining way that used distributed trust and already-existing protocols like TLS?
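Since the article names TLS as the existing protocol to build on, here is a minimal sketch of the enforcement idea using Python's standard `ssl` module: mutual TLS, where a connection is accepted only from a peer presenting a certificate signed by the group's own CA. The certificate file paths are hypothetical placeholders (commented out so the sketch stands alone).

```python
import ssl

# Sketch of cryptographically enforced membership using standard TLS:
# every device presents a certificate, and connections are accepted only
# from peers whose certs chain to the group's own CA.

def make_group_context() -> ssl.SSLContext:
    """Server-side context that rejects any peer without a valid group cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # mutual TLS: the client must prove identity
    ctx.check_hostname = False            # identity comes from the group CA, not DNS names
    # Hypothetical paths - a real device would load its own identity and
    # trust only the group's private CA, not the public CA bundle:
    # ctx.load_cert_chain("device.crt", "device.key")
    # ctx.load_verify_locations("group-ca.pem")
    return ctx

ctx = make_group_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

Because the context trusts only the group's own CA, a device outside the closed network cannot even complete a handshake – from its point of view, the network simply is not reachable.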
Without a connection point, many hacking techniques are simply unavailable. And what happens if an attacker does find a way in? In many cases, the compromised device’s identity would break, and it would be dropped out of the closed network, severing communications and stopping the spread of the attack.
What happens when we put this all together? We end up with identity and trust everywhere. We know when compromise occurs because it disrupts our trust, and we can choose to automatically evict trust failures or isolate them until they are replaced or refreshed. We dispose and replace so that our attack surface stays minimal, and by continuously updating and replacing, we limit the scope of an attack or stop it outright. We have devices completely separated from the “wild west” of our internal and public networks. Combined with operationalized disposal, what cannot be reached cannot be attacked. Bottom line: we make hacking expensive while keeping our repeated processes cheap, tilting the cost equation back in our favor.