A connected world

The DEW line was not the only place where radars were located. Another inaccessible and inconvenient location was Prince Sultan Air Base, Saudi Arabia: another long trip, more on-site maintenance, adjustment and monitoring. The sand degraded the equipment, the heat was worse, and your neighbors, not much better than a polar bear, were now camels. They spit.

That was 25-30 years ago. We now live in a connected world where
equipment in remote locations is accessible; maintaining, monitoring
and even upgrading those systems is convenient. These are the intended
consequences of living in a connected world: we use technology to
collapse time and space. The unintended consequence of a connected
world is vulnerability. We are connected, accessible and vulnerable. Bad actors can use the same technology we use to connect to exploit holes in our security and create any number of problems.

Connected – Accessible – Vulnerable

We call this environment of networks and devices in the connected world the internet of things. The devices range from water meters to drones, thermostats, remote sites, oil rigs and, yes, light bulbs. These devices are typically owned by companies, but the internet of things also includes a set of very personal devices, from cell phones to cars, credit cards and passports.

Addressing the vulnerability of connecting these devices with each other comes in two forms that contradict each other: security and privacy. Security is a group concern, best addressed by being invasive: collecting as much information as possible from every device and every user of that device to assess who may do something improper, and when. You then use all of the information you have collected to prove to others that the person has in fact done something wrong. From a personal perspective, these actions violate your privacy. Privacy is an individual concern: your goal in retaining your privacy is to avoid sharing, without your consent, your use of a device, whom you use it to communicate with, the information you store on it (passwords, biometrics, personally identifiable information) and the locations in which you use it. A person or device that maintains privacy raises concerns for security teams, who assume the person has something to hide. Additionally, the regulatory requirements for security often intrude on our perceived or actual rights to privacy, and those rights in turn impede the regulatory requirements for security.

How does this come up in our environments?

As human beings interacting with the various connected devices, we create multiple digital selves within them. Our phone may carry the contacts of friends and associates, text messages, location information and certainly whom we communicate with; it may even store passwords. Our car stores how fast we were going, where we were located at any given time and how quickly we brake. These devices contain subsets of our private and personal information. Who owns this digital version of our life, we or the device manufacturer? Do we have a right to keep the information private, or did we inadvertently give that up under the terms-of-use agreement we just clicked on? As the dissolution of the Safe Harbor agreement last fall shows, how information is treated in the United States versus Europe is quite different.

In addition, corporations do not get a pass: their connected devices have configurations, software and IP addresses that a company may wish to keep private so as not to lose trade secrets or intellectual property, or to expose a vulnerability in their organization. On the other hand, if they or their employees have access to customers’ private data, who is responsible if there is a breach? What is the company’s obligation to compensate a customer for the misuse or loss of their data? Today, companies do not bear that burden, but that could change quickly as customers get savvier and the laws catch up to the impact of security vulnerabilities.

So, we need to be mindful of the inherent tension between security and privacy as we try to address the question:
"How do we construct systems that are connected, accessible and protected?"

Discovering a tamper event

Software security teams could learn a few things about security from hardware security teams. Growing up, there were a number of times when a neighbor of mine would be yelling at the gas company’s service technician, and vice versa. Why? Someone had clipped the wire on the meter. The technician was upset because now the homeowner could modify the meter reading; the homeowner was upset because “they didn’t clip the wire.” It was often a teenager the homeowner had yelled at for cutting through the yard. The moral of the story: the technician could tell that the device had been tampered with. The unfinished part of the story for the technician: when did the tampering take place? The sooner you identify tampering, the easier it is to fix the problem. IT security teams typically don’t do as good a job of this, as you’ll see in a moment.

This is a place where the needs of the hardware and software teams converge: both need to understand when their environments have been tampered with and quickly restore those environments to a desired, trusted state. Companies need integrity in their data and systems. And to settle the dispute over who did what to the meter, you would like your system for identifying tampering to be verifiable independently of the two people in the dispute.
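One minimal way to make tamper evidence verifiable by a neutral party is to have an independent witness countersign a digest of each reading at recording time, so neither disputant can forge or alter the record on their own. The sketch below is illustrative only: the `IndependentWitness` class, the HMAC countersignature scheme, and the key handling are assumptions of this example, not a description of any particular product.

```python
import hashlib
import hmac


class IndependentWitness:
    """A hypothetical third party trusted by both sides of a dispute.

    It countersigns digests with a key neither disputant holds, so
    neither side can fabricate evidence after the fact.
    """

    def __init__(self, secret: bytes) -> None:
        self._secret = secret

    def attest(self, digest: str) -> str:
        # Countersign the digest; only the witness can produce this value.
        return hmac.new(self._secret, digest.encode(), hashlib.sha256).hexdigest()

    def verify(self, digest: str, attestation: str) -> bool:
        # Either disputant can ask the witness to confirm the record.
        return hmac.compare_digest(self.attest(digest), attestation)


# The meter reading is digested and countersigned when it is recorded.
witness = IndependentWitness(secret=b"witness-only-key")
reading = b"meter=04217 kWh"
digest = hashlib.sha256(reading).hexdigest()
receipt = witness.attest(digest)

# Later, in a dispute, the witness confirms the original reading...
assert witness.verify(digest, receipt)
# ...and rejects a tampered one.
tampered = hashlib.sha256(b"meter=03117 kWh").hexdigest()
assert not witness.verify(tampered, receipt)
```

In practice the witness role might be played by an audit service or an append-only log, but the principle is the same: the proof lives outside both parties to the dispute.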

A recent Verizon Data Breach Investigations Report (2014) included a series of charts showing how long it took organizations to identify a breach in their computer systems, in contrast to the speed with which a bad actor could compromise a company’s systems: 90% of the time, compromise took less than a day. The time for companies to identify that a bad actor had compromised their systems, in the case of insider misuse, was over a day in 70% of cases and took weeks in over 30% of cases. The real goal is to align the time it takes to detect tampering with the time bad actors take to actually tamper with the system. The time to tamper is fast; the time to identify tampering is slow. Speeding up our ability to identify tampering requires the coordination of our IT security teams and our hardware teams.

Connected – Accessible – Protected

There are a few high-level best practices that involve all the teams who provide devices and work to secure a connected internet of things.
  • You should know the proper and approved configuration of your equipment at any given point in time. If you do, you can answer the question, “Am I safe to operate my environment?” This question bedevils all who experience a data breach, from the Air Force to Sony to Target.

  • You should know the integrity of your digital assets at any given point in time. Have the contents changed since you transmitted or stored them? What is the ground truth? Addressing this question is key when customers consider moving their operations from in-house to a cloud provider, or transmit sensitive data to a remote location for, say, an over-the-air update.

  • Use your connected devices as the new DEW line. When a remote device detects tampering, based on knowing the integrity of its assets, it should report a tamper event to the IT security team. The U.S. government started using this technique in the early 2000s to great effect; we can apply it to reduce the time to identify a broader-scale attack on our systems.
 
  • You need independent proof of the integrity of your digital assets. If you are part of a larger infrastructure, or part of a software or hardware supply chain that touches a customer, there will be disputes over what took place. It does no good for you to hold proof the customer may not trust, or for the customer to hold proof that you may not trust. Your best position is to provide for independent validation of the integrity of your system; this will let you settle a dispute faster and less expensively.
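The first two practices above, knowing the approved configuration and knowing the integrity of your assets, can be sketched as a baseline-manifest comparison: hash every asset against an approved baseline and treat any difference as a tamper event. This is a minimal illustration, assuming the assets are files on disk; the function names `build_manifest` and `detect_tamper` are my own, not any particular product’s API.

```python
import hashlib
from pathlib import Path


def build_manifest(root: Path) -> dict[str, str]:
    """Record a SHA-256 digest for every file under root: the approved baseline."""
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }


def detect_tamper(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Report every asset that changed, appeared, or vanished since the baseline."""
    return sorted(
        name
        for name in baseline.keys() | current.keys()
        if baseline.get(name) != current.get(name)
    )
```

A remote device could rebuild its manifest on a schedule and report any non-empty result to the IT security team as a tamper event, which is exactly the DEW-line sensor role described above.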

With cooperation between hardware components and IT security, you can achieve faster evidence of tampering. Devices on the internet of things can also be sensors that report independently verifiable tampering with the integrity of their digital assets, whether firmware, software, configuration files or executables. This moves us toward a connected, accessible and protected world. Otherwise, if we remain vulnerable, it’s time to disconnect our devices and head back to the Arctic Circle.