Integrity and the history of data security
From a systems perspective, security has historically been classified into three distinct tenets: confidentiality, integrity and availability (‘CIA’). Ask security professionals to describe these components and you will receive a uniform response for confidentiality and availability; integrity, however (what it means and how to address it), is much more abstract and will typically generate a far wider variety of answers. Professionals often respond with a very narrow definition of data integrity, “the accuracy and consistency of data over its entire life-cycle”, when in reality the definition of integrity is much broader, encompassing systems, processes and operations.
Figure 1 - CIA Triad
What this highlights is an underlying problem in defining the necessary security controls and best practice for integrity: to date we have seen a number of point solutions but nothing that deals effectively with integrity at scale in a transparent manner. In turn this has led to ambiguity about what integrity actually means and what a ‘minimum’ security posture should be in this regard.
We are now seeing data breaches at high-profile institutions and businesses with alarming regularity. It’s clear that existing perimeter and signature-based security controls are simply not working effectively. The recent attacks on the US Office of Personnel Management (OPM), Target and others led to a loss of confidential information, but importantly the attacks were initiated by malware that compromised the integrity of the systems it infected. Put simply, each breach was the result of an integrity attack.
We also know from recent studies that the median dwell time for breaches is over 200 days, ample time for a hacker to extract what they need and cover any traces.
The recent emissions scandal at Volkswagen was fundamentally caused by a supply chain integrity problem. It may be impossible to prevent a rogue employee from introducing malicious software when there is collusion, but if the integrity of the supply chain can be verified then that malicious software can be detected and, importantly, the who, what and how of its entry into the supply chain can be established without the need for lengthy and uncertain forensic investigations.
The actions of Edward Snowden were an operational integrity problem: he operated outside the rules of the system he was tasked to administer. If operational integrity can be monitored then action can be taken when an administrator or third party crosses the bounds of what is acceptable.
In summary, if we define integrity, in a similar fashion to the human quality, as ‘the absence of compromise’, its value for cyber security becomes clear. If you can guarantee integrity then you can guarantee the absence of compromise, and that leads to a very different security model.
This asset-focused integrity model is orthogonal to current layered security architecture, where the presumption is that the threat is predominantly external. Whilst this new model is critical to the future protection of assets, in practice we need both, with a conventional ‘onion-skin’ model acting predominantly as a filter.
Why integrity has historically been forgotten
Despite its increasing importance, integrity has largely been overlooked from a security perspective, primarily for historical reasons.
First, in the early years of the internet, security was synonymous with confidentiality of data in motion, with the presumption that features such as access control and firewalls were more than adequate for protecting data at rest. This was a natural consequence of how organisations operated, as isolated networks behind a hardened perimeter, where securing e-commerce or information exchange was seen as a matter of secure messaging.
However, the advent of cloud computing, mobile networks and the Internet of Things (‘IoT’) has severely challenged this approach. Awareness has also grown among diverse thought leaders across the security community, including researcher Bruce Schneier, NSA Director Admiral Michael S Rogers and Director of National Intelligence James Clapper, and there is now emerging consensus that system integrity, not confidentiality of data in motion, is the biggest threat in cyberspace.
As an example, consider the relative importance of confidentiality and integrity in the Internet of Things (see Figure 2).
Figure 2 - Implications of Integrity and Confidentiality
Second, integrity was widely considered to be a ‘solved problem’ after the invention of Public Key Infrastructure (‘PKI’). The impetus behind PKI was key exchange across an insecure channel, and it was quickly recognised that it could also be used for digital signatures and for verifying the integrity of system components. Although cryptographers may consider it a solved problem, in practice PKI has a number of hidden challenges that make it far from ideal for this use case. The complexity and cost of key management alone make it very challenging to implement correctly, especially for long-term data retention.
Indeed Rob Joyce, head of the NSA’s Tailored Access Operations, highlighted recently how the NSA approaches network exploitation by first targeting the credentials of system administrators. Once a credential has been compromised, an attacker has unfettered access to the internals of a network and can achieve persistence (by modifying firmware, key configuration parameters etc), which then leads to data exfiltration. PKI has other problems, not least its complexity and its reliance on trust anchors called Certificate Authorities (‘CAs’); we know from well-publicised events that certificates can be and have been exploited, which brings the whole question of trust into focus.
Using PKI as a Swiss army knife for both confidentiality and integrity is indicative of the lack of innovation around integrity technologies; until recently we simply had not found the right technology. An important insight as to why this approach fails is the observation that integrity and confidentiality are diametrically opposite problems. Consider for example a crime in the physical world: the more people who witness a crime the stronger the integrity of the evidence, yet the less confidential the evidence becomes. For integrity we want more witnesses; for confidentiality we want fewer.
Assumptions and trust anchors
Ask a security professional what tools they use to address the integrity of the systems they are tasked to protect and you will get a wide range of responses, which typically include: “We have procedures in place operated by trusted insiders to ensure the proper handling of data,” or “We encrypt our data at rest and rely on key management operated by trusted administrators.”
As already indicated, this is a fallacy. Encryption cannot solve the integrity problem: you cannot encrypt the firmware, software or configuration files running on a machine and rely on PKI to maintain state. Key management simply moves the trust anchor to a credential of the administrator of the keys, and we have to suspend healthy scepticism and trust certificates from an upstream CA. This points to the reason why modern security continues to fail at epic scale: you cannot empirically verify trust.
“Any security technology whose effectiveness can’t be empirically determined is indistinguishable from blind luck,”
said Dan Geer, security researcher.
Put another way, when building security systems, if any assumption includes trust (in keys, in human administrators) then with sufficient time you will be compromised with probability one.
This is the promise of blockchain for cyber security: if you can eliminate the need for trust then you can build security systems that don’t rely on a single authority, creating a paradigm shift in security. Instead of searching for vulnerabilities, equivalent to searching for a needle in a haystack, you can have mathematical certainty for every digital asset that constitutes the system you want to protect.
What is a blockchain?
In a nutshell a blockchain is a distributed database, shared and maintained by multiple parties (see Figure 3). Records can only be added to the database, never removed, with each new record cryptographically linked to all previous records in time. New records can only be added based on synchronous agreement or ‘distributed consensus’ of the parties maintaining the database. By cryptographically linking the records it is impossible for one party to manipulate previous records without breaking the overall consistency of the database.
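The cryptographic linking described above can be illustrated with a minimal sketch. This toy model (names and structure are illustrative, and the distributed-consensus step among multiple parties is omitted) shows the essential property: each record embeds a hash of its predecessor, so altering any past record breaks every subsequent link.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    """Hex digest of a SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

class Blockchain:
    """Minimal append-only hash chain: records can be added, never removed,
    and each new record is cryptographically linked to its predecessor."""

    def __init__(self):
        # Genesis record anchors the chain.
        genesis = {"index": 0, "prev_hash": "0" * 64, "fingerprint": None}
        self.chain = [genesis]

    def append(self, fingerprint: str) -> dict:
        prev = self.chain[-1]
        record = {
            "index": prev["index"] + 1,
            # Hash of the serialised previous record forms the link.
            "prev_hash": sha256(json.dumps(prev, sort_keys=True).encode()),
            "fingerprint": fingerprint,
        }
        self.chain.append(record)
        return record

    def is_consistent(self) -> bool:
        # Re-derive every link; any tampered record invalidates the chain.
        for prev, cur in zip(self.chain, self.chain[1:]):
            if cur["prev_hash"] != sha256(json.dumps(prev, sort_keys=True).encode()):
                return False
        return True
```

Because every party holds a copy of the chain and re-derives the links independently, one party cannot quietly rewrite history: the rewritten record no longer matches the hash embedded in its successor.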
Using the blockchain as a trust anchor
There are two key steps in using a blockchain as a trust anchor for security: registration and verification.
A system to be secured will comprise multiple components: firmware, software, configuration files, audit and event logs etc. Each component has an associated supply chain (i.e. a record of its history and provenance). As an example, think about an IoT device: each component in the device may come from a different manufacturer and will have a history of revisions associated with it. To register a component, a manufacturer generates a fingerprint (or hash value) of the component and submits that fingerprint to the network of participants, who attempt (using a process called ‘distributed consensus’) to simultaneously enter it into their local copies of the blockchain. Once the fingerprint is registered, the manufacturer is returned a signature that cryptographically links the component to an entry in the blockchain.
After a component has been registered and a signature generated, then at any point in the future the properties of the component (time of registration, integrity and identity of the registering entity) can be verified. To do this a verifier will regenerate a fingerprint of the component and use the signature as a cryptographic proof to confirm it matches the correct entry in the blockchain. A core value proposition of using a blockchain for integrity is that there are no keys to be compromised - the trust anchors in effect are a) the security of the algorithm that generates the fingerprint and b) widely witnessed evidence (the blockchain).
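The register-then-verify flow can be sketched as follows. This is a deliberately simplified model: a plain dictionary stands in for the blockchain, and the entry identifier stands in for the cryptographic signature; a real deployment would return a proof (e.g. a Merkle path) linking the fingerprint to a consensus-agreed ledger entry.

```python
import hashlib

def fingerprint(component: bytes) -> str:
    # One-way hash of the component (firmware image, config file, log, ...).
    return hashlib.sha256(component).hexdigest()

# Stand-in for the blockchain: the ledger stores fingerprints,
# never the components themselves.
ledger = {}

def register(component: bytes) -> int:
    """Registration: enter the component's fingerprint into the ledger
    and return a handle that later serves as the verification proof."""
    entry_id = len(ledger)
    ledger[entry_id] = fingerprint(component)
    return entry_id

def verify(component: bytes, entry_id: int) -> bool:
    """Verification: re-derive the fingerprint and compare it against
    the widely witnessed ledger entry. No keys or secrets involved."""
    return ledger.get(entry_id) == fingerprint(component)

firmware = b"v1.2.3 firmware image"
proof = register(firmware)
assert verify(firmware, proof)             # untouched component verifies
assert not verify(firmware + b"!", proof)  # any modification is detected
```

Note that verification uses only public information (the component and the ledger entry), which is exactly the property that removes key management from the integrity problem.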
Who do you trust?
In mathematics, a proof is based on fundamental assumptions (or axioms). In security these assumptions are called trust anchors. So what are the assumptions behind any statement on security? For a CISO trying to protect an organisation, whether a nuclear power plant or a Fortune 500 company, there will be a long list of assumptions, almost certainly including the trustworthiness of the administrators of the system and the security of the keys that they manage.
Using a blockchain to verify integrity, it is no longer necessary to maintain secrets or keys: verification is based only on publicly available information. Contrast this with PKI, where the trust anchor is the security of the keys that must be managed by the signing entity for the entire lifetime of the component. In some compliance regimes this means many years.
Experience tells us that key management is extremely hard to do well, and the major insight of blockchain-based security is that for integrity it is also completely unnecessary. Using the blockchain as a trust anchor makes evidence widely witnessed - anyone who has a copy of the blockchain can verify the absence of compromise without reliance on secrets, keys or administrator credentials.
Estonia - A case study in integrity
In 2007 the Estonian Government was the victim of what is considered the first state-sponsored cyber attack, which paralysed the government for a period of days. There were many lessons learned from this attack, not least the importance of resiliency: the ability to recover, or roll back, to a known good state. Relying on secrets to guarantee that state is a dangerous strategy.
Under the auspices of the Estonian Government a team of scientists set out to build a security technology that could eliminate the need for trusted humans or insiders in this verification process. Today the technology they developed is known as a blockchain and is a core technology and underlying security substrate for government information systems. Every healthcare record modification and access, every financial transaction and every security event in cyberspace is registered in the blockchain, producing a level of security, transparency and auditability which has never been possible before.
The existence of a well-defined ‘trusted perimeter’ is quickly being eroded, and so enterprises must adopt a philosophy of data-centric security if they are going to have any chance of maintaining or verifying integrity. Blockchain-based solutions offer a data-centric approach to deter, detect and disrupt insider threat activities.
Although 100% crime prevention is impossible, it is now possible to have 100% detection, accountability and auditability across highly complex systems. Where human motivation and behaviour must be verified in conjunction with effective security controls and integrity of systems and processes, think blockchain.

This article was first published in Cybersecurity Law & Practice.