Good morning, ladies and gentlemen. On behalf of Guardtime, as its CTO, I would like to thank the organizers of this event and Asian Insurance Review for the opportunity to speak at this inaugural cyber liability conference and for recognizing the importance of cyber risk in the world today.

My remarks today will focus on cyber security from an industry perspective centered on cyber integrity: what these terms mean, and why integrity is important. Cyber integrity – the integrity of data, their storage repositories, and the networks that serve them – is in my opinion the most fundamentally important cornerstone of cyber security and will completely transform the market over the next decade.

First, I'd like to differentiate the definitions of trust versus truth. These definitions are essential to ascertaining whether or not data and networks have integrity.

Trust is defined as "firm belief in the reliability or ability of someone or something". Trusting a network, or the data stored with an enterprise or cloud service provider, is nonsense without the basic instrumentation and metrics to develop formal situational awareness of how reliable these assets really are and what they are doing with the data, services, and applications they are hosting.

Truth, on the other hand, can be measured – it means undeniable, independent proof that can be established forensically in a court of law. Truth, not trust, is essential for any network, enterprise, or data storage asset: its operation and its interactions with the data being hosted should be independently verifiable with forensic proof that holds up in a court of law.

This integrity instrumentation and the evidence it affords should also be able to work at the storage scales required for all the data being generated on public, industrial and private networks – by all estimates a conservative 35 zettabytes by 2020. This amount of information is equivalent to every person on earth reading 1258 newspapers every day.

Further setting the stage, I recently read that Accenture estimates the cost of ineffective cyber security to be three trillion USD by 2020. If you are to believe the estimates, even Accenture's analysis may be understated, given Ericsson's recent report highlighting that almost 50 billion new devices will be connected to the Internet by 2020 – also known as the Internet of Things (IoT). Ericsson's perspective is unique: close to half the world's traffic traverses their networks and equipment. Ericsson also highlights, in a recent blog on Truth not Trust – the Importance of Data Integrity in the Networked Society, that…

Every day another breach in online security is reported. On a macro level, companies are spending approximately 46 billion per year to secure their operations, with 46 percent of them saying they will increase spending going forward. When breaches do happen, the average payout and cost of insurance is about USD 1 million, topping out at around USD 3.5 million when you factor in pending claims and self-insured retentions. IT security budgets have also grown from 4-6 percent of total project budget and spend over the past two years and are on track to reach 8-10 percent.

On a micro level, cyber breach costs are deeply personal – individuals are losing control of their personal information, companies are losing control of their core assets and IPR, and some employees are even losing their jobs – just look at the CEO of Target. The larger cost to the economy is that trust in the digital society has plummeted.

Businesses and the economy need a predictable and deterministic environment to grow, where risk can be quantified and managed alongside investment and return. The World Economic Forum believes the lack of functioning cyber security threatens as much as USD 3 trillion of non-realized potential growth during this decade. If we are investing more but performing worse, something is fundamentally wrong with the approach we are taking as a society to cyber security.

How these new devices will communicate, and with which systems and enterprises, is not considered in Accenture's estimate. From my perspective, this growth has a couple of impacts. These new devices further abstract humans from the machines that use them, as the devices will increasingly communicate with each other using automated information rules – rules that are written, monitored, and enforced by other machines.

To frame this, machine-generated data will exceed human-generated data by 2016, and most of this data will be stored in the cloud. This is important to understand: with this abstraction, the truth is that few analysis firms even understand how to weigh the security impact of such explosive growth, or the integrity of this data – how to verify its authenticity and track where it has been stored and the transformations it undergoes, while still preserving privacy.

Consider that in 2014, more than 3 times as many devices as subscribers were added to carrier networks in the US market alone.

It is my unique perspective and experience that the code these machines run will ALWAYS be subject to compromise. Exploiters, attackers, and hackers are a creative bunch. Applications and software always have problems – especially when the velocity of delivering those assets has increased to provide new services daily. These vulnerabilities will inevitably result in exploitation.

The industry must accept this and always assume that at some point, even the most 'secure' infrastructures will be compromised by people (also known as the insider threat) or by insecure code. For this body: assume compromise.

In fact, much of the interaction I have – whether with US, EU, and Asian governments, their defense establishments, or multinational corporations – is about how to actually bring transparency into how this data is being stored and manipulated, and how it can be trusted given the information rules outlined in outsourced service provider contracts.

Much of my professional interaction in the energy, telecommunications, and banking sectors is with CIOs reluctant to embrace cloud migration and services due to cyber security concerns, a perceived loss of control over their IP and data, and a lack of situational awareness of where their data is, how it is being used and consumed, and the information rules that govern their critical digital assets.

My response to them is that without data integrity, “your concerns are well founded”.

The industry has been seeking this holy grail of cyber security for some time – undeniable truth (not trust) in the data, whose authenticity, identity, and proof of creation time can be verified independently of the hosting or service provider.

One of the dominant paradigms in cyber security research is the idea of “provable security”. This is the notion that to have confidence in the security of a security protocol it is necessary to have a mathematically rigorous theorem that establishes a conditional guarantee of security given certain assumptions.

The challenge, and the reason why provable security has remained in the ivory towers of academia, is that it is completely useless if the assumptions require trusted humans or the security of cryptographic keys – secrets can be exposed and people can be compromised.

Simply put, the provability doesn’t matter much in the real world if there are other attacks that can defeat the security much more effectively.

There is, however, one area of security where provability can be meaningfully applied, namely integrity. When Whit Diffie proposed public key cryptography, he wasn't thinking about integrity – he was thinking about key exchange and how two parties could communicate securely across an insecure channel – the foundation for today's e-commerce and identity systems.

At the time it was perfectly natural to think about extending PKI to verify the integrity of messages, and the consequence is that today, when security professionals have an integrity problem, they naturally turn to PKI – it has been the only tool in the toolshed.

The challenge, however, is that PKI requires secrets and trusted parties – which can't be proven and will always remain the weakest link in security. Managing keys is hard, and even the best security companies can't do it successfully.

For integrity, I would argue, it is also completely unnecessary. By eliminating the need for keys and using widely witnessed consensus, it is possible to have provable security – and that is a really big deal for CISOs who want to secure their networks and data.

It is the difference between saying "I know my network has integrity and I can mathematically prove it" and "our security is based on key management and trusting system administrators".
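For the technologists in the room, here is a minimal sketch of the principle – in Python, with illustrative names of my own; this is not Guardtime's production scheme. Each asset is hashed, the hashes are aggregated into a hash tree, and only the root needs to be widely witnessed (published). Anyone can later verify an asset against that witnessed root with a short hash chain – no keys, no secrets, nothing to trust.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 is used here purely as an illustrative hash function."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Aggregate leaf hashes into a hash tree; return all levels, root last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                      # duplicate the last node on odd levels
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def proof_for(levels, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2 == 0))  # (sibling, we are the left node)
        index //= 2
    return path

def verify(leaf_hash, path, root):
    """Recompute the root from the leaf and its hash chain; no keys involved."""
    acc = leaf_hash
    for sibling, we_are_left in path:
        acc = h(acc + sibling) if we_are_left else h(sibling + acc)
    return acc == root

# Usage: hash three assets, widely publish the root, verify one asset later.
assets = [b"audit record 1", b"config snapshot", b"patient file 42"]
levels = build_tree([h(a) for a in assets])
root = levels[-1][0]                          # this is what gets widely witnessed
path = proof_for(levels, 2)
assert verify(h(b"patient file 42"), path, root)
assert not verify(h(b"patient file 42 (tampered)"), path, root)
```

The design point is that the verifier needs only the asset, a few sibling hashes, and the widely witnessed root – no certificate chain, no key custodian, no trusted administrator.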

As we have seen with the NSA, and more recently with Target and AT&T, the latter is not a winning strategy.

Truth under these definitions is now possible, definitive accountability can be realized and recovered from the service provider and the enterprise, and with this truth you have the ability to identify indemnification responsibility in the event of compromise. Being able to do this in real time means tamper detection of the network and of the data's state from its creation baseline.

This basic level of instrumentation can now provide proof of data creation, authenticity (is it the same?), and identity, and it works at the scales required for cloud computing, with evidence portable across all regions and across open, closed, and virtually closed boundaries.

As an example, at the end of 2013 the Cloud Security Alliance (CSA) published its annual report, "The Notorious Nine: Cloud Computing's Top Threats in 2013", describing the shift from 'server to service-based thinking'.

The top threats outlined in the report include data breaches, data loss, account or service hijacking, insecure interfaces and APIs, denial of service, malicious insiders, abuse of cloud services, insufficient due diligence, and shared technology vulnerabilities.

The CSA says… 'the most significant security risk associated with cloud computing is the tendency to bypass information technology (IT) departments and information officers. Although shifting to cloud technologies exclusively is affordable and fast, doing so undermines important business-level security policies, processes, and best practices – and the ability to quickly perform incident response and understand liability in the event of a breach.

In the absence of these standards, businesses are vulnerable to security breaches that can quickly erase any gains made by the switch to cloud’.

Handing over competition-sensitive information, Personally Identifiable Information (PII), or related Intellectual Property to a Cloud Service Provider (CSP) is indeed an exercise in extreme trust without the ability to independently (and in real time) ensure that cloud service providers are acting in accordance with purported security guarantees, controls, and the information rules provided by their contracts.

I would argue that in 2014, in light of the CSA assessment, our analysis of threats to Cloud Service Providers, and governments' perceived nefarious interactions with the telecommunications, data storage, social media, and search industries, it has been widely witnessed that trust in these providers and their insiders is dead.

For these multinationals, outsourcing business trust to the largely unregulated cloud service provider industry today (regardless of the contract guarantee) ultimately comes down to belief: belief in the constraints on that provider's trusted insiders (and indeed any government) when interacting with your data, in the integrity of purported technical security controls, and in adherence to best practices and the associated policies and processes.

The CSA and world standards bodies have pioneered a number of policies and best-practice tenets to manage cloud computing and data risks and security threats. But these best-practice frameworks for businesses, organizations, and governments are merely risk management frameworks; they do not address the very fundamental integrity problems, or the technology solutions, that should be associated with cloud models.

We should begin to change the dialog and emphasize…

CIOs should make the assumption that any outsourced infrastructure will at some point be compromised (if it is not already).

You can't outsource trust, given the complexities offered today or the people operating those resources on your behalf.

Also assume that your own internal infrastructure is already compromised, or soon will be.

The more important and valuable your intangible assets are (your intellectual property, customer and supplier base, etc.), the more likely you are to be compromised and to become – no pun intended – a TARGET.

The siren song has become ‘we implement best practices!!’ to assuage concerns.

Our response has always been, ‘so prove it in a way I can independently verify the integrity of your systems any time I want’ – go beyond compliance. Prove it.

…Prove, in an independently verifiable way, the integrity of my data and of the information rules that govern it, and that my service contract is being enforced – and let me do it whenever I want, in real time.

With these assumptions, guaranteeing the integrity of these machines, of the data being generated both by them and by the user, and of the rules used to enforce the information policies and contracts is paramount.

With integrity guarantees, true cyber security for an organization is possible.

How did I come to this conclusion? I made a trip to Estonia as an advisor to senior US cyber policy makers to inspect how integrity was guaranteed for the rules defined for the electronic services used by citizens and by the public and private sectors there. I specifically wanted to understand how a country could offer and mix such a rich set of citizen services in health, finance, energy, and e-governance across a common platform while still preserving data differentiation, privacy, accountability, and transparency into service interactions.

Instead of relying solely on secrets, Estonia implemented integrity and attribution, while importantly preserving privacy. Estonia's citizen identity management systems are backstopped by data integrity generation and monitoring. Every critical security function, audit record, and critical data repository imparts integrity evidence, with continuous monitoring to ensure tamper detection and resilience without reliance on stored secrets or trust anchors like system administrators. In fact, they have achieved mutual auditability between the state, the citizen, and the participating private sector.

As background, Estonia is perhaps the world's most advanced cyber-enabled nation state. After 1992, they had the opportunity to build the world's first post-Internet society from the ground up. To do so, they wanted to give all citizens the ability to interact with the government and the private sector electronically. All banking transactions in Estonia are done electronically; so is voting; so is all legislation.

Unlike the rest of the world, the government, the private sector, and the citizens who interact on these platforms can mutually audit each other's behavior. How do they do this? By imparting integrity instrumentation into literally every digital asset that is generated, and into the event log files that audit the interactions generated by the responsible systems and administrators.

The integrity components have been designed in such a way that the context of time becomes immutable evidence for all signed data assets. It is mathematically impossible to manipulate data that was generated in the past without the integrity evidence changing. Change detection for these assets, and the ability to mutually audit those interactions, is the foundation of the government's, the private sector's, and citizens' ability to actually trust their data in these systems.
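To illustrate that principle – and this is only an illustration under my own assumptions, not the Estonian implementation itself – consider a hash-linked log: each entry commits to everything recorded before it, so rewriting the past breaks every subsequent link and the tampering is immediately detectable.

```python
import hashlib, json, time

def link(prev_hash: str, record: dict) -> str:
    """Each entry's hash commits to the previous hash, chaining history together."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log, record):
    """Add a record to the log, stamping it with time and the chained hash."""
    prev = log[-1]["hash"] if log else "GENESIS"
    record = dict(record, timestamp=time.time())
    log.append({"record": record, "hash": link(prev, record)})

def audit(log):
    """Recompute every link; report the first entry whose evidence no longer matches."""
    prev = "GENESIS"
    for i, entry in enumerate(log):
        if link(prev, entry["record"]) != entry["hash"]:
            return i                      # tamper detected at this position
        prev = entry["hash"]
    return None                           # chain intact

log = []
append(log, {"event": "blood group set to A+", "patient": 42, "by": "dr_tamm"})
append(log, {"event": "prescription issued", "patient": 42, "by": "dr_tamm"})

assert audit(log) is None                 # untouched history verifies cleanly
log[0]["record"]["event"] = "blood group set to O-"   # retroactive manipulation
assert audit(log) == 0                    # the change is detected immediately
```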

President Toomas Hendrik Ilves of Estonia recently emphasized in Berlin that it is important to protect people from malevolent alterations of databases. “The European Union member states must ensure the safety of the digital data of their citizens,” the Estonian Head of State said.

He went on to say, “Online security consists of three factors: privacy or confidentiality; data integrity and accessibility. Recent scandals have brought the issue of digital privacy into public debates; however, violating the integrity of data is even more dangerous, threatening lives and infrastructure essential to the functioning of society – we can for example imagine the damage that can be done by changing people’s blood groups or the cycles of traffic lights by simple changes of the data in databases,” said the president of Estonia.

According to President Ilves, European Union member states must ensure a safe system for their citizens that can fend off those kinds of malevolent attacks and protect online identity and data integrity. "I would like to use the Estonian X-road as a good example, not necessarily as a technical solution but rather as a way of thinking, a concept that data must be protected by a specific identification system. A crucial factor for all secure and functional e-services is secure online identity and integrity."

Transparent truth, not trust, is the Estonian mantra. With this technology, significant friction is introduced for those attempting to lie or cover their tracks, or those who want to change the evidence of an event in the past. This architecture and the integrity technologies that underpin it preserve the historical provenance of all interactions – and indeed serve to preserve Estonian history electronically for the long term.

So for this industry weighing and insuring cyber risks…

How can you achieve truth – the ability to calculate, in real time, the integrity of the interfaces, applications, and service layers responsible for the data? For the insurance industry to back these assets, evidence of integrity in the organization's data, and in the information rules governing the systems that manage that data, should be required – and it should be independently verifiable without having to trust the organization hosting those assets.

There must be transparency and accountability if indemnification is to be identified when a mishap or compromise occurs – who was responsible and can the evidence be irrefutably proven in a court?

Today, how can you possibly trust a service provider that says, 'it's not our fault, we are not liable', when there is no evidence to confirm or contradict the statement, and what little evidence might be presented is entirely shaped from the perspective of that service provider? Auditors provide little confidence, as they also rely on the same evidence, which can be erased without attribution by the responsible party.

Data integrity technologies should be a fundamental requirement for anyone that cares about their data or insuring those assets.

Requiring data integrity instrumentation brings accountability to the service provider by highlighting the complete chain of custody and digital provenance of service provider interactions, which in turn identifies responsibility for compromises, tampering, malicious insider activity, or even misconfiguration.

With this ability, truth becomes widely witnessed evidence without disclosing the content of the underlying data (ensuring privacy); the evidence is portable and independently verifiable across infrastructures, and it travels with the data.

Cyber integrity means that customers, auditors, data-brokers, and investigators can independently answer the critical question:

"What changed, what was eliminated, when did it occur, and what was responsible?"
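As a minimal sketch of how that question gets answered – assuming per-asset fingerprints were anchored earlier, and using hypothetical names and paths of my own – compare today's fingerprints against the witnessed baseline; the "when" and "who" then come from the anchored, hash-linked audit trail described earlier. Note that only fingerprints are compared, so the content itself stays private.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """A content hash stands in for the asset; the data itself is never disclosed."""
    return hashlib.sha256(content).hexdigest()

def diff(baseline: dict, current: dict) -> dict:
    """Compare per-asset fingerprints against an earlier, widely witnessed baseline."""
    return {
        "changed":    [k for k in baseline if k in current and current[k] != baseline[k]],
        "eliminated": [k for k in baseline if k not in current],
        "added":      [k for k in current if k not in baseline],
    }

# Baseline anchored yesterday (hypothetical asset names; fingerprints only).
baseline = {
    "hr/payroll.db":  fingerprint(b"payroll v1"),
    "legal/contract": fingerprint(b"contract v1"),
    "ops/audit.log":  fingerprint(b"audit v1"),
}
# State observed right now.
current = {
    "hr/payroll.db":  fingerprint(b"payroll v1"),
    "legal/contract": fingerprint(b"contract v2 (altered)"),
}

print(diff(baseline, current))
# {'changed': ['legal/contract'], 'eliminated': ['ops/audit.log'], 'added': []}
```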

Integrity technologies such as those you'll learn about at this conference provide this immutable proof and are the instrumentation necessary to verify Cloud Service Provider activities for the data they store, as well as the integrity of the services they support.

As my good friend Doctor Jason Hoffman of Ericsson once said, ‘You can’t be perfect at preventing crime, but you can be perfect at detecting crime’.

What I discovered is that Estonian scientists have built a near perfect detection technology that allows the entire planet to verify EVERY event in cyberspace in such a way that the PRIVACY of each event is maintained but the integrity of the events cannot be denied. These integrity technologies hold the promise of providing complete transparency – it becomes impossible for governments, corporations, or users to lie – everyone can verify the integrity of events independently from those presenting them.

I would further offer this: while we admire the efforts of the social networking, search, and cloud industries to increase transparency by issuing transparency reports, how can the insurance industry trust the message if you can't trust the messenger?

What Estonia has implemented at the digital level is TRUST BUT VERIFY – independent verification of everything that happens in cyberspace.

Requiring data integrity provides the foundational instrumentation for this detection mechanism and a new dawn for cyber security: visibility (and accountability) into operations, bringing truth to network and data interactions. Instead of increasing security perimeter complexity and trusting more secrets, this is a fundamental paradigm shift in security – instrumentation from the inside out, at the data level, with real-time integrity reporting for critical organizational applications and assets.

This baseline instrumentation will allow the insurance industry and the organizations it backs to better identify and visualize threats and changes to important intangible assets and data – copying and transfer, deletion, and the manipulation of assets – in real time. Integrity instrumentation fundamentally gives you the ability to tag, track, and locate your assets in cyberspace. A GPS for data.

Cyber integrity proof allows the consumer, service provider, insurer, or data broker to finally and independently trust the provenance and integrity of any network interaction, as well as of the digital assets they are managing and/or consuming.

Addressing integrity challenges at the scale required for cloud computing holds the greatest promise for overcoming industry hesitance to move to the cloud and for enabling insurers to appropriately back those assets, while providing a better way to identify malicious insider behavior and threats that take advantage of code and application vulnerabilities not imagined by the software vendor.

There is friction in adopting integrity technologies. Why? Cloud Service Providers have been hesitant, stonewalling integrity verification and transparency technologies. The reason? Compromise or exploitation of your data may or may not leave them indemnified for losses, and it has direct effects on the insurance and reinsurance of both the customer and their assets.

If they (or you) can't prove what was lost or compromised on their watch and how it occurred – if the evidence doesn't exist – they can claim they are not liable. "Prove it."

Moreover, such a paradigm shift puts an entire industry of audit and compliance personnel at risk. Why do you need these auditors if you have real-time insight into the assets you are backing or trying to protect?

Target still claims they are not responsible for the loss of over 100 million credit cards under their care, despite firing their CEO. That's quite a statement, and it reveals an industry willing to ignore consequences in the race to provide and implement low-cost competitive services and automated transaction systems.

I'll leave you with a quote from John Oltsik's vision for next-generation cyber security in Network World. In his article he focuses on four areas of data security. The implications of his point on data tagging and integrity should be carefully considered:

"Sensitive data identity tagging… If some piece of data is classified as sensitive, it should be tagged as such so every host- and network-based security enforcement point knows it's sensitive and can then enforce security policies accordingly. Are you listening to this requirement, Adobe and Microsoft? Cisco is making progress… but it would be incredibly helpful if the industry got together and agreed upon more universal standard data-tagging schemas and protocols. Think of the possible ramifications here as they are pretty powerful. With a standard tag, every DLP device could easily detect and block an email when a junior administrator uses Gmail to send a copy of the HR database to her home computer. Employees could use Box, Dropbox, or any other file-sharing services because the files themselves have their own 'firewalls'."

This is where the cyber security industry and the world's standards organizations should be going – deploying integrity instrumentation for data, which proves authenticity, time, and identity. Data and networks that can, in fact, now be aware of their own rules.
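To make the tagging idea concrete – as a hypothetical sketch only, not an agreed industry schema – a tag might carry the classification, owner, creation time, and content hash of the asset, so that any enforcement point can check both the rule and the asset's integrity locally, without a central lookup.

```python
import hashlib, json, time

def make_tag(content: bytes, classification: str, owner: str) -> dict:
    """A hypothetical data tag: the asset carries its own rules and integrity proof."""
    return {
        "classification": classification,          # e.g. "sensitive" or "public"
        "owner": owner,
        "created_at": time.time(),
        "content_hash": hashlib.sha256(content).hexdigest(),
    }

def enforcement_point_allows(tag: dict, content: bytes, destination: str) -> bool:
    """Any host- or network-based control can evaluate the tag on its own."""
    intact = hashlib.sha256(content).hexdigest() == tag["content_hash"]
    outbound_to_personal = destination.endswith("gmail.com")
    if not intact:
        return False                               # asset no longer matches its tag
    if tag["classification"] == "sensitive" and outbound_to_personal:
        return False                               # block the junior admin's email
    return True

hr_db = b"name,salary\nalice,100000\n"
tag = make_tag(hr_db, "sensitive", "hr-department")
print(json.dumps(tag, indent=2))
print(enforcement_point_allows(tag, hr_db, "junior.admin@gmail.com"))  # False
print(enforcement_point_allows(tag, hr_db, "payroll.internal.corp"))   # True
```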

I'll conclude my remarks today by highlighting these concepts, and I am open to any questions at this time… Thank you.