Why businesses can no longer hide their keys under the doormat

Enterprises rely on encryption, blockchain, zero-trust access, distributed and multi-party strategies, and other core technologies, and for good reason. At the same time, many companies effectively hide the keys that could undermine all of these protections under a (figurative) doormat.

Strong encryption is of little use if an insider or attacker can take control of the private keys that protect the data. This vulnerability arises because keys must be processed on a server to be used. Encryption can protect bits and bytes in storage or in transit, but the moment a processor needs to operate on them, they are “in the clear” so the necessary computation can be performed. At that moment they are accessible to dishonest insiders, attackers and third parties, such as consultants, partners or even suppliers of the software or hardware components used in data center infrastructure.
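
To make the gap concrete, here is a minimal Python sketch (using the widely available cryptography library; the key and record values are illustrative) showing that encrypted data must be decrypted into ordinary process memory before a server can compute on it:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key and encrypt a record (illustrative values only).
key = Fernet.generate_key()
f = Fernet(key)
ciphertext = f.encrypt(b"account=4417; balance=1200")

# The ciphertext is safe at rest and in transit, but to compute on it,
# the server must first decrypt it.
plaintext = f.decrypt(ciphertext)  # plaintext now sits in ordinary RAM

# The computation runs on cleartext bytes -- and so can any sufficiently
# privileged insider, attacker or compromised component that can read
# this process's memory.
balance = int(plaintext.split(b"balance=")[1])
print(balance)
```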

This is the nature of encryption: it provides strong security for storage and transit, but when execution is ultimately required – and it always is at some point, for data, code or digital assets to be useful or to enable a transaction – the process meets its Achilles heel.

Private keys must be processed on a processor for their initial creation, for the encryption and decryption involved in key exchange, for digital signing and for certain aspects of key management, such as handling expired public keys. The same principle – the need for plaintext execution on a processor – applies to certain blockchain and multi-party computation (MPC) tasks. More generally still, simply executing application code or operating on encrypted data exposes them, since processors require data and code to be in the clear.
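
To illustrate, the sketch below (Python, again using the cryptography library; the message is made up) generates and uses an ECDSA private key. Both key generation and signing necessarily operate on plaintext key material in process memory:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key generation: the private scalar is produced, in the clear, in RAM.
private_key = ec.generate_private_key(ec.SECP256R1())

# Signing: the processor must use the plaintext private key to compute
# the signature; no amount of at-rest or in-transit encryption helps here.
signature = private_key.sign(b"transfer 100 units", ec.ECDSA(hashes.SHA256()))

# Verification needs only the public key, but the signing step above is
# exactly the exposure window described here: anyone who can read this
# process's memory at that moment can steal the key.
private_key.public_key().verify(
    signature, b"transfer 100 units", ec.ECDSA(hashes.SHA256())
)
```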

CIOs need to ask questions of their teams to assess this potential exposure and understand the risk, as well as put plans in place to address it.

Fortunately, recent breakthroughs close this encryption gap while maintaining full private key protection. Major processor vendors have added security hardware to their advanced microprocessors that prevents unauthorized access to code or data during execution, and to whatever remains in memory caches afterwards. These chips are now found in most servers, especially those used by public cloud providers; the approach is commonly known as confidential computing.
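
The operating pattern is broadly similar across vendors: the hardware measures the code it loads into an isolated enclave, signs that measurement (remote attestation), and a key service releases secrets only to a recognized measurement. The self-contained Python sketch below merely simulates this flow; every name in it is illustrative, and an HMAC with a random “hardware” key stands in for the CPU's real attestation signature:

```python
import hashlib
import hmac
import os

# Toy simulation of the confidential computing pattern. Nothing here is a
# real vendor API; it only mirrors the measure-attest-release sequence.
HARDWARE_ROOT_KEY = os.urandom(32)  # stands in for a fused-in CPU secret

def measure(code: bytes) -> bytes:
    # The enclave hardware hashes exactly the code it loaded.
    return hashlib.sha256(code).digest()

def attest(code: bytes) -> tuple[bytes, bytes]:
    # The CPU signs the measurement; HMAC stands in for that signature.
    m = measure(code)
    return m, hmac.new(HARDWARE_ROOT_KEY, m, hashlib.sha256).digest()

def release_key(measurement: bytes, proof: bytes, expected: bytes) -> bytes:
    # The key service verifies the proof and the measurement before
    # releasing the key into the enclave -- never to the host OS,
    # hypervisor or cloud operator.
    assert hmac.compare_digest(
        proof, hmac.new(HARDWARE_ROOT_KEY, measurement, hashlib.sha256).digest()
    )
    assert measurement == expected
    return os.urandom(32)  # the released application key

trusted_code = b"signed application build v1.2"
m, proof = attest(trusted_code)
app_key = release_key(m, proof, expected=measure(trusted_code))
```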

This “secure enclave” technology bridges the encryption gap and protects private keys, but it has required changes to code and IT processes that can involve a significant amount of technical work. It is also specific to a particular cloud provider (meaning it must be modified for use in other clouds) and complicates future changes to code or business processes. Fortunately, newer “middle-of-the-road” technology eliminates the need for such modifications and potentially offers multi-cloud portability with unlimited scale. In other words, the technical drawbacks have been virtually eliminated.

CIOs should ask their managers or management teams how private keys are protected and what exposure gap they might face during processing. The same goes for the execution of data and code that is otherwise encrypted at rest and in motion. What gap or exposure does the data or code potentially face?

Companies running proprietary application code with a secret key should consider how that key is protected and what kind of risk it might face. If the applications involve AI or machine learning, the algorithms the company has developed are likely extremely valuable and sensitive.

How are they secured at runtime? Even algorithm testing, often done with MPC so that real data (perhaps from customers or partners) can be used, can expose data, code or both. What safeguards are in place to secure them? Blockchain workloads involve the same runtime exposure – how is that handled?
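
For readers unfamiliar with MPC, the sketch below shows its simplest building block, additive secret sharing over a prime field (illustrative parameters, not a production protocol). Individual shares reveal nothing, and parties can even add shared values without seeing them – but note the final step: wherever a result is reconstructed so it can be used, it is once again plaintext on some processor:

```python
import secrets

P = 2**61 - 1  # prime modulus; illustrative, not a production choice

def share(value: int, n_parties: int) -> list[int]:
    # Split a secret into n additive shares that sum to it mod P.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two inputs (say, a customer's value and a partner's) are each split
# among three parties; no single share reveals anything on its own.
a_shares = share(1200, 3)
b_shares = share(800, 3)

# Each party adds its shares locally, computing on data it cannot read.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]

# But the moment the result is reconstructed to be used, it is plaintext
# on whatever processor performs this step -- the runtime exposure
# discussed above.
print(reconstruct(sum_shares))  # 2000
```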

The execution gap is not limited to the public cloud. Private cloud and on-premises data centers face the same issues. CIOs need to ask themselves if and how the gap is being mitigated. It may be counterintuitive, but the public cloud, with the use of confidential computing, may be the safest place to run code, algorithms, and data. If an organization is not currently using the public cloud (for fear of potential exposure of regulated or proprietary data), it may be time to reconsider its use.

Avoidance of the public cloud is often due to control and access concerns. With private clouds and on-premises data centers, organizations typically know, and can control, who has access to what through combinations of physical, network and application security, logging and monitoring, and various forms of zero-trust access. The concern with the public cloud has been how to prevent access by unauthorized insiders, third parties, the various third-party hardware and software components involved, and even outside attackers. Now, with confidential computing, these concerns can be eliminated entirely.

CIOs need to challenge the popular notion that encryption is entirely secure, and even the guarantees of blockchain and MPC. With so much depending on private keys, leaders need to ensure that they are protected using the best practices and technology available.
