Trusted Hardware and Software Cryptography

Context

At a high level, trusted hardware, such as Intel SGX and other trusted execution environments (TEEs), provides a small, secure enclave within a larger, potentially untrusted machine. Although these enclaves have limited computational power and memory compared to the host system, they offer stronger security guarantees: they can keep data confidential even from the host and ensure integrity by preventing malicious computations. The power of TEEs has recently been highlighted by the launch of BuilderNet (https://buildernet.org), which crucially relies on these enclaves for its security.

Open Research Questions

The space at the intersection of trusted hardware and software cryptography is large and fruitful, with many potentially interesting research questions. Here we name just a few of them:
 
Providing TEEs with large external memory:
TEEs themselves usually have limited memory, but are embedded within a larger host system with large amounts of memory. Ideally, the TEE could use the host's memory for its computations and storage needs, without leaking private information. A promising technology for enabling this is known as oblivious RAM (ORAM). It allows a TEE to read from and write to an encrypted data array (stored outside the TEE on the untrusted host system) such that the access pattern does not leak any information about which data locations are being read or written. Hiding these access patterns ensures that no data is leaked unintentionally from inside the TEE to the outside host through the accessed memory locations.
Unfortunately, using ORAM does not come for free. Accessing data stored inside an ORAM requires performing a series of dummy accesses along with the real access, as in the toy sketch below. This overhead affects the efficiency of memory accesses, and thus a natural question is how to design concretely efficient ORAM schemes that allow for low-latency memory accesses with as few dummy queries as possible.
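To make the idea concrete, here is a minimal sketch of the simplest possible oblivious construction, a linear-scan ORAM: every logical access physically touches every encrypted block exactly once, so the host observes an identical access pattern no matter which address the TEE actually wanted. The class name is our own, and the third-party `cryptography` package is used purely as a stand-in for randomized, authenticated encryption; none of this reflects a real TEE toolchain.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

class LinearScanORAM:
    """Toy client logic that would run inside the TEE; `self.store` models
    the encrypted array kept in untrusted host memory."""

    def __init__(self, num_blocks, block=b"\x00" * 16):
        self.key = Fernet(Fernet.generate_key())  # the key never leaves the TEE
        # The host only ever sees this list of ciphertexts.
        self.store = [self.key.encrypt(block) for _ in range(num_blocks)]

    def access(self, op, index, data=b""):
        """Read or write one logical block while touching every physical slot,
        so the host cannot tell which index was accessed."""
        result = b""
        for i in range(len(self.store)):
            plaintext = self.key.decrypt(self.store[i])  # real or dummy access
            if i == index:
                result = plaintext
                if op == "write":
                    plaintext = data
            # Re-encrypt every slot: Fernet encryption is randomized, so the
            # dummy re-writes are indistinguishable from the real write.
            self.store[i] = self.key.encrypt(plaintext)
        return result

oram = LinearScanORAM(num_blocks=8)
oram.access("write", 3, b"secret value")
assert oram.access("read", 3) == b"secret value"
```

The linear scan costs O(n) per access, which is precisely the overhead that practical schemes such as Path ORAM reduce to polylogarithmic; the sketch only illustrates why dummy accesses and randomized re-encryption together hide the access pattern.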
 
 
Secure computation inside TEEs:
Ideally, TEEs provide both privacy and authenticity, meaning that no data inside a TEE is ever leaked and that any message sent by a TEE is well-formed according to whatever protocol is being executed. In practice, it seems that violating privacy is easier than violating authenticity, i.e., TEEs may leak private information to a sophisticated attacker, but they are unlikely to ever send a malformed message.
To strengthen the privacy guarantees provided by solutions relying on trusted hardware, one can try to rely on multiple TEEs, aiming for guarantees of the form ``as long as at least one TEE is not broken, the stored data remains private''. To simultaneously preserve the ability of the TEEs to compute on the private data they store, one can use a technology known as secure multiparty computation.
Given TEEs which store some secret-shared data, secure computation allows one to compute any function of the data without ever revealing it to any proper subset of the TEEs. Since we assume that the TEEs provide authenticity at all points in time, we can assume that no TEE will ever send a malformed message.
In the secure computation literature, the above setting is captured under the name of secure computation with semi-honest corruptions. If one additionally considers the setting where some TEEs may go offline or become unresponsive mid-computation, then one falls under the umbrella of secure computation with fail-stop adversaries.
In the above context, it would be interesting to explore which secure computation protocols from the academic literature provide the best concrete efficiency in real-world settings. This applies to general-purpose computations among multiple TEEs, but also to more specialized functions that the TEEs would like to compute on their data; an example could be knapsack approximations for packing as many transactions into a block as possible. A sketch of the underlying secret-sharing idea follows.
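As a minimal illustration of the secret-sharing idea, the sketch below implements additive secret sharing over a prime field: n parties (here standing in for TEEs) jointly hold a value such that any n-1 shares look uniformly random, and linear functions like a sum can be computed without any party seeing another's input. The field modulus, the inputs, and the function are our own illustrative assumptions.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all arithmetic is mod P

def share(value, n):
    """Split `value` into n additive shares; any n-1 of them are uniform."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three TEEs, each holding one private input.
inputs = [17, 42, 99]
n = len(inputs)

# Each TEE secret-shares its input and sends one share to every other TEE.
all_shares = [share(x, n) for x in inputs]  # all_shares[i][j]: TEE i's share for TEE j
held = [[all_shares[i][j] for i in range(n)] for j in range(n)]

# Addition is purely local: each TEE sums the shares it holds, obtaining
# an additive share of the sum of all inputs, with nothing revealed.
sum_shares = [sum(h) % P for h in held]
assert reconstruct(sum_shares) == sum(inputs) % P
```

Linear functions come essentially for free in this model; multiplying shared values requires interaction between the TEEs (e.g., via Beaver multiplication triples), which is exactly where the concrete efficiency of different protocols starts to matter.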
 
Bootstrapping trust in the host system:
On a purely intuitive level, one can expect an inverse correlation between the size and complexity of a TEE and the security it provides: the smaller the hardware enclave is, the more likely we can fully secure it, and the larger it is, the more challenging this task becomes.
Naturally, one may ask the somewhat abstract question: ``what is the smallest useful TEE?''
Taking this question to the extreme, one could ask whether we can let the untrusted host system perform the bulk of computations on encrypted data and merely use the TEE for verifying that these computations are performed correctly. This line of thinking brings us to proof systems (popularized by SNARKs and STARKs in recent years).
The TEE could encrypt its data, the host system could perform computations on the encrypted data, and the host could use SNARKs/STARKs to prove to the TEE that the computation was performed correctly. While this approach works in principle, it would likely incur a prohibitively large computational overhead on the host system. Naturally, one may wonder whether one can design more efficient proof systems for this special purpose. Specifically, protocols for delegated computation and interactive private-coin proof systems may be good starting points, since interaction between a host system and its TEE is cheap and we are therefore not concerned with proofs being interactive. A toy example of such an interactive proof follows.
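As a concrete toy example of an interactive proof for delegated computation, the sketch below runs the classic sum-check protocol: the host (prover) claims the sum of a polynomial g over the Boolean hypercube, and the TEE (verifier) checks the claim while evaluating g itself only once, at a random point. The polynomial g, the field modulus, and the brute-force prover are our own illustrative assumptions; in a real delegation scheme g would encode the actual computation being verified.

```python
import random

P = 2**61 - 1  # prime field modulus (all arithmetic is mod P)
V = 3          # number of variables of g
D = 2          # assumed bound on g's degree in each individual variable

def g(xs):
    """Example polynomial standing in for the delegated computation."""
    x, y, z = xs
    return (x * x * y + 2 * y * z + 5 * x + 7) % P

def lagrange_eval(ys, x):
    """Evaluate the polynomial through points (0, ys[0]), ..., (D, ys[D]) at x."""
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(len(ys)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def partial_sum(prefix, x):
    """Prover: fix the first variables to `prefix`, the next one to x, and
    sum g over all Boolean choices of the remaining variables."""
    k = len(prefix)
    total = 0
    for m in range(2 ** (V - k - 1)):
        suffix = [(m >> b) & 1 for b in range(V - k - 1)]
        total = (total + g(prefix + [x] + suffix)) % P
    return total

# Prover's claim: the sum of g over the Boolean hypercube {0,1}^V.
claim = sum(g([(m >> b) & 1 for b in range(V)]) for m in range(2 ** V)) % P

prefix = []
for _ in range(V):
    # Prover sends the round polynomial as its evaluations at 0..D.
    round_evals = [partial_sum(prefix, x) for x in range(D + 1)]
    # Verifier checks consistency with the running claim...
    assert (round_evals[0] + round_evals[1]) % P == claim
    # ...then fixes this variable to a random field element.
    r = random.randrange(P)
    claim = lagrange_eval(round_evals, r)
    prefix.append(r)

# Final check: a single evaluation of g, the verifier's only expensive step.
assert g(prefix) == claim
print("sum-check accepted")
```

The verifier's work per round is constant (a degree-D univariate check), while the exponential-size sum is carried entirely by the prover, which is the asymmetry one wants when the host does the heavy lifting for the TEE.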
 
Preparing for the worst-case scenario:
All of the above discussions assumed that the authenticity of a TEE cannot be violated. While this may be a legitimate security assumption, it may still be worthwhile to explore what happens if authenticity is violated and what countermeasures could be deployed.
A potentially interesting point in the design space, somewhere between ``fully trusting TEEs'' and ``not using TEEs at all'', could be to explore super-efficient, weakly sound proof systems that the TEE could use to prove the authenticity of each computational step.
Commonly, proof systems enable a prover to convince a verifier that some computation has been done faithfully, guaranteeing that a cheating prover can never fool the verifier. This is done by letting the prover execute a computationally expensive proof generation procedure along with the actual computation it is performing.
Perhaps one can design much cheaper proof systems that only provide guarantees of the form ``if I cheated, you have a constant probability of catching me''. While weaker than a full proof system, these systems would still provide non-trivial security assurances even if the TEE were fully broken. At the same time, they might be cheap enough to be executed by TEEs. A classic example of such a check is sketched below.
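Freivalds' algorithm for verifying matrix products is a textbook instance of exactly this kind of guarantee: checking a claimed product C = A*B costs O(n^2) per round instead of the O(n^3) needed to recompute it, and each round catches a false claim with probability at least 1/2. The sketch below is a minimal illustration; the matrices and the round count are our own choices.

```python
import random

def mat_vec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def freivalds(A, B, C, rounds=20):
    """Probabilistically check the claim C == A*B. If the claim is false,
    each round detects it with probability at least 1/2, so a cheating
    prover survives all rounds with probability at most 2**-rounds."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randrange(2) for _ in range(n)]  # random 0/1 vector
        # Compare A*(B*r) with C*r: three O(n^2) matrix-vector products
        # instead of the O(n^3) work of recomputing A*B.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]  # the correct product A*B
C_bad  = [[19, 22], [43, 51]]  # one forged entry

assert freivalds(A, B, C_good)
# A false claim slips through all 20 rounds only with probability 2**-20.
assert not freivalds(A, B, C_bad)
```

The verifier's per-round cost is a few matrix-vector products, cheap enough that one could imagine a TEE (or a party auditing a TEE) running it continuously alongside the real computation.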
