The subject of the piece is a renewed effort by U.S. intelligence and law enforcement agencies to mandate 'backdoors' in modern encryption systems. This is ostensibly a reaction to the mass adoption of strong encryption in smartphones, and a general fear that police are about to lose wiretapping capability they've come to depend on.
This is not the first time we've been here. Back in the 1990s the U.S. government pushed much the same idea with the Clipper chip, and eventually abandoned it.
Thanks to the advent of smartphones and 'on-by-default' encryption in popular systems like Apple's iMessage and WhatsApp, law enforcement fears that an increasing share of communications is moving beyond its reach.
Hence crypto backdoors.
As you might guess, I have serious philosophical objections to the idea of adding backdoors to any encryption system -- for excellent reasons I could spend thousands of words on. But I'm not going to do that.
What I'd like to do here is tackle the purely technical side of the question, since nobody in government seems to be doing this.
Thus the question I'm going to consider in this post:
Let's pretend that encryption backdoors are a great idea. From a purely technical point of view, what do we need to do to implement them, and how achievable is it?

First some background.
End-to-end encryption 101
Modern encryption schemes break down into several categories. For the purposes of this discussion we'll consider two: those systems for which the provider holds the key, and the set of systems where the provider doesn't.
We're not terribly interested in the first type of encryption, which includes protocols like SSL/TLS and Google Hangouts, since those only protect data at the link layer, i.e., until it reaches your provider's servers. I think it's fairly well established that if Facebook, Apple, Google or Yahoo can access your data, then the government can access it as well -- simply by subpoenaing or compelling those companies. We've even seen how this can work.
The encryption systems we're interested in all belong to the second class -- protocols where even the provider can't decrypt your information. This includes:
- Apple and Android device encryption (based on user passwords and/or a key that never leaves the device).
- End-to-end messaging applications such as WhatsApp, iMessage and Telegram*.
- Encrypted phone/video chat applications such as FaceTime and Signal.
- Encrypted email systems like PGP, or Google/Yahoo's end-to-end.
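To make the "provider can't decrypt" property concrete, here's a toy sketch of a Diffie-Hellman key exchange in Python. The parameters are deliberately tiny and insecure, and this is an illustration only, not production crypto (real systems use vetted constructions like X25519) -- but it shows the essential point: the provider relays only public values and never learns the shared secret.

```python
import secrets

# Toy Diffie-Hellman over a small prime -- far too small to be secure.
P = 0xFFFFFFFB  # insecure toy modulus (largest prime below 2**32)
G = 5           # generator

# Alice and Bob each generate a private key and derive a public key.
a_priv = secrets.randbelow(P - 2) + 1
b_priv = secrets.randbelow(P - 2) + 1
a_pub = pow(G, a_priv, P)
b_pub = pow(G, b_priv, P)

# The provider relays only a_pub and b_pub; the private keys never
# leave the endpoints, so the provider can't compute the shared secret.
shared_alice = pow(b_pub, a_priv, P)
shared_bob = pow(a_pub, b_priv, P)
assert shared_alice == shared_bob
```

Both sides arrive at the same value g^(ab) mod p, which can then be fed into a key derivation function to produce message keys.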
How to defeat end-to-end encryption
If you've decided to go after end-to-end encryption through legal means, there are a relatively small number of ways to proceed.
By far the simplest would be to ban end-to-end encryption outright, but that option doesn't appear to be on the table in the current debate.
Fortunately for this discussion, we have some parameters to work with.
If we mix this all together, we're left with only two real options:
- Attacks on key distribution. In systems that depend on centralized, provider-operated key servers, such as WhatsApp, FaceTime, Signal and iMessage,** governments can force providers to distribute illegitimate public keys, or register shadow devices to a user's account. This is essentially a man-in-the-middle attack on encrypted communication systems.
- Key escrow. Just about any encryption scheme can be modified to encrypt a copy of a decryption (or session) key such that a 'master keyholder' (e.g., Apple, or the U.S. government) can still decrypt. A major advantage is that this works even for device encryption systems, which have no key servers to suborn.
Attacking key distribution
|Key lookup request for Apple iMessage. The phone number is shown at top right, and the response at bottom left.|
Before you can initiate a connection with your intended recipient, you first have to obtain a copy of the recipient's public key. This is commonly handled using a key server that's operated by the provider. The key server may hand back one or multiple public keys (depending on how many devices you've registered). As long as those keys all legitimately belong to your intended recipient, everything works fine.
Intercepting messages is possible, however, if the provider is willing to substitute its own public keys -- keys for which it (or the government) actually knows the secret half. In theory this is relatively simple -- in practice it can be something of a bear, due to the high complexity of protocols such as iMessage.
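Here's a toy model of the key-substitution attack. All of the names and the lookup API below are made up for illustration; they don't correspond to any real provider's interface. The point is simply that a compelled key server can hand the sender a wiretap key instead of the recipient's, and the sender has no way to tell the difference.

```python
# Toy model of a compelled key server substituting a wiretap key.
legit_keys = {"alice": "alice-pubkey"}
wiretap_key = "government-pubkey"

def key_lookup(user, compelled=False):
    """Return the public key a sender will encrypt to."""
    if compelled:
        # MITM: the sender unknowingly encrypts to the wiretap key.
        return wiretap_key
    return legit_keys[user]

def send_message(recipient_key, plaintext):
    # Stand-in for public-key encryption: record which key was used.
    return {"to_key": recipient_key, "ciphertext": f"Enc[{plaintext}]"}

# From the sender's perspective this looks like a normal conversation.
msg = send_message(key_lookup("alice", compelled=True), "hi")
assert msg["to_key"] == wiretap_key  # readable by the wiretap key holder
```

Key transparency mechanisms (publicly auditable logs of key-server responses) are one proposed defense against exactly this substitution.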
A final and salient feature of the key distribution approach is that it allows only prospective eavesdropping -- that is, law enforcement must first target a particular user, and only then can it eavesdrop on her connections. There's no way to look backwards in time. I see this as a generally good thing. Others may disagree.
Key escrow

|Structure of the Clipper 'LEAF'.|
Abstractly, the purpose of an escrow system is to place decryption keys on file ('escrow' them) with some trusted authority, who can break them out when the need arises. In practice it's usually a bit more complex.
The first wrinkle is that modern encryption systems often feature many decryption keys, some of which can be derived on-the-fly while the system operates. (Systems such as TextSecure/WhatsApp actually derive new encryption keys for virtually every message you send.)
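As a rough sketch of that kind of per-message key derivation, here's a minimal symmetric hash ratchet in Python. This is an illustration only, not the actual Axolotl/Double Ratchet construction those systems use -- but it shows why "the" decryption key is a moving target: each message gets a fresh key, and the chain only moves forward.

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain.

    A minimal symmetric ratchet sketch; real protocols use HMAC/HKDF
    and mix in fresh Diffie-Hellman outputs as well.
    """
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

ck = b"initial shared secret"
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)   # every message advances the chain
    keys.append(mk)

assert len(set(keys)) == 3  # each message gets a distinct key
```

An escrow mandate has to capture every one of these ephemeral keys somehow, which is what motivates the wrapping approach described next.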
To deal with this issue, a preferred approach is to wrap these session keys up (encrypt them) under some master public key generated by the escrow authority -- and to store/send the resulting ciphertexts along with the rest of the encrypted data. In the 1990s Clipper specification these ciphertexts were referred to as Law Enforcement Access Fields, or LEAFs.***
With added LEAFs in your protocol, wiretapping becomes relatively straightforward. Law enforcement simply intercepts the encrypted data -- or obtains it from your confiscated device -- extracts the LEAFs, and requests that the escrow authority decrypt them. You can find variants of this design dating back to the PGP era. In fact, the whole concept is deceptively simple -- provided you don't go farther than the whiteboard.
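To see how little machinery a LEAF takes on the whiteboard, here's a toy version in Python. Following the Clipper design (rather than a public-key variant), it wraps each session key under a symmetric escrow master key; the XOR "encryption" here is a stand-in for a real cipher and is not secure as written.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Held only by the escrow authority.
escrow_master_key = secrets.token_bytes(16)

# Per-conversation session key, wrapped into a LEAF that travels
# alongside the encrypted data.
session_key = secrets.token_bytes(16)
leaf = xor(session_key, escrow_master_key)

# Later: law enforcement hands the intercepted LEAF to the escrow
# authority, which unwraps it and returns the session key.
recovered = xor(leaf, escrow_master_key)
assert recovered == session_key
```

The hard problems all live outside this sketch: who holds `escrow_master_key`, how its use is authorized, and how clients are prevented from shipping garbage in place of a valid LEAF.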
|Conceptual view of some encrypted data (left) and a LEAF (right).|
Who holds the keys?
This is the million-dollar question for any escrow platform. The Post story devotes much energy to exploring various proposals for doing this.
Escrow key management is make-or-break, since the key server represents a universal vulnerability in any escrowed communication system. In the present debate there appear to be two solutions on the table. The first is to simply dump the problem onto individual providers, who will be responsible for managing their escrow keys -- using whatever technological means they deem appropriate. A few companies may get this right. Unfortunately, most companies suck at cryptography, so it seems reasonable to believe that the resulting systems will be quite fragile.
The second approach is for the government to hold the keys themselves. Since the escrow key is too valuable to entrust to one organization, one or more trustworthy U.S. departments would hold 'shares' of the master key, and would cooperate to provide decryption on a case-by-case basis. This was, in fact, the approach proposed for the Clipper chip.
The main problem with this proposal is that it's non-trivial to implement. If you're going to split keys across multiple agencies, you have to consider how you're going to store those keys, and how you're going to recover them when you need to access someone's data. The obvious approach -- bring the key shares back together at some centralized location -- seems quite risky, since the combined master key would be vulnerable in that moment.
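To make the recombination risk concrete, here's a toy two-agency split using XOR shares. Each share alone is uniformly random and reveals nothing about the master key -- but recovery, done the obvious way, requires reassembling the full key in one place, which is exactly the vulnerable moment described above.

```python
import secrets

master_key = secrets.token_bytes(16)

# Split the master key into two XOR shares, one per agency.
share_a = secrets.token_bytes(16)                              # agency A
share_b = bytes(x ^ y for x, y in zip(master_key, share_a))    # agency B

# Either share alone is uniformly random; neither agency learns anything.
# But the obvious recovery procedure recreates the whole key at once:
recombined = bytes(x ^ y for x, y in zip(share_a, share_b))
assert recombined == master_key  # the dangerous moment: key exists whole
```

This is the simplest possible 2-of-2 scheme; more flexible k-of-n splits (e.g., Shamir sharing) have the same recombination problem unless paired with threshold techniques.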
A second approach is to use a threshold cryptosystem. Threshold crypto refers to a set of techniques for storing secret keys across multiple locations so that decryption can be done in place without recombining the key shares. This seems like an ideal solution, with only one problem: nobody has deployed threshold cryptosystems at this kind of scale before. In fact, many of the protocols we know of in this area have never even been implemented outside of the research literature. Moreover, it will require governments to precisely specify a set of protocols for tech companies to implement -- this seems incompatible with the original goal of letting technologists design their own systems.
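By contrast, here's a toy illustration of decryption "in place": the ciphertext passes from one shareholder to the next, each applying only its own share, and the full key never exists in any single location. This one-time-pad/XOR sketch is only for intuition -- real threshold cryptosystems use constructions like threshold ElGamal or threshold RSA, not this.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"wiretap target's data"
key = secrets.token_bytes(len(plaintext))
ciphertext = xor(plaintext, key)       # one-time-pad "encryption"

# The key is split so that key == xor(share_a, share_b).
share_a = secrets.token_bytes(len(key))
share_b = xor(key, share_a)

# Decryption in place: each agency applies only its own share to the
# ciphertext. The shares are never combined, so the full key never
# exists at any single point.
partial = xor(ciphertext, share_a)     # agency A's partial decryption
recovered = xor(partial, share_b)      # agency B finishes the job
assert recovered == plaintext
```

Each partial value on its own still looks like random bytes, so compromising one agency yields neither the key nor the plaintext.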
A final issue to keep in mind is the complexity of the software we'll need to make all of this happen. Our encryption software is already so complex that it's literally at the breaking point. (If you don't believe me, take a look at OpenSSL's security advisories for the last year.) While adding escrow mechanisms seems relatively straightforward, it will actually require quite a bit of careful coding, something we're just not good at.
Even if we do go forward with this plan, there are many unanswered questions. How widely can these software implementations be deployed? Will every application maker be forced to use escrow? Will we be required to offer a new set of system APIs in iOS, Windows and Android that we can use to get this right? Answering each of these questions will result in dramatic changes throughout the OS software stack. I don't envy the poor developers who will have to answer them.
How do we force people to use key escrow?
Leaving aside the technical questions, the real question is: how do you force anyone to do this stuff? Escrow requires breaking changes to most encryption protocols; it's costly as hell; and it introduces many new security concerns. Moreover, laws outlawing end-to-end encryption software seem destined to run afoul of the First Amendment.
I'm not a lawyer, so don't take my speculation too seriously -- but it seems intuitive to me that any potential legislation will be targeted at service providers, not software vendors or OSS developers. Thus the real leverage for mandating key escrow will apply to the Facebooks and Apples of the world. Your third-party PGP and OTR clients would presumably be left alone, at least for the time being.
Unfortunately, even small app developers are increasingly running their own back-end servers these days (e.g., Whisper Systems and Silent Circle) so this is less reassuring than it sounds.
If this post has offered more questions than answers, that's because there really are no answers right now. A serious debate is happening in an environment that's almost devoid of technical input, at least from technical people who aren't part of the intelligence establishment.
And maybe that by itself is reason enough to be skeptical.
* Not an endorsement. I have many thoughts on Telegram's encryption protocols, but they're beyond the scope of this post.
** Telegram is missing from this list because their protocol doesn't handle long term keys at all. Every single connection must be validated in person using a graphical key fingerprint, which is, quite frankly, terrible.
*** The Clipper chip used a symmetric encryption algorithm to encrypt the LEAF, which meant that the LEAF decryption key had to be present inside of every consumer device. This was completely nuts, and definitely a bullet dodged. It also meant that every single Clipper had to be implemented in hardware using tamper resistant chip manufacturing technology. It was a truly awful design.