By Jon Callas
Note: This is part one of a four-part series where security expert Jon Callas breaks down the fatal flaws of a recent proposal to add a secret user — the government — to our encrypted conversations.
Twenty-five years ago, the FBI decided it needed a surveillance system built into the nation’s telephone network to enable it to listen to any conversation with the flip of a switch. Congress obliged by passing the Communications Assistance for Law Enforcement Act (CALEA), forcing telephone companies to rebuild their networks to be “wiretap ready.” In the more than two decades since then, the FBI has been seeking both legislative and judicial approval to expand this authority to internet communications, insisting that its investigations have “gone dark” because of increasingly widespread use of encryption. Technologists and civil libertarians have so far been successful in opposing those efforts, warning that requiring technology companies to build a backdoor into our encrypted communications would compromise security for everyone and would empower not just the FBI, but repressive governments like China and Iran, to demand or gain access to private communications. But law enforcement and intelligence agencies have not given up.
The latest proposal for circumventing encryption comes from Ian Levy and Crispin Robinson of the UK’s GCHQ (the sister agency to the NSA). It would enable surveillance of encrypted communications not by trying to mathematically attack or weaken the encryption, but simply by forcing service providers to secretly add an extra user — the government — to an encrypted conversation. The authors say their proposal would not “break” encryption, but it would nonetheless have the same effect, because people could no longer be confident they are securely talking to their partners alone. Encryption gives people secure and private communications that ensure their conversations are between them and their partners and no one else. Creating the possibility that a secret user may be listening in on an otherwise securely encrypted conversation destroys that confidence, thereby chilling First Amendment protected speech. A proposal that keeps encryption while breaking confidentiality is a distinction without a difference.
Beyond that, my experience tells me that the GCHQ authors’ proposal will not work — and for more reasons than just the flaws in the proposed core technological “solution” itself. Outside of the lab and in the real world, operating an encrypted service that also ensures secret government access faces some insurmountable obstacles. No technology can claim to be a “solution” without grappling with the massive global scale of the internet, complex and often conflicting international legal requirements, and well-resourced and highly motivated adversaries seeking to exploit flaws in the technology.
Over the past thirty years of my career, at companies large (Apple) and small (Silent Circle), I have built encrypted software, hardware, and storage services. I am one of the founders of PGP Corporation, where we built secure email and disk encryption. I’m also one of the founders of Silent Circle, where we made apps for encrypted chat and phone calls, including secure conference calls, as well as an extra-secure Android phone called Blackphone. I have taken raw encryption technologies from the lab to finished products, and I have deployed those products worldwide to millions of people.
Coming up with an idea, as the GCHQ authors have, is the easy part. But as technology moves from idea to experiment to proof-of-concept to product to deployed, the problems multiply and finding solutions gets much, much harder.
If this proposal is going to work, then every company that implements it is going to have to build that capability into their product. And that will be no simple matter: it requires agreement and cooperation among every organization making secure communications and every government wishing exceptional access. The deeper difficulty is that this kind of technology ultimately embodies policy, and here the policy is international politics, where agreement is often impossible.
Here’s an example. If you and I are going to have dinner together, there are logistical details to sort out: picking the place, finding a time to eat, traveling, perhaps cooking, and so on. If I bring someone along and you do, too, some of those details get a little harder to nail down. Not a huge deal, but dinner for two is easier to plan than dinner for four. Dinner for ten is a lot harder. Dinner for fifty — now that’s really hard. Not because cooking is hard (though I know it is), but because doing anything for fifty people is hard in ways that doing the same thing for two people is not. Some simple problems of dinner, like dietary restrictions, can be a minor challenge with two people but a truly significant one with fifty.
Securely encrypted apps are used by millions of people. It may not literally be impossible to cook a single dish that meets the dietary restrictions of a million people, but it is not going to be what anyone really wants to eat.
In one of my past jobs, our engineering department was some one hundred people total, with the core security technology team being three to five people at a given moment. This is common; core technologies are built by a small team that might go as large as ten, sometimes as small as one. The other ninety-five people in the engineering department were there to deal with the exacting technical problems associated with building a viable product. These kinds of problems are serious and challenging; even a massive group of engineers, like the one GCHQ has the resources to assemble, still faces these and other problems.
Deployment comes next. The core tech team must work with other development teams on good user experience, integration with an organization’s IT infrastructure, management tools, deployment and scaling tools, regulatory auditing and legal compliance, user expectations, managing judicial authorization and oversight, access compliance and auditability, proportionality, transparency, and also multilateral international versions of all the previous. Deploying that system on a worldwide basis? This is way beyond dinner for two.
The GCHQ authors ignore these necessities, but they are essential challenges to overcome as a product progresses from development to beta test to rollout to global deployment to mature system. “It doesn’t scale,” is a technology cliché that means something real — there are different, harder problems with making something work in the real world than there are with simply making it work.
The GCHQ proposal is drowning in problems of scale. Exceptional access — as governments propose — is the problem of making a system selectively secure. I can tell you, it’s hard enough to make a secure system. It’s vastly harder to make a system secure except for governments, and only available to governments that consist of “democratically elected representatives and [a] judiciary” as the GCHQ authors imagine.
In their article, the GCHQ authors say, “We also need to be very careful not to take any component or proposal and claim that it proves that the problem [of exceptional access] is either totally solved or totally insoluble.” That may sound reasonable in the abstract, but in the case of exceptional access, the problems are nearly insoluble. (This problem is not new, and it’s not as if no one has ever considered it till GCHQ published its proposal.)
In the following series of essays, I show that the GCHQ proposal is necessarily unworkable at scale. Even aside from the technical obstacles with secretly adding a listener to an otherwise secure conversation (what some have called a “ghost user”), the proposal falters in a number of ways.
- Mandated exceptional access must work internationally, but the complex and often conflicting legal requirements and regimes in different countries mean a highly complex array of competing or conflicting technical requirements.
- The global communications system of telephones and the internet has developed in such a way that inserting surveillance capabilities that worked in the 1950s (the era of alligator clips) is no longer feasible. In fact, it has developed in a way that even the mechanisms of the 1990s are no longer applicable.
- To build anything that could allow a “ghost user” requires that the access mechanism exist on users’ devices. But eventually, an exceptional access mechanism stored on people’s devices will be detected. This cannot be squared with a primary “exceptional access” requirement — that the surveillance is surreptitious. This dooms the proposal to failure, just as previous attempts at exceptional access have failed, and at great risk to global cybersecurity.
Here is some further reading on the GCHQ Ghost User proposal and exceptional access itself.
The GCHQ Proposal
Ian Levy and Crispin Robinson, “Principles for a More Informed Exceptional Access Debate”
Specific Responses to the Proposal
Susan Landau, “Exceptional Access: The Devil is in the Details”
Matthew Green, “On Ghost Users and Messaging Backdoors”
Bruce Schneier, “Evaluating the GCHQ Exceptional Access Proposal”
Nate Cardozo and Seth Schoen, “Detecting Ghosts By Reverse Engineering: Who Ya Gonna Call?”
Open Technology Institute, “Open Letter to GCHQ on the Threats Posed by the Ghost Proposal” (Daniel Kahn Gillmor and I are signers.)
A General Discussion of Exceptional Access
Josh Benaloh, “What if Responsible Encryption Back-Doors Were Possible?”
Five Country Ministerial Quintet Meeting of Attorneys-General, “Statement of Principles on Access to Evidence and Encryption”
Jon Callas is the Senior Technology Fellow with the ACLU.