Eric Rescorla is a man who speaks frankly about internet security. You might have guessed that from the name of his consultancy – RTFM – an acronym politely translated as “read the friendly manual”… except that it’s usually not politely translated. He’s the co-designer of secure HTTP, and now works on security issues with Skype.
In a conversation about cyberwarfare at Princeton, his assertion – “The internet is still too secure” – raises a few eyebrows. But his key message – “things could be a lot worse” – is a useful counterpoint to the rhetoric of cyberwarfare that Dr. Lin explores, and helps show the tensions between the computer security community and the defense community (not to mention the human rights community).
Rescorla asserts the following:
We have nearly unbreakable crypto primitives.
AES resists all practical attacks; RSA isn’t quite as strong as we’d like, but there’s good new stuff in the pipeline, and significant progress has been made on addressing collisions in SHA-1. “There is no serious concern we’re going to run out of crypto any time soon.” The problem instead is getting the new stuff into use – we’re not using the strong stuff that we already have.
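The deployment gap Rescorla describes is visible even in a scripting language’s standard library: the stronger hash is sitting right next to the weaker one, and switching costs a single changed call. (A minimal illustrative sketch, not from the talk itself.)

```python
import hashlib

msg = b"hello, world"

# SHA-1: 160-bit digest, practical collision attacks known
weak = hashlib.sha1(msg).hexdigest()

# SHA-256: already shipped everywhere, no practical attacks known --
# using it is a deployment decision, not a research problem
strong = hashlib.sha256(msg).hexdigest()

print(len(weak))    # 40 hex characters (160 bits)
print(len(strong))  # 64 hex characters (256 bits)
```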
We think we know how to build secure protocols.
There’s been no basic change in security design since 2000. We’ve got good protocols for data object security (S/MIME, PGP), for stream security (SSL/TLS, SSH), for packet security (IPsec, DTLS) and for authentication (Kerberos). Our work isn’t on deploying new protocols – it’s on addressing flaws in existing implementations – updating to newer hash functions after the MD5/SHA-1 attacks, for instance. And it’s on gluing together existing protocols, none of which have really changed since 2004.
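The stream-security layer mentioned above (SSL/TLS) is a case in point: it’s already packaged, with sane defaults, in mainstream standard libraries. A minimal sketch using Python’s `ssl` module (my illustration, not from the talk):

```python
import ssl

# A default TLS client context -- the hard protocol-design work is
# long done; the library ships it with sensible defaults.
ctx = ssl.create_default_context()

# Certificate verification and hostname checking are on by default;
# an application gets modern stream security without designing anything.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

The point is Rescorla’s: the open problems are in implementations and deployment, not in inventing new protocols.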
We can’t build systems that are reliably secure.
Good implementations are hard. We recently saw an attack on OpenSSL where a single “record of death” can crash the whole system. Debian managed to break their pseudorandom number generator, which meant that Debian keys were highly predictable and crackable with a table of only 32,000 keys. This probably affected 1% of internet-connected machines.
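Why does a broken PRNG reduce to a table of ~32,000 keys? Because the only entropy left feeding Debian’s generator was, in effect, the process ID, which on Linux of that era took at most 32,768 values. A toy sketch of the failure mode (illustrative Python, not Debian’s actual OpenSSL patch):

```python
import random

def weak_key(pid: int) -> int:
    # Illustrative only: the generator's sole seed is the PID, so a
    # nominally 128-bit key carries about 15 bits of real entropy.
    rng = random.Random(pid)
    return rng.getrandbits(128)

# An attacker precomputes the entire keyspace once...
table = {weak_key(pid) for pid in range(32768)}

# ...and every key ever generated this way is guaranteed to be in it.
print(weak_key(4242) in table)  # True
```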
This is just the security-critical code. As Steven Bellovin at Columbia has pointed out, “Software has bugs. Security software has security relevant bugs.” And we’re bad at finding these bugs – audits are time-consuming, and there’s no evidence that they significantly affect the discovery rate for bugs. And affected implementations are slow to go away.
User practice is appalling.
Users are careless. They install random software from untrusted sources. They ignore our carefully constructed messages designed to prevent man-in-the-middle attacks. And crypto doesn’t do much for you if your system’s been compromised.
Things could be a lot worse.
Despite all this, things aren’t so bad. Why is computer security so poor despite twenty years of work on the topic? We’ve been working on personal security for 10,000 years – why aren’t human beings invulnerable? Security people always think about the worst case scenario, but the actual attacks we experience are fairly primitive.
The Debian PRNG bug is about as bad as it gets, but there was no evidence of practical attacks in the field. You can mount a DoS attack on the whole internet pretty easily – publish bogus BGP routes, as a Pakistani ISP did while trying to block YouTube. As a practical matter, we can almost always get to YouTube. The internet shouldn’t work, but it does. Perhaps the question we need to be asking ourselves is “What’s a rational level of paranoia?”