2014-04-08

A lesson from OpenSSL

If you are paranoid about secrecy on the web, today's news about a bug in OpenSSL may make you feel justified. OpenSSL is an open source library that is used by companies, individuals and governments around the world to secure their systems. It's very widely used for two reasons: 1) a very permissive set of licensing conditions that essentially say you're fine to use it as long as you credit the right authors in the source, and 2) because so many commercial firms depend on it, its source has been scrutinised to death to spot both performance and functional bugs.

A one-paragraph primer on SSL (Secure Sockets Layer): it's the method by which a regular web browser and a secure web server communicate. You're using it whenever the address bar in your browser displays a URL starting with "https:" instead of "http:" - so that's your online banking, Facebook, Google, Twitter, Amazon... Most of these secure web servers will be using OpenSSL - there are alternatives, but none of them are compellingly better, and in fact the widespread usage of OpenSSL probably makes it less likely to contain security bugs than the alternatives, so there's safety in belonging to the herd.
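
For the curious, here's a rough sketch of what "speaking SSL" looks like from the programmer's side, using OpenSSL's own C API. This is a minimal illustration, not production code: error handling is trimmed, and crucially it skips certificate verification, which real code must do ("example.com" is just a placeholder host).

    /* Minimal TLS client sketch using OpenSSL's BIO API (2014-era calls). */
    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_library_init();
        SSL_load_error_strings();

        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "example.com:443");

        if (BIO_do_connect(bio) <= 0) {   /* TCP connect + TLS handshake */
            ERR_print_errors_fp(stderr);
            return 1;
        }
        printf("TLS handshake succeeded\n");

        /* NOTE: a real client must also verify the server's certificate
           (e.g. check SSL_get_verify_result) before trusting the link. */

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }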

Anyone who's thinking "aha, my company should avoid this problem by developing its own SSL implementation" or better yet "my company should develop a more secure protocol than SSL, and then implement that!" has not spent much time in the security space.

And yet, someone has just discovered a bug in a very widely used version of OpenSSL - and the bug is bad.

To get some perspective on how bad this is, the Heartbleed.com site has a nice summary:

"The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users."
Sounds dire, no? Actually, the above description is the worst case; the bug gives an attacker access to memory on the secure server that they shouldn't have, and that memory *might* contain secrets, but the attacker doesn't get to control which area of memory they can read. They'd have to make many queries to be likely to gain access to secrets, and it's not too hard to spot when one small area of the Internet has that kind of unusual access pattern to your server. Even if they make 1000 reads and get one secret, they still have to be able to recognise that the data they get back (which will look like white noise) has a secret somewhere in it. I don't want to downplay how serious the bug is - anyone running an OpenSSL server should upgrade to the fixed version (1.0.1g) as soon as humanly possible - but it's not the end of the world as long as you're paying attention to the potential of attacks on your servers.
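
The flaw itself is simple to state: the TLS "heartbeat" extension lets one side send some bytes along with a claimed length and ask for them to be echoed back, and the vulnerable code trusted the claimed length without checking it against the size of the message actually received. Here's a simplified sketch of the bug class - illustrative C with made-up names, not the actual OpenSSL source:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical heartbeat handler: msg_len is how many bytes
       really arrived on the wire. */
    void handle_heartbeat(const unsigned char *msg, size_t msg_len,
                          unsigned char *reply)
    {
        if (msg_len < 2)
            return;

        /* First two bytes: the payload length the *sender* claims. */
        size_t claimed = ((size_t)msg[0] << 8) | msg[1];
        const unsigned char *payload = msg + 2;

        /* This is the check the vulnerable code was missing: refuse
           requests that claim more payload than actually arrived. */
        if (claimed > msg_len - 2)
            return;

        /* Without that check, this copy reads past the end of msg and
           echoes up to 64KB of adjacent server memory back to the peer. */
        memcpy(reply, payload, claimed);
    }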

Still, isn't this bug a massive indictment of the principle of Open Source (that you'll have fewer bugs than commercial alternatives)? It's appropriate here to quote Linus's Law, codified by Open Source advocate Eric Raymond and named after Linus Torvalds, creator of the Linux kernel:

"Given enough eyeballs, all bugs are shallow"
or more formally:
"Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone."
Unfortunately, the larger and more complex your codebase, the larger the tester and developer base has to be and the longer it takes to find problems...

It's tempting to look at this security alert, note that Open Source has allowed a critical bug to creep into a key Internet infrastructure component (clearly true), and declare that this can't be the right approach for security. But you have to look at the alternatives: what if OpenSSL were instead ClosedSSL, a library sold at relatively low cost by respected security stalwart IBM? ClosedSSL wouldn't have public alerts like this; if IBM's analysis found bugs in the implementation, they'd just make an incremental version release with the fix. But the bug would still be there, and no less exploitable for the lack of an announcement. You'd have to assume that government agencies (foreign and domestic) would bust their guts to plant someone or something with access to the ClosedSSL team's mail, and in parallel apply object code analysis to spot flaws. The flaw would also likely survive in the wild longer, because IBM would never announce it as vocally and so users would be more lax about upgrading.

There are then two lessons from OpenSSL: 1) that even Open Source inspection by motivated agencies can't prevent critical bugs from creeping into security software, and 2) that no matter how bad the current situation is, it would be worse if the software were closed-source.
