February 12, 2012
Yeah, welcome to the 2012 edition of the full disclosure debate! As usual, there are reasonable people who disagree about the merits of non-coordinated disclosure - and more recently, about the value of developing and publishing exploits, even for already patched bugs.

Especially when it comes to exploit development, the short-term risks are pretty clear to any sensible person: there is data to demonstrate that the availability of functioning exploits drives a good chunk of the lower-tier, large-scale attacks. The long-term benefits are more speculative. I like to think of it as a necessary evil: non-disclosure does not prevent sophisticated and resourceful attackers from developing their own exploits and going after high-value targets; and if you don't see the broad, easily verifiable susceptibility to non-targeted attacks, you probably won't work as hard to fix the underlying problems, or to patch and monitor your infrastructure. There is ample evidence of that: we would not have Windows Update, silent autoupdates in Chrome, or Mac OS X ASLR improvements were it not for highly publicized exploits.

The cost-benefit calculation here is mostly a matter of personal taste, and we won't be able to settle it any time soon. For the record, I don't see it as a clear-cut problem - for example, I am at best ambivalent about the merits of exploit packs and frameworks such as CORE Impact or Metasploit. I am also deeply uncomfortable with exploit trading, a trend all too eagerly embraced and supported by the industry.

But the merits of the debate aside, there is a disturbing propensity for parties who have struggled with security response, and have sometimes adopted openly hostile tactics to suppress security research, to end up on the front lines of the anti-disclosure movement. This is why I couldn't help but find parallels between Brad Arkin's recent statements and a position taken ten years ago by Scott Culp. Brad says: "My goal isn't to find and fix every security bug.
I'd like to drive up the cost of writing exploits. But when researchers go public with techniques and tools to defeat mitigations, they lower that cost. [...] Too much attention is being paid these days to responding to vulnerability reports instead of focusing on blocking live exploits."

"[We need to] work closer with the research community to curb the publication of information that can help malicious hackers. [...] Something hard becomes very very easy. These exploits and techniques are copied, adapted and modified very cheaply."

We all agree that bug-free products are not a realistic goal, but reducing the availability of information (which seems to be the meaning of "cost" in this context) is probably an ill-advised one, too. If it's still possible to write an exploit, and merely "expensive" to do so - for example, because the knowledge of how to bypass ASLR is not common - then indeed, unskilled attackers will be less likely to go after your mom's credit card information. (Well, that is if you overlook the fact that simple phishing also works amazingly well.)

Alas, for as long as exploitable vulnerabilities are abundant, the need for a bit of non-public knowledge will not stop determined parties from leveraging these bugs when they have a reason to. Such parties will probably not find it worthwhile to go after your mom's computer - but going after her bank will be fair game. As for unintended consequences: in this scenario, the bank no longer has to deal with a steady stream of nuisance malware, so it probably cares less about patching and monitoring, and the attacker is more likely to succeed.

Sure, one shouldn't be running on a vulnerability response treadmill, and we can escape it to some extent simply by making the process more agile and lightweight. It is also very important to reduce the likelihood of malicious exploitation - but we should do so by tweaking factors other than the "cost" related to the knowledge of attack methods.
We should embrace proactive approaches such as sensible coding practices, developer education, fuzzing, and tools such as ASLR, JIT randomization, and sandboxing - and when something slips through the cracks, we should be thankful for the data point and simply make our solution more robust. Let's not obsess about what the researchers believe in: they haven't sold their findings to the highest bidder, and that's already a pretty good starting point, isn't it?

"We have patched hundreds of CVEs over the last year. But, very, very few exploits have been written against those vulnerabilities. Over the past 24 months, we've seen about two dozen actual exploits."

That's a frighteningly high number of exploits, by the way.