
November 04, 2011

In praise of anarchy: metrics are holding you back

It is comforting to think about information security as a form of computer science - but the reality of securing complex enterprises is as unscientific as it gets. We can theorize about how to write perfectly secure software, but no large organization will ever come within any meaningful vicinity of that goal. We can also try to objectively measure our performance, and the resilience of our defenses - but by doing so, we casually stroll into a trap.

Why? I think there are two qualities that make all the difference in our line of work. One of them is adaptability - the capacity to identify and respond to new business circumstances and incremental risks that appear every day. The other is agility - the ability to make changes really fast. Despite its hypnotic allure, perfection is not a practical trait; in fact, I'm tempted to say that it is not that desirable to begin with.

Almost every framework for constructing security metrics is centered around that last pursuit - perfection. It may not seem that way, but it's usually the bottom line: the whole idea is to entice security teams to define more or less static benchmarks of their performance. From that follows the focus on continually improving the readings in order to demonstrate progress.

Many frameworks also promise to advance one's adaptability and agility, but they very seldom deliver on that promise. These two attributes depend entirely on having bright, inquisitive security engineers thriving in a healthy corporate culture. A dysfunctional organization, or a security team with no technical insight, will find false comfort in a checklist and a set of indicators - but will not be able to competently respond to the threats they need to worry about the most.

A healthy team is no better off: they risk being lulled into complacency by linking their apparent performance to the result of a recurring numerical measurement. It's not that taking measurements is a bad idea; in fact it's an indispensable tool of our trade. But using metrics as long-term performance indicators is a very dangerous path: they do not really tell you how secure you are, because we have absolutely no clue how to compute that. Instead, by focusing on hundreds of trivial and often irrelevant data points, they take your eyes off the new and the unknown.

And this brings me to the other concern: the existence of predefined benchmarks impairs flexibility. Quite simply, yesterday's approach, enshrined in quarterly statistics and hundreds of pages of policy docs, will always overstay its welcome. It's not that the security landscape is constantly undergoing dramatic shifts; but if you don't observe the environment and adjust your course and goals daily, the errors do accumulate... until there is no going back.

7 comments:

  1. I'm confused. How are metrics related to perfection? And why is a healthy or bright team unlikely to benefit from any of these? Your argument that metrics breed complacency is the exact opposite of my experience. Assuming you're measuring the right things, shouldn't metrics incent teams to incrementally improve?

    If you're measuring the right things and are flexible in changing what you measure to reflect what's important, measuring shouldn't slow you down; in fact, it should ensure you're not churning away on the wrong issues.

    As much as I enjoy most of the posts here, this just feels like a rant without any evidence to back it up.

  2. My argument isn't against occasionally taking simple measurements - such as gauging bug response times, or checking the prevalence of strcpy() in your codebase.
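
    For what it's worth, the strcpy() check I have in mind is nothing fancy - here's a rough sketch in Python, assuming a hypothetical src/ tree of C sources (the exact tooling doesn't matter):

    # Rough sketch: count strcpy() call sites across a C codebase.
    # The "src" path is hypothetical - point it at your own tree.
    import pathlib

    count = sum(
        p.read_text(errors="ignore").count("strcpy(")
        for p in pathlib.Path("src").rglob("*.[ch]")
    )
    print(f"strcpy() call sites: {count}")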

    Rather, I'm arguing against enshrining these measurements in the form of static processes, and assigning some divine significance to the number alone. For example, it is completely meaningless that your average bug response time stayed constant from 2010 to 2011; it may be good, or it may be terrible, depending on a number of factors that need to be judged subjectively. If you rate your performance based on the measured number, I think that's a mistake.

  3. We are pretty big into quantitative measurements, and qualitative analysis (powered by the former) at my work. I firmly believe in the maxim "if you can't measure it, you can't manage/defend it".

    Every new manager gets a copy of this book, which might help you sort your frustrations above:

    http://www.amazon.com/Measuring-Managing-Performance-Organizations-Robert/dp/0932633366/

    There are newer, longer books, but the above is all meat and no bun. Quick read. From there I tend to lean towards the evolving, agile variant of six-sigma called Lean. (no pun intended)

    For example, I think your bug response point is completely valid, but easily solved, in that the response time can be made contextually useful if you include the right qualifiers and quantifiers. We should be encouraging folks to move the bar, not be more subjective.

  4. Arian,

    I have less of an issue with the maxim, and more with the underlying idea that once you come up with several hundred necessarily narrowly-scoped indicators, and weight them according to some philosophy, you will be meaningfully measuring "it" - the overarching goal of doing a good job with security.

    A universal and long-lasting connection between the usual security metrics and the actual security of your organization, or the robustness of your code, is nearly impossible to make. It's easier to make in other fields, and I don't oppose metrics in IT / management as such.

    In the world of security, the very act of structuring the framework is extremely subjective, and in effect enshrines your current cognitive biases in a lasting form. The result only appears more objective, and seldom makes you better off.

    I kind of think that information security is not that different from physical security; you can collect some anecdotal data, or come up with limited-scope metrics, but you never know if you truly are cost-efficient, or preventing all the threats you need to care about. In fact, the more static, formalized, and predictable your processes are, the easier it may be to attack them.

  5. Overall I don't agree, as I see formalized approaches helping organizations achieve the "don't get hacked" bar. (And, of course, we see the opposite in game-the-system or dysfunctional metrics programs.)

    To your point though - one thing Jeremiah and I have seen is that if an organization only fixes low-risk, trivial-to-remediate security defects for, say, the first two years of its program, it establishes a high remediation frequency. Then, in year three (say, after a compromise), it goes and fixes all the Sev1 and Sev2 issues, which are more complex and take longer - so going into year four, year three looks like it got *worse* by simple frequency measurements.

    However - if this is contextualized around pulse (depth of remediation) and CVSSv2-type attack-surface data (severity & impact), and you separate out "old" vs. "new" vulns (old vs. new code) for year three, there are ways to put this into clearer context and wrap meaningful goals around it.

    This is too long for Blogger. You up for lunch in the next few weeks to discuss more?

  6. I suggest that your criticism of "metrics" can apply to "process" in general.

    People have been promised that "security is a process", and that if only they have the right process, they will be secure. This has led to organizations becoming full of process monkeys. I wouldn't call it a "false sense of comfort" so much as a "false sense of progress" -- the belief that just because they are "active", following the process, they must be making a difference in security.

    The successful processes/metrics I see are those that are driven from the bottom-up, when desperate people use the process/metrics as a tool for solving a problem.

    The unsuccessful, rigid processes/metrics are those driven from the top down, by people who don't understand the actual problems they are trying to solve. Those are the metrics that match your description.

  7. Well, bottom-up, ad-hoc metrics employed to gauge or solve specific problems are probably better called "measurements".

    A measurement turned into a formalized process and utilized as a performance indicator is what I'm talking about...
