A Look at How to Measure Anything in Cybersecurity Risk (Chapter 5)

by Phil Conrad

How to Measure Anything in Cybersecurity Risk, second edition
by Douglas W. Hubbard and Richard Seiersen

PART 1 - Why Cybersecurity Needs Better Measurements for Risk

Chapter 5 - Risk Matrices, Lie Factors, Misconceptions, and Other Obstacles to Measuring Risk

There is too much in this chapter to cover in a single summary, but my goal isn't to summarize each chapter completely. It's to point out a few highlights and things that stuck out to me as I read through this book.

One significant point made in this chapter is, "there is not a single study indicating that the use of [methods using risk matrices] actually helps reduce risk." So why do so many cybersecurity experts continue to use such methods? The short answer is the simplicity of ordinal scales, but the authors go on to show that using these scales isn't so simple. Psychologist David Budescu, for example, has published findings about how differently people interpret terms meant to convey likelihood, such as "unlikely" or "extremely likely." In one example, he found "unlikely" could mean anything from 8% to 66%. This was even when specific guidelines were provided stating that "Unlikely" means between 10% and 33%.
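That spread in interpretation isn't just a semantic quibble. A minimal sketch (the $500,000 impact figure is my own assumption for illustration, not from the book) shows how the same "unlikely" label, read at the two ends of Budescu's observed range, yields expected losses that differ by more than 8x:

```python
# Illustrative sketch: the same verbal label "unlikely", interpreted as
# 8% by one analyst and 66% by another (Budescu's observed range),
# produces very different expected losses for the same event.
impact = 500_000  # assumed loss if the event occurs, in dollars

for reading in (0.08, 0.66):
    expected_loss = reading * impact
    print(f"'unlikely' read as {reading:.0%}: expected loss ${expected_loss:,.0f}")
```

Two analysts filling in the same matrix cell can thus be describing risks whose expected losses differ by hundreds of thousands of dollars.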

The authors share an example using a popular Likelihood/Impact matrix, drawn from an actual risk matrix example promoted by a major consulting organization, where they look at two very different risks ending up in the same cell. For brevity, I'm not going to go into the details of this example, so you'll have to get the book yourself to learn more about this one. Suffice it to say, it made a "high" "impact" on my perception of risk matrices.
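To see how two very different risks can share a cell, here is a minimal sketch of my own (the 5x5 bucket thresholds and the two example risks are invented for this post, not taken from the book): an ordinal matrix bins probability and impact into 1-5 buckets, so risks with very different expected losses become indistinguishable.

```python
# Illustrative sketch: a 5x5 ordinal risk matrix collapses continuous
# probability and dollar impact into coarse buckets, hiding large
# differences in expected loss. Thresholds below are assumptions.

def matrix_cell(probability, impact_dollars):
    """Map a (probability, impact) pair to ordinal 1-5 buckets."""
    prob_bucket = next(
        i for i, hi in enumerate((0.02, 0.1, 0.3, 0.6, 1.0), 1)
        if probability <= hi
    )
    impact_bucket = next(
        i for i, hi in enumerate((1e4, 1e5, 1e6, 1e7, float("inf")), 1)
        if impact_dollars <= hi
    )
    return prob_bucket, impact_bucket

risk_a = (0.11, 2_000_000)  # 11% chance of a $2M loss
risk_b = (0.30, 9_000_000)  # 30% chance of a $9M loss

# Both risks land in the same (3, 4) cell of the matrix...
assert matrix_cell(*risk_a) == matrix_cell(*risk_b)

# ...yet their expected annual losses differ by more than 10x.
print(0.11 * 2_000_000)  # 220000.0
print(0.30 * 9_000_000)  # 2700000.0
```

The matrix reports the two risks as identical, while a simple expected-loss calculation shows one is over ten times larger than the other.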

Later, the authors introduce what they call the Exsupero Ursus fallacy. The basis of it is this: "if there is a single example of one method failing in some way or even having a minor weakness, we default to another method without ever investigating whether the alternative method has even more weaknesses and an even worse track record." So if a risk manager finds fault in a quantitative model, they revert to an alternative model without applying the same scrutiny to that model as they did to the quantitative one...even though research shows quantitative models to be "measurably superior."

You will benefit from reading the research, statistics, and examples in this chapter. The conclusion I came to is that quantitative methods are worth my time to continue exploring and studying.


Posted 2/7/24

Home