PART 2 - Evolving the Model of Cybersecurity Risk
"A range that has a particular chance of containing the correct answer is called in statistics a confidence interval (CI). A 90% CI is a range that has a 90% chance of containing the correct answer."
The 90% CI is used throughout the methodology the authors demonstrate for assigning probabilities. This chapter teaches that we can get better at quantifying our uncertainty, and the authors show this through small calibration test samples. Even with these small samples, the reader gets a sense of their own subjective confidence level. The two extremes of subjective confidence are overconfidence (overstating your knowledge and being correct less often than your stated confidence implies) and underconfidence (understating your knowledge and being correct more often than your stated confidence implies). The authors then describe how to determine whether you are overconfident or underconfident, both on particular answers and as a whole.
What the research finds in these calibration tests is that "most people are providing ranges that are more like 40% or 60% CI, not a 90% CI." However, people can be calibrated through training: "Most people get nearly perfectly calibrated after just a half-day of training."
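The scoring behind a calibration test is simple to sketch: each answer is a stated 90% CI plus the true value, and the fraction of intervals that contain the truth is compared against the 90% target. The sketch below is illustrative only; the questions and numbers are made up and are not from the book.

```python
# Scoring a calibration test: each answer is a (low, high) 90% CI plus
# the true value. A well-calibrated estimator's intervals should contain
# the truth about 90% of the time; a much lower hit rate indicates
# overconfidence (ranges too narrow), a much higher one underconfidence
# (ranges too wide). All data below is hypothetical.

def calibration_hit_rate(answers):
    """answers: list of (low, high, truth) tuples for stated 90% CIs."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

answers = [
    (1900, 1920, 1912),  # truth falls inside the stated range
    (300, 500, 880),     # truth falls outside the stated range
    (10, 60, 39),
    (5, 15, 25),
    (100, 200, 150),
]

rate = calibration_hit_rate(answers)
print(f"Hit rate: {rate:.0%} (target for 90% CIs: 90%)")
if rate < 0.9:
    print("Ranges too narrow -> likely overconfident")
elif rate > 0.9:
    print("Ranges too wide -> likely underconfident")
```

With a real test battery of, say, ten or twenty questions, a hit rate near 60% (as in this toy data) is exactly the overconfidence pattern the authors report.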
The authors share several methods for improving probability calibration. Across the various training they have run for research and for industry, they attribute the difference in estimators' accuracy entirely to calibration training.
This chapter makes a convincing case that a person can indeed be trained to provide more accurate estimates, particularly when estimation is a responsibility of their job.