The problem with pretty colours
Open any risk management guide and you will find the heat map — a 5x5 grid with likelihood on one axis, impact on the other, and cells painted in reassuring gradients from green to red. It is clean, intuitive, and almost universally adopted. It is also deeply misleading.
The heat map gives the appearance of rigour. Risks are plotted, colours are assigned, and leadership reviews a tidy visual summary. But beneath the surface, the methodology is riddled with problems that undermine its utility as a decision-making tool.
This is not a theoretical critique. These problems manifest in real organisations, producing risk registers that consume significant effort but fail to influence actual security decisions.
The three fundamental problems
1. Ordinal scales treated as quantitative
When we place likelihood on a 1-to-5 scale — "Rare" to "Almost Certain" — we create an ordinal ranking. A 4 is higher than a 3, but a risk rated 4 is not twice as likely as one rated 2. The intervals between categories are undefined and inconsistent.
Yet the standard practice is to multiply likelihood by impact to produce a "risk score." This multiplication assumes the scales are ratio-level — that the intervals are equal and meaningful. They are not. The mathematical operation is invalid, and the resulting scores create a false precision that obscures rather than clarifies.
A risk scored as 3x4=12 is not meaningfully different from one scored as 2x5=10 or 4x3=12. But the heat map treats them as distinct data points, implying a granularity of assessment that does not exist.
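The collisions are easy to enumerate. A short sketch (plain Python, no assumptions beyond the standard 5x5 grid) shows that the 25 cells of the matrix collapse into only 14 distinct scores, with several scores shared by pairs that describe very different risks:

```python
# Enumerate every likelihood x impact product on a 5x5 matrix to show
# how many (likelihood, impact) pairs collapse to the same "risk score".
from collections import defaultdict

scores = defaultdict(list)
for likelihood in range(1, 6):
    for impact in range(1, 6):
        scores[likelihood * impact].append((likelihood, impact))

# Scores that more than one cell maps to.
collisions = {s: pairs for s, pairs in scores.items() if len(pairs) > 1}

print(len(scores))     # 25 cells yield only 14 distinct scores
print(collisions[12])  # a rare-but-severe risk and a likely-but-moderate
                       # one receive the identical score
```

The score of 12, for instance, is produced both by a likelihood-3, impact-4 risk and a likelihood-4, impact-3 one, even though they may warrant entirely different treatments.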
2. Anchoring and cognitive bias
Ask five people to independently rate the likelihood and impact of the same risk, and you will get five different answers — often spanning multiple categories. This is not because some assessors are wrong. It is because subjective probability estimation is something humans are systematically bad at.
We anchor to recent events. A team that just experienced a phishing incident will rate phishing risk higher than a team that has not, regardless of whether the underlying exposure has changed. We are poor at estimating tail risks — events that are unlikely but catastrophic. We conflate familiarity with probability; risks we think about often feel more likely than risks we rarely consider.
The 5x5 matrix provides no mechanism to correct for these biases. It aggregates subjective judgments without calibration, producing outputs that reflect the cognitive tendencies of the assessors more than the actual risk landscape.
3. Loss of information
The act of compressing a complex, multidimensional risk into two numbers — likelihood and impact — destroys information that is essential for decision-making. Consider a data breach risk. The impact depends on how many records are exposed, what type of data is involved, which jurisdictions are affected, whether the breach is detected quickly or persists for months, and whether the organisation has cyber insurance. Collapsing all of this into a single "impact" score of 4 out of 5 strips away the context that a decision-maker needs.
The same is true for likelihood. Is the threat actor a nation-state or an opportunistic script kiddie? Is the vulnerability actively exploited in the wild? Does the organisation have compensating controls that reduce exposure? All of this nuance vanishes into a single cell on the grid.
A better approach: scenario-based risk analysis
The alternative is not to abandon structured risk assessment. It is to make it more honest and more useful. Scenario-based analysis preserves the rigour of formal risk management while avoiding the traps of the heat map.
Define concrete scenarios
Instead of assessing abstract risks like "data breach," define specific scenarios: "An attacker exploits a SQL injection vulnerability in the customer portal to exfiltrate the names, email addresses, and hashed passwords of all users in the EU tenant."
Specificity forces clarity. The scenario defines the threat actor, the attack vector, the data at risk, and the scope. This makes the assessment more grounded and the resulting decisions more actionable.
Estimate in ranges, not points
Rather than assigning a single likelihood score, estimate a range: "We believe this scenario has a 5-15% probability of occurring in the next 12 months." This is still uncertain — all risk estimation is — but it is honest about the uncertainty rather than concealing it behind a false-precision score of 3.
For impact, estimate in the unit that matters most to your organisation. If the primary concern is financial, estimate in currency: "If this scenario materialises, we expect direct costs of $200K-$500K from incident response, notification, and remediation, with potential regulatory fines of $100K-$2M depending on the supervisory authority's assessment."
If the primary concern is reputational or operational, describe the impact in those terms with as much specificity as possible.
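Ranges like these can be carried through the analysis numerically rather than collapsed to a midpoint. As a minimal sketch — the uniform distributions and the specific figures below are illustrative assumptions, not the only reasonable choices — a Monte Carlo simulation samples both ranges and reports the spread of annual loss:

```python
# Illustrative Monte Carlo sketch: sample probability and direct cost from
# the ranges stated above. Uniform distributions are an assumed
# simplification; a real model might use PERT or lognormal distributions.
import random

random.seed(42)

def sample_annual_loss():
    p = random.uniform(0.05, 0.15)           # 5-15% annual probability
    cost = random.uniform(200_000, 500_000)  # $200K-$500K direct cost
    # In a given simulated year, the loss occurs with probability p.
    return cost if random.random() < p else 0.0

trials = [sample_annual_loss() for _ in range(100_000)]
mean_loss = sum(trials) / len(trials)
print(f"mean annual loss ~= ${mean_loss:,.0f}")
```

The mean lands around $35K here, but the simulation's real value is the distribution: it shows how often the loss is zero and how heavy the tail is, which a single score can never convey.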
Calculate expected loss
With probability ranges and impact estimates in hand, you can calculate expected annualised loss: the probability of the event multiplied by its estimated cost. A scenario with a 10% annual probability and a $500K expected impact has an expected annualised loss of $50K.
This number is imprecise, and with range inputs it is itself a range rather than a point estimate. But it is useful in a way that a heat map score of "12" is not. It tells you, roughly, how much risk you are carrying — and it provides a natural benchmark for how much you should be willing to spend on mitigation. If a control costs $30K per year and reduces the expected annualised loss by $40K, the investment is defensible.
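The arithmetic is deliberately simple. A sketch, using the probability and cost ranges from the scenario above (the figures are the ones assumed earlier in this article, not data):

```python
# Expected annualised loss (EAL) carried as a range, not a point estimate.
def eal_range(p_low, p_high, cost_low, cost_high):
    """Return (low, high) bounds on expected annualised loss."""
    return (p_low * cost_low, p_high * cost_high)

low, high = eal_range(0.05, 0.15, 200_000, 500_000)
print(f"EAL: ${low:,.0f} - ${high:,.0f}")  # roughly $10K - $75K

# Mitigation benchmark: a control is defensible when its annual cost is
# below the EAL reduction it buys.
control_cost = 30_000
eal_reduction = 40_000
defensible = control_cost < eal_reduction
```

Even the wide $10K-$75K band is actionable: it bounds what the organisation should rationally spend on this scenario, which no colour on a grid can do.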
Compare treatment options quantitatively
The greatest advantage of scenario-based analysis is that it enables quantitative comparison of risk treatment options. When you can estimate how much a proposed control reduces either the probability or the impact of a scenario, you can calculate the return on security investment.
This transforms risk discussions from subjective debates about colour assignments into structured conversations about resource allocation. Leadership does not need to interpret a heat map. They can see that Control A costs $X and reduces expected loss by $Y, while Control B costs $Z and reduces expected loss by $W. The decision is still a judgment call — risk tolerance varies — but it is an informed judgment call.
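The comparison reduces to a few lines of arithmetic. In this sketch the control names, costs, and reduction figures are hypothetical — the structure of the comparison is the point:

```python
# Hypothetical comparison of two risk treatments by net annual benefit:
# EAL reduction minus annual cost of the control. All figures are invented
# for illustration.
controls = [
    {"name": "Control A", "annual_cost": 30_000, "eal_reduction": 40_000},
    {"name": "Control B", "annual_cost": 60_000, "eal_reduction": 45_000},
]

for c in controls:
    c["net_benefit"] = c["eal_reduction"] - c["annual_cost"]

best = max(controls, key=lambda c: c["net_benefit"])
print(best["name"], best["net_benefit"])  # Control A wins on net benefit
```

Note that Control B reduces more risk in absolute terms but costs more than it saves relative to Control A; that is exactly the kind of trade-off a heat map cannot surface.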
Making it practical
Scenario-based analysis sounds heavyweight, and it can be if you try to apply it to every risk in your register. The key is triage.
Use the heat map — or a simplified version of it — as a coarse screening tool. Identify your top 10-15 risks by rough severity. Then apply scenario-based analysis to those. The Pareto principle holds: a small number of risks account for the majority of your exposure. Invest your analytical effort where it matters most.
For the remaining risks, a simpler qualitative assessment is sufficient. Not every risk warrants a full scenario analysis. The goal is to concentrate rigour on the risks that drive the most significant decisions.
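The triage step itself can be this mechanical. In the sketch below the risk names and scores are hypothetical, and the ordinal product is used only to rank — never as a quantitative score — exactly as the coarse-screening role described above:

```python
# Rough triage sketch: rank the register by coarse severity, then select
# the top N risks for full scenario-based analysis. Entries are invented.
register = [
    ("Phishing-led credential theft", 4, 4),   # (name, likelihood, impact)
    ("Ransomware on file servers",    3, 5),
    ("Insider data exfiltration",     2, 4),
    ("Unpatched VPN appliance",       4, 3),
    ("Lost unencrypted laptop",       3, 2),
]

# The product is used ONLY for ordering, not as a meaningful score.
top = sorted(register, key=lambda r: r[1] * r[2], reverse=True)[:3]
for name, _, _ in top:
    print(name)
```

Everything in `top` gets a full scenario, probability range, and expected-loss estimate; everything else keeps a lightweight qualitative entry in the register.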
The cultural shift
The deeper change is cultural. Risk management in most organisations is treated as a compliance exercise — something you do because the framework requires it, reviewed when the auditor asks, and largely disconnected from operational decisions.
Effective risk management is a strategic function. It informs where the security team invests its limited resources. It shapes architectural decisions, vendor selection, and incident response priorities. It gives leadership a clear picture of the organisation's risk posture — not in abstract colours, but in terms they can act on.
This shift does not happen overnight, and it does not happen by changing your methodology alone. It requires that risk discussions become a regular part of leadership conversations, that risk assessments are updated when the threat landscape changes (not just at audit time), and that the output of risk analysis visibly influences decisions.
When a risk assessment leads to a budget approval, a design change, or a policy update, people notice. The practice gains credibility. And the risk register stops being a compliance artifact and starts being what it was always meant to be: a tool for making better decisions under uncertainty.
Start where you are
If you are currently using a 5x5 matrix, you do not need to overhaul your entire practice tomorrow. Start by selecting your top three risks and developing specific scenarios for each. Estimate ranges for probability and impact. Calculate expected loss. Present the analysis alongside the heat map in your next risk review.
The contrast will be instructive — for you and for your stakeholders. The heat map will show a coloured cell. The scenario analysis will show a decision framework. The difference is the distance between performing risk management and actually practising it.