The Risk Framework That Prevented a £2M Loss
Most risk assessment is theatre. Pretty matrices, detailed registers, regular reviews—all designed to show "risk management" is happening while the real risks go unidentified until they cause major problems.
A software company was planning their largest product launch ever. Standard risk assessment identified the usual suspects: technical delays, budget overruns, competitive responses.
But their Reality Check risk framework surfaced a different concern: "What if our primary database can't handle 10x traffic?" This seemed unlikely—their system had handled growth fine so far.
They spent £50k stress-testing their infrastructure. Good thing they did: at 7x their current load, response times degraded to the point of unusability. They redesigned the architecture before launch instead of after a disaster.
Launch went smoothly. Post-mortem revealed that traditional risk assessment would have missed this issue because it wasn't "predictable" based on past experience.
Framework 1: Pre-Mortem Risk Analysis
The Process:
- Set the scene: "It's 12 months from now. This initiative has failed spectacularly."
- Generate failure stories: Each team member writes what went wrong
- Categorize failures: Group similar failure modes
- Assess likelihood: Which failure stories seem most plausible?
- Build defenses: What would prevent each failure mode?
Why This Works Better:
- Psychological safety: It's easier to voice concerns when failure is assumed
- Specific scenarios: Forces concrete thinking rather than abstract risk categories
- Team knowledge: Surfaces concerns that individuals might not voice otherwise
Real Example:
Marketing team planning major campaign launch.
Traditional risk assessment identified: Budget overruns, creative delays, media placement issues
Pre-mortem revealed:
- "Campaign launched but our website crashed from traffic we couldn't handle"
- "Great response to campaign but our sales team wasn't trained on new messaging"
- "Campaign succeeded but orders were fulfilled incorrectly because we didn't update logistics procedures"
Result: They addressed infrastructure, training, and operational issues that weren't on traditional risk registers.
Framework 2: The "What Could Kill Us?" Analysis
Focus: Identify existential or business-critical risks, not just project risks.
The Questions:
- Revenue concentration: What if our biggest client left?
- Key person dependency: What if critical people were unavailable?
- Single point of failure: What systems/processes have no backup?
- Market shifts: What if customer needs changed dramatically?
- Competitive threats: What if a competitor made our offering obsolete?
- Regulatory changes: What if compliance requirements changed?
Implementation:
- Monthly leadership review: 30 minutes focused only on existential risks
- Quarterly deep dive: Full analysis with scenarios and mitigation plans
- Annual external review: Outside perspective on blind spots
Case Study:
A consultancy discovered that 67% of its revenue came from just three clients. Traditional risk management hadn't flagged this because each client relationship was "stable."
"What could kill us?" analysis revealed this concentration risk. They diversified their client base over 18 months. Good timing: one of their major clients got acquired and immediately cancelled all external consulting contracts.
Framework 3: Dynamic Risk Monitoring
Problem with traditional approaches: Risk assessments become outdated quickly but are reviewed infrequently.
Solution: Build risk indicators that update automatically.
Leading Risk Indicators:
For project risks:
- Budget burn rate vs. milestone completion
- Team overtime hours (indicator of scope creep)
- Stakeholder meeting attendance (indicator of engagement)
- Code quality metrics (indicator of technical debt)
For business risks:
- Customer concentration ratios
- Employee retention by department
- Cash runway at current burn rate
- Customer acquisition cost trends
Implementation:
Create dashboards that track these indicators monthly. Set thresholds that trigger risk reviews.
Example: If top 5 clients represent >60% of revenue, trigger client diversification planning. If employee retention in critical departments drops below 85%, trigger retention analysis.
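The threshold logic above is simple enough to automate. Here is a minimal sketch in Python; the client figures, department names, and indicator set are hypothetical, chosen only to match the thresholds named in the text (>60% top-5 concentration, <85% retention).

```python
# Hypothetical sketch of automated risk-indicator thresholds.

def concentration_ratio(revenues_by_client, top_n=5):
    """Share of total revenue held by the top_n clients."""
    top = sorted(revenues_by_client.values(), reverse=True)[:top_n]
    return sum(top) / sum(revenues_by_client.values())

def check_indicators(revenues_by_client, retention_by_dept, critical_depts):
    """Return the list of risk reviews triggered this month."""
    triggered = []
    if concentration_ratio(revenues_by_client) > 0.60:
        triggered.append("client diversification planning")
    for dept in critical_depts:
        if retention_by_dept.get(dept, 1.0) < 0.85:
            triggered.append(f"retention analysis: {dept}")
    return triggered

# Example monthly snapshot (invented data)
revenues = {"A": 400, "B": 250, "C": 180, "D": 90, "E": 50, "F": 30}
retention = {"engineering": 0.82, "sales": 0.93}
print(check_indicators(revenues, retention, ["engineering", "sales"]))
# Flags both client concentration and engineering retention
```

The point of encoding the thresholds is that the review trigger fires from the monthly data itself, rather than waiting for someone to notice during a quarterly review.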
Framework 4: Stress Testing Assumptions
Recognize that most risks come from wrong assumptions, not unexpected events.
The Process:
- List critical assumptions: What must be true for success?
- Identify assumption dependencies: Which assumptions depend on others?
- Test assumption robustness: What if each assumption is 50% wrong?
- Plan assumption monitoring: How will you know if assumptions become invalid?
Business Application:
Strategic planning assumptions:
- Market size will grow 15% annually
- Competitors won't respond aggressively
- Our team can execute this new capability
- Regulatory environment will remain stable
Stress test: What if market grows 5% instead of 15%? What if main competitor cuts prices 30%? What if key team members leave during implementation?
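Those stress tests can be run as arithmetic before anyone debates them. A minimal sketch, with entirely hypothetical figures for base revenue, fixed costs, and margins:

```python
# Hypothetical sketch: stress-testing a one-year revenue plan
# against assumption shocks. All figures are invented.

def projected_profit(market_growth, price_multiplier,
                     base_revenue=10_000_000, fixed_costs=6_000_000,
                     variable_cost_ratio=0.3):
    """Profit for one year under a given set of assumptions."""
    revenue = base_revenue * (1 + market_growth) * price_multiplier
    return revenue - fixed_costs - revenue * variable_cost_ratio

scenarios = {
    "plan":        projected_profit(0.15, 1.00),  # market grows 15%
    "slow market": projected_profit(0.05, 1.00),  # only 5% growth
    "price war":   projected_profit(0.15, 0.70),  # competitor cuts prices 30%
}
for name, profit in scenarios.items():
    print(f"{name:12s} profit: {profit:,.0f}")
```

With these invented numbers the plan survives slow growth but turns loss-making in the price-war scenario, which is exactly the kind of asymmetry the stress test is meant to surface.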
Real Example:
An e-commerce company assumed 20% annual customer growth. Stress testing revealed that if growth were only 10%, their entire business model would need revision (high fixed costs, long payback periods).
They redesigned their cost structure to be profitable at 10% growth. When actual growth turned out to be 12%, they were still successful instead of in crisis.
Framework 5: Cascading Risk Analysis
Recognize that risks don't happen in isolation—they trigger other risks.
Mapping Process:
- Primary risk identification: What could initially go wrong?
- Secondary impact analysis: What would each primary risk trigger?
- Tertiary effect assessment: What would secondary impacts cause?
- System vulnerability points: Where do risk cascades become uncontrollable?
Example: Product Launch Risk Cascade
Primary risk: Technical delays push launch back 2 months
Secondary impacts:
- Marketing campaign spend wasted
- Sales team loses momentum
- Competitor gets market advantage
- Team morale suffers
Tertiary effects:
- Revenue targets missed for quarter
- Investor confidence drops
- Key team members start looking elsewhere
- Next product development delayed
Critical insight: The real risk isn't technical delays—it's losing market position and team confidence.
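A cascade map like the one above is just a directed graph, and the full downstream exposure of any primary risk can be enumerated by walking it. A sketch, using invented edges that mirror the launch example:

```python
# Hypothetical sketch: a risk cascade as a directed graph, traversed
# breadth-first to list every effect a primary risk could trigger.
from collections import deque

CASCADE = {
    "technical delay": ["campaign spend wasted", "sales momentum lost",
                        "competitor advantage", "morale drop"],
    "sales momentum lost": ["quarterly revenue miss"],
    "morale drop": ["key staff leave", "next product delayed"],
    "quarterly revenue miss": ["investor confidence drops"],
}

def downstream_impacts(primary):
    """All secondary, tertiary, ... effects reachable from a primary risk."""
    seen, queue = set(), deque([primary])
    while queue:
        risk = queue.popleft()
        for effect in CASCADE.get(risk, []):
            if effect not in seen:
                seen.add(effect)
                queue.append(effect)
    return seen

print(sorted(downstream_impacts("technical delay")))
```

Even this toy version makes the critical insight visible: a single technical delay reaches eight downstream effects, most of which sit far outside the engineering team's risk register.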
Implementation: Building Risk Reality Checks
Monthly Risk Reality Check (30 minutes):
Question 1: "What evidence do we have that our top risks are under control?"
- Not "status green" but actual evidence of risk mitigation
Question 2: "What surprised us about risk this month?"
- New risks that emerged
- Existing risks that behaved unexpectedly
- Risk mitigation that didn't work as expected
Question 3: "What risks are we pretending not to know about?"
- Risks people privately worry about but haven't voiced
- Risks that are "someone else's problem"
- Risks that are uncomfortable to acknowledge
Risk Warning Signs
- Risk assessments haven't changed in months
- Most risks are rated "medium" or "low"
- Risk reviews focus on process compliance, not actual risks
- People say "that's not my risk" frequently
- Risk mitigation plans are vague or lack specific owners
- No risks have actually materialized (suggesting you're missing real risks)
The Bottom Line
Risk management isn't about predicting every possible problem—it's about building organizational capability to identify, assess, and respond to risks quickly.
The best risk frameworks focus on:
- Surfacing hidden risks through structured imagination
- Monitoring risk indicators that update continuously
- Testing critical assumptions regularly
- Understanding risk cascades and system vulnerabilities
Remember: The biggest risks are often the ones you haven't identified yet.