Analysis of Competing Hypotheses: A Step-by-Step Guide for CTI
A report lands from the SOC. Unusual outbound connections from three endpoints in R&D. The destination IP appears in two threat intelligence feeds — one attributes it to a Chinese espionage group, the other to a financially motivated botnet. The connections started 72 hours ago.
What do you do?
If the answer is “check which feed is more reliable and go with that attribution,” you're in the majority: that's what most CTI analysts would do. It's also skipping the most important step in intelligence analysis.
Analysis of Competing Hypotheses (ACH) is a structured analytic technique developed by CIA analyst Richards Heuer in the 1970s. It forces analysts to evaluate evidence against multiple hypotheses simultaneously, rather than anchoring on the first plausible explanation. It remains the single most practical tool in the CTI tradecraft toolkit, and one the field has consistently underused.
This post is a hands-on walkthrough. By the end, you'll have a reusable process for any assessment that matters.
What is ACH?
Analysis of Competing Hypotheses (ACH) is a structured methodology for evaluating multiple explanations for observed events by systematically testing each hypothesis against available evidence. Instead of asking “what supports my theory?”, ACH asks “what eliminates the alternatives?” — focusing on inconsistencies rather than confirmations.
The technique addresses a well-documented problem: human analysts tend to anchor on an early hypothesis and unconsciously filter subsequent evidence to confirm it. ACH counteracts this by making the evaluation explicit and exhaustive. For a deeper dive into the cognitive biases that make ACH necessary — confirmation bias, anchoring, and the availability heuristic — see the companion post in this series: Cognitive Biases in CTI.
The ACH process: seven steps

Here's ACH applied to the SOC scenario above: unusual outbound connections from R&D endpoints to an IP with conflicting feed attributions.
Step 1: Generate hypotheses
List every reasonable explanation — not just the obvious two. The danger in CTI is stopping at two hypotheses when there are five.
For this scenario:
- H1: Targeted espionage by a Chinese APT group
- H2: Compromise by a commodity botnet (financially motivated)
- H3: Legitimate traffic misclassified by threat feeds (false positive)
- H4: Insider threat — unauthorised data exfiltration using shared infrastructure
- H5: Compromised third-party software phoning home to repurposed infrastructure
Five hypotheses instead of two. Already, the analytical aperture is wider than “which feed do I trust?”
A practical tip: involve more than one analyst in hypothesis generation. Different people bring different mental models. In multi-national analytical teams, the best hypotheses often come from the person who knows least about the specific case — they aren't anchored to the evidence yet.
Step 2: List all evidence
Everything known goes on the list. Be exhaustive, and include negative evidence — things that were looked for and not found.
- E1: Three R&D endpoints connecting to the IP
- E2: Feed A attributes IP to Chinese APT group (confidence: high)
- E3: Feed B attributes IP to commodity botnet (confidence: moderate)
- E4: Connections started 72 hours ago
- E5: Connection timing aligns with Chinese business hours
- E6: No known vulnerability exploitation on the endpoints
- E7: The IP was registered 18 months ago (not newly spun up)
- E8: No lateral movement detected from the endpoints
- E9: Data volume transferred is small (< 50MB total)
- E10: One endpoint has unauthorised software installed
- E11: The third-party vendor whose software runs on all three endpoints pushed an update 4 days ago
Step 3: Build the matrix
For each combination of evidence and hypothesis, assess whether the evidence is Consistent (C), Inconsistent (I), or Not Applicable (N/A) with each hypothesis.
The key columns to watch: where do “I” marks appear?
- H1 (Chinese APT): Inconsistent with E6 (no vulnerability exploitation)
- H2 (Botnet): Inconsistent with E6 (no vulnerability exploitation)
- H3 (False positive): Inconsistent with E2 and E3 (both feeds wrong simultaneously)
- H4 (Insider): Inconsistent with E9 (small data volume for deliberate exfiltration)
- H5 (Third-party): No inconsistencies. Vendor update timing (E11), lack of exploitation (E6), and all endpoints running the same software all fit
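The matrix and the inconsistency-first ranking can be sketched as plain data plus a small loop. The snippet below keeps only the most diagnostic evidence items; the “I” marks mirror the ones called out above, and the remaining C/NA ratings are illustrative placeholder judgements, not authoritative ones.

```python
# Hypotheses from Step 1.
hypotheses = ["H1", "H2", "H3", "H4", "H5"]

# Partial ACH matrix: matrix[evidence][hypothesis] is "C" (consistent),
# "I" (inconsistent), or "NA" (not applicable). Ratings are illustrative.
matrix = {
    "E2":  {"H1": "C",  "H2": "NA", "H3": "I", "H4": "NA", "H5": "NA"},  # Feed A: Chinese APT
    "E3":  {"H1": "NA", "H2": "C",  "H3": "I", "H4": "NA", "H5": "NA"},  # Feed B: botnet
    "E6":  {"H1": "I",  "H2": "I",  "H3": "C", "H4": "C",  "H5": "C"},   # no exploitation seen
    "E9":  {"H1": "C",  "H2": "C",  "H3": "C", "H4": "I",  "H5": "C"},   # small data volume
    "E11": {"H1": "NA", "H2": "NA", "H3": "C", "H4": "NA", "H5": "C"},   # vendor update timing
}

# Step 4: rank by inconsistencies, not by how many "C" marks pile up.
inconsistencies = {h: 0 for h in hypotheses}
for ratings in matrix.values():
    for h, mark in ratings.items():
        if mark == "I":
            inconsistencies[h] += 1

ranked = sorted(hypotheses, key=lambda h: inconsistencies[h])
print(ranked[0], inconsistencies)  # H5 survives with zero inconsistencies
```

The survivor isn't the answer; it's the hypothesis that earns the next round of investigation.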
Step 4: Focus on inconsistencies
This is the key insight. Instead of counting which hypothesis has the most “C” marks, look for “I” marks — evidence that contradicts a hypothesis.
H5 has no inconsistencies. That doesn't make it correct — it makes it the hypothesis that survives initial scrutiny and warrants further investigation.

Step 5: Refine and reassess
The next analytical step: investigate the vendor update. Check whether it introduced new network behaviour. Contact the vendor. Check whether other organisations running the same software report similar connections.
Step 6: Assess sensitivity
Ask: “If I removed one piece of evidence, would my ranking change?”
If E11 (the vendor update) is removed, H5 loses its strongest support and the field levels. This means the conclusion is sensitive to a single piece of evidence — E11 needs verification before making a high-confidence assessment.
Sensitivity analysis reveals how fragile or robust a conclusion is. An assessment that depends on one piece of evidence is a flag, not a finding.
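The leave-one-out check can be sketched the same way. One caveat: a plain inconsistency count can't see E11's supporting role (E11 carries no “I” marks), so this toy version flags only the evidence whose removal changes which hypotheses survive (here E6 and E9). Treat it as one input to the judgement, not the judgement itself; the ratings mirror the worked example's inconsistencies and are otherwise illustrative.

```python
hypotheses = ["H1", "H2", "H3", "H4", "H5"]

# Partial ACH matrix, keeping only the marks that matter for ranking;
# ratings mirror the worked example and are illustrative.
matrix = {
    "E2":  {"H1": "C", "H3": "I"},   # Feed A attribution
    "E3":  {"H2": "C", "H3": "I"},   # Feed B attribution
    "E6":  {"H1": "I", "H2": "I"},   # no exploitation seen
    "E9":  {"H4": "I"},              # small data volume
    "E11": {"H3": "C", "H5": "C"},   # vendor update timing
}

def survivors(m):
    """Hypotheses with the fewest 'I' marks in matrix m."""
    counts = {h: 0 for h in hypotheses}
    for ratings in m.values():
        for h, mark in ratings.items():
            if mark == "I":
                counts[h] += 1
    best = min(counts.values())
    return sorted(h for h, c in counts.items() if c == best)

baseline = survivors(matrix)               # ['H5']
fragile = {}
for ev in matrix:                          # leave one evidence item out
    reduced = {e: r for e, r in matrix.items() if e != ev}
    if survivors(reduced) != baseline:
        fragile[ev] = survivors(reduced)

print(baseline, fragile)
```

An assessment whose survivor set flips on a single evidence item is, as above, a flag, not a finding.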
Step 7: Write the assessment with appropriate confidence
“We assess with moderate confidence that the observed connections are related to a third-party software update rather than adversary activity (H5). This assessment is primarily supported by the timing correlation between the vendor update and the connection onset, the absence of vulnerability exploitation, and the affected endpoints sharing the same third-party software. This assessment is sensitive to the vendor update timing — if the update is confirmed not to have introduced the observed network behaviour, H1 (Chinese APT) and H2 (commodity botnet) should be reassessed with elevated priority.”
Compare that to: “Feed A says it's a Chinese APT so it's probably a Chinese APT.”
Same data. Dramatically different analytical rigour — and dramatically more useful to the people making decisions based on it.
Integrating ACH into daily workflow
“This looks great for a complex investigation, but who has time for this every day?”
Fair. Here's how it scales:
- For routine triage: Run a mental mini-ACH. Force yourself to generate at least three hypotheses before committing to one. It takes 60 seconds and catches the worst anchoring errors.
- For significant assessments: Full ACH with a matrix. Takes 30–60 minutes depending on complexity. Any assessment that will influence resource allocation or executive decisions deserves this level of rigour.
- For attribution: Always use full ACH. Attribution assessments are the most bias-prone analysis in CTI, and they carry the highest consequences when wrong.
- Build it into peer review: When reviewing another analyst's work, don't just check for accuracy — check for alternative hypotheses they didn't consider. The most valuable peer review question: “What else could explain this?”
Beyond ACH: the broader toolkit
ACH is the most widely applicable technique, but it's not the only one. The intelligence community has developed dozens of structured analytic techniques over decades. Key ones for CTI practitioners:
- Key assumptions check — Before any analysis, list assumptions explicitly. Which are based on evidence? Which are based on habit? What happens if each is wrong?
- Devil's advocacy — Formally assign someone to argue against the prevailing assessment. The goal isn't to be contrarian — it's to stress-test the analysis.
- Pre-mortem analysis — “Imagine this assessment turns out to be completely wrong in six months. What happened?” Surfaces risks that standard analysis misses.
- Red team analysis — Adopt the adversary's perspective. “If I were this threat actor, what would I do next?”
For a deeper exploration of these techniques and the cognitive biases they address, see: Cognitive Biases in CTI: Structured Techniques for Sharper Analysis.
Richards Heuer and Randy Pherson's Structured Analytic Techniques for Intelligence Analysis is the definitive reference. Katie Nickels' widely cited CTI self-study plan also points practitioners toward these resources — they're foundational, not optional extras.
The real bottleneck

The reason most CTI teams don't use structured analytic techniques isn't ignorance. It's time. When the day is consumed by processing feeds, writing the morning brief, and responding to ad hoc requests, a hypothesis matrix feels like a luxury.
This is the same structural problem running through this series. The collection and processing grind eats the hours that should go to analysis. The fix isn't “try harder” — it's removing time-consuming rote work from analysts' plates so they can do the thinking that actually matters. That rebalancing — automating collection and processing to create space for structured analysis — is central to what Liberty91 is building.
Because when analysts have time to think — with structured tools and disciplined process — the quality of intelligence transforms. And so do the decisions it supports.
This is Part 3 of the “CTI from the Trenches” series. ← Previous: How to Build a Collection Plan in 5 Steps | Next: Cognitive Biases in CTI →


