Cognitive Biases in CTI: Structured Techniques for Sharper Analysis
CTI analysts are, as a rule, among the sharpest people in any security organisation. The problem isn't how well they think. It's that human cognition has built-in shortcuts — shortcuts that evolved for quick survival decisions, not for assessing whether a set of network indicators represents a targeted intrusion or a commodity campaign.
These shortcuts are called cognitive biases, and they systematically shape the intelligence every team produces. Not occasionally. In every assessment that relies on human judgment — which is all of them.
The good news: the intelligence community spent decades developing structured techniques specifically to counteract these biases. The tradecraft exists. It's been battle-tested in national security, law enforcement, and military intelligence. And most of it transfers directly to CTI.
Three biases do more damage in threat intelligence than all others combined. Understanding them is the first step toward building analytical processes that compensate.
Confirmation bias: the evidence filter you don't notice
Confirmation bias is the tendency to seek, interpret, and remember information that confirms existing beliefs. It's the most studied cognitive bias, the most pervasive, and the hardest to eliminate — because it operates below conscious awareness.
Here's what it looks like in practice:
A team has been tracking a financially motivated threat actor targeting retail organisations. One morning, a SOC alert flags unusual activity sharing some characteristics with this group's known TTPs. The analyst on duty has been researching this actor for weeks. They see similarities. They write up a preliminary assessment: probable intrusion by the tracked group.
Now the bias kicks in. Every subsequent piece of evidence gets evaluated through the lens of that assessment. A command-and-control domain with a similar registration pattern? Consistent. Network traffic timing that aligns with the actor's known working hours? Consistent. The fact that the same TTP is used by dozens of other groups? Mentioned but not weighted appropriately. The fact that the targeted system doesn't match the actor's known victimology? Noted as “possible expansion of targeting.”

None of this is sloppy analysis. It's human cognition doing what it does. The brain is a pattern-matching machine, and once it locks onto a pattern, breaking the lock takes disproportionate evidence. Research in experimental psychology suggests people require roughly twice as much evidence to abandon a hypothesis as they needed to adopt it in the first place.
In CTI, confirmation bias is most dangerous in attribution. Once an analyst attributes activity to a specific group, every new data point gets filtered through that attribution. Contradicting evidence gets explained away. Supporting evidence gets amplified. The assessment calcifies.
The structured antidote: Analysis of Competing Hypotheses (ACH), developed by Richards Heuer at the CIA specifically to combat this bias. ACH forces the generation of multiple hypotheses upfront, then evaluates each piece of evidence against all of them — not just the favourite. For a full worked example with a matrix template, see the companion post: Analysis of Competing Hypotheses: A Step-by-Step Guide for CTI.
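To make the mechanics concrete, here is a minimal sketch of an ACH consistency matrix in Python. The hypotheses, evidence items, and scores are illustrative placeholders rather than a real case — the point is the shape of the method: score every item against every hypothesis, then look for the hypothesis with the least disconfirming evidence, not the most confirming.

```python
# Minimal ACH consistency-matrix sketch. All hypotheses, evidence items,
# and scores are illustrative placeholders, not a real investigation.

# Scoring convention: +1 consistent, 0 neutral/ambiguous, -1 inconsistent.
CONSISTENT, NEUTRAL, INCONSISTENT = 1, 0, -1

hypotheses = [
    "H1: Intrusion by the tracked financially motivated actor",
    "H2: Commodity campaign by an unrelated actor",
    "H3: Deliberate imitation of the tracked actor's TTPs",
]

# Each evidence item is scored against every hypothesis, in the same order.
evidence = {
    "C2 domain registration pattern":        [CONSISTENT, CONSISTENT, CONSISTENT],
    "Traffic timing matches working hours":   [CONSISTENT, NEUTRAL, CONSISTENT],
    "TTP shared by dozens of other groups":   [NEUTRAL, CONSISTENT, CONSISTENT],
    "Victim outside the actor's victimology": [INCONSISTENT, CONSISTENT, NEUTRAL],
}

def inconsistency_count(hypothesis_index: int) -> int:
    """ACH weighs disconfirmation: count evidence inconsistent with a hypothesis."""
    return sum(
        1 for scores in evidence.values()
        if scores[hypothesis_index] == INCONSISTENT
    )

for i, hypothesis in enumerate(hypotheses):
    print(f"{hypothesis}: {inconsistency_count(i)} inconsistent item(s)")
```

Note what the matrix does to the retail-intrusion scenario above: the evidence that felt so "consistent" with the favoured hypothesis is just as consistent with the alternatives, while the victimology mismatch counts against it and against nothing else.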
Anchoring: the first answer wins (unless you intervene)
Anchoring is the tendency to over-rely on the first piece of information encountered when making judgments. In CTI, the first vendor report, the first attribution, or the first hypothesis sets a reference point that all subsequent analysis is measured against.
A fraud network investigation at a national cybercrime unit illustrates this. The first significant lead pointed to Eastern European operators. Everything found subsequently was interpreted relative to that anchor. When evidence of infrastructure in Asia surfaced, it was classified as “proxy infrastructure used by the Eastern European operators” rather than prompting consideration that the operators might actually be in Asia.
The hypothesis turned out to be correct — but by accident, not by method. The analytical process would have produced the same conclusion regardless of where the operators actually were. That's how you know anchoring is driving the analysis: the conclusion doesn't change when the evidence should change it.
In CTI, anchoring manifests most clearly in how analysts respond to vendor reports. When a major vendor publishes an attribution, that becomes the anchor for every subsequent assessment involving similar indicators. Independent analysis effectively stops. The question shifts from “what does the evidence tell us?” to “does the evidence match what the vendor said?”
The structured antidote: A key assumptions check. Before starting any analysis, write down assumptions explicitly. Where did each come from? Is it based on evidence, or on something that was read or heard? Then the critical question: “If this assumption were wrong, how would the analysis change?”
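If it helps to keep the check from being skipped, it can be captured as structured data alongside the assessment. The sketch below is one possible shape, with invented example assumptions; the fields mirror the three questions above.

```python
# A minimal sketch of a key assumptions check as structured data.
# The example assumptions are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str   # the assumption, written down explicitly
    basis: str       # where it came from: evidence, vendor reporting, intuition?
    if_wrong: str    # how the analysis would change if this were false

assumptions = [
    Assumption(
        statement="The operators are located in Eastern Europe",
        basis="First significant lead early in the investigation",
        if_wrong="Asian infrastructure is primary, not proxy; re-score all evidence",
    ),
    Assumption(
        statement="The actor's targeting is sector-based",
        basis="Vendor reporting on prior campaigns",
        if_wrong="Risk elevation applies to all regions, not just our sector",
    ),
]

# Flag assumptions that rest on something other than direct evidence.
for a in assumptions:
    if "evidence" not in a.basis.lower():
        print(f"Challenge: {a.statement!r} (basis: {a.basis})")
```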
Availability heuristic: when what's memorable beats what's likely

The availability heuristic is the tendency to judge the probability of events based on how easily examples come to mind. Recent events, dramatic events, and personally experienced events all feel more probable than they actually are.
After a major supply chain attack, every ambiguous indicator looks like a supply chain compromise. After a ransomware incident hits the organisation, the probability of ransomware feels elevated even if the baseline hasn't changed. After reading a detailed report on a North Korean APT campaign, every anomaly in East Asian traffic becomes more suspicious.
This bias is particularly damaging in threat prioritisation. When a CTI team advises the security programme on where to focus resources, the availability heuristic systematically distorts the threat model. Whatever happened recently gets overweighted. Whatever happened personally gets overweighted even more. And threats that are genuinely probable but haven't manifested dramatically get underweighted.
The structured antidote: Base rate analysis. Before assessing a specific threat, establish the baseline: “What is the historical probability of this type of attack against organisations like mine?” Use data — the Verizon DBIR, internal incident history, sector-specific reporting — to establish the base rate. Then adjust from that baseline based on specific evidence, not on how vivid recent examples feel.
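One way to make "adjust from that baseline" concrete is a simple Bayesian update. The sketch below uses invented numbers, not real DBIR or incident-history figures; the point is that the estimate starts from the base rate and moves only as far as the specific evidence justifies.

```python
# Base rate analysis sketch: anchor to a historical baseline, then adjust
# with a Bayesian update. All numbers below are illustrative placeholders.

def posterior(base_rate: float,
              p_evidence_if_true: float,
              p_evidence_if_false: float) -> float:
    """P(threat | evidence) via Bayes' rule, starting from the base rate."""
    p_evidence = (p_evidence_if_true * base_rate
                  + p_evidence_if_false * (1.0 - base_rate))
    return (p_evidence_if_true * base_rate) / p_evidence

# Historical baseline for this attack type against similar organisations,
# e.g. derived from sector reporting and internal incident history.
base_rate = 0.05

# How likely is the specific evidence observed this quarter under each case?
p_if_threat_real = 0.60   # fairly likely if the threat is genuinely elevated
p_if_background = 0.20    # but also plausible as background noise

updated = posterior(base_rate, p_if_threat_real, p_if_background)
print(f"Base rate: {base_rate:.0%} -> updated estimate: {updated:.0%}")
# Roughly 14% with these inputs: elevated relative to the baseline, but far
# below what the vividness of a recent incident might make it feel.
```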
The full structured toolkit
ACH, key assumptions checks, and base rate analysis address the three core biases. But the intelligence community developed a broader menu of structured analytic techniques over decades:
- Analysis of Competing Hypotheses — Tests evidence against multiple hypotheses; focuses on eliminating rather than confirming. Use for attribution, incident analysis, any high-stakes assessment.
- Key assumptions check — Forces explicit listing and testing of underlying assumptions. Use before any strategic assessment.
- Devil's advocacy — Formally assigns someone to argue the opposite position. Use for team assessments where consensus forms quickly.
- Pre-mortem analysis — “Imagine this assessment is wrong in six months — what happened?” Use before finalising threat forecasts.
- Red team analysis — Adopt the adversary's perspective. Use for threat modelling and proactive defence planning.
- Base rate analysis — Anchors probability estimates to statistical baselines. Use for threat prioritisation and resource allocation.
For a hands-on walkthrough of ACH — the most widely applicable technique on this list — see: Analysis of Competing Hypotheses: A Step-by-Step Guide for CTI.
The difference between analysis and summarisation
Post 1 of this series drew the line between intelligence and information. Analytical tradecraft is what actually bridges the gap.
Summarisation says: “Threat actor X was observed using technique Y against sector Z in Q1 2026.”
Analysis says: “Based on the observed campaign cadence, targeting pattern, and our organisation's overlap with the victim profile, we assess with moderate confidence that our European operations face elevated risk from this actor over the next 60 days. This assessment is sensitive to the assumption that the actor's targeting criteria are sector-based rather than opportunistic — if opportunistic, the risk elevation applies to all regions equally. We recommend prioritising detection coverage for technique Y and conducting a tabletop exercise simulating initial access via this vector.”
The difference isn't length. It's judgment, context, confidence calibration, and actionability.
Why CTI largely skipped this tradecraft
The CTI discipline grew fast. It emerged from security operations — a world where the operational tempo doesn't naturally accommodate structured analysis. SOC analysts need answers now. Incident responders need indicators now. The morning threat brief is due now.
But the choice between speed and quality is a false one, created by an operational model that hasn't evolved. When analysts spend the majority of their time on collection and processing, there's no time left for structured analysis. The fix isn't to accept weaker analysis. The fix is to reclaim the time by automating the work that doesn't require human judgment — so the human judgment gets applied where it matters most.
That rebalancing is what Liberty91 is designed around: not replacing the analyst's thinking, but removing the mechanical workload that crowds it out.
Building tradecraft into process
Knowing about biases doesn't fix them. (That's actually another bias — the bias blind spot, the belief that awareness of biases confers immunity. It doesn't.)
What works is process:
- Build ACH into attribution workflows. Any assessment that names a threat actor should have competing hypotheses documented — even if one is overwhelmingly likely.
- Require key assumptions checks before strategic assessments. Write them down. Review them quarterly. Track which assumptions turned out to be wrong and why.
- Mandate analytical peer review. Not proofreading — analytical challenge. The reviewer's job is to ask “what else could explain this?” and “what would change your mind?”
- Study the tradecraft. Richards Heuer's Psychology of Intelligence Analysis is available free from the CIA's Center for the Study of Intelligence. The UK's PHIA analytical standards are also freely available and immediately practical.
- Ask one question before every assessment: “What evidence would change my conclusion?” If nothing comes to mind, the process is confirming rather than analysing.
The tradecraft gap in CTI isn't about talent. The analysts are sharp. It's about training, time, and process. Closing that gap — through structured techniques, peer review, and the operational space to actually apply them — is how the field produces better intelligence. Not more intelligence. Better.
This is Part 3b of the “CTI from the Trenches” series. ← Previous: Analysis of Competing Hypotheses | Next: Threat Actor Profiling for Defenders →


