
The 80/20 Problem in Cyber Threat Intelligence (and Why It’s Structural)

Ask a room full of CTI analysts what they actually spent yesterday doing, and the answers are remarkably consistent. Scanning vendor blogs. Deduplicating feeds. Formatting a report. Tagging indicators. Updating a spreadsheet. Chasing a false positive. Cleaning up a STIX bundle. Waiting for someone else to clean up a STIX bundle.

Actual analysis — the part where you sit with a problem, weigh the evidence, form a view, and write something that helps someone make a decision — usually shows up later in that list, if at all. Often it's the work you meant to do after you'd finished clearing the queue. And the queue, in practice, does not finish.

This is the 80/20 problem. Roughly 80% of a typical CTI analyst's day goes to work that doesn't require human intelligence — collection, processing, formatting, deduplication, triage, report assembly. The remaining 20% is what's left for the work the analyst was actually hired for. It's the single most important thing to understand about how modern threat intelligence operations work, and in my view it's the structural challenge behind almost every other complaint CTI teams have about their output, their impact, and their careers.

It's also the structural challenge that Liberty91 was built around. Before getting to that, though, it's worth slowing down and looking at the problem carefully — because a lot of the conversation around it misdiagnoses what's actually going wrong.

Where the 80% actually goes

When I first ran into this pattern at Barclays, my instinct was to think I was doing it wrong. Surely experienced analysts spent most of their day analysing. Surely the collection and formatting work was the stuff you'd automate away once you got organised. That turned out to be wrong on both counts. As I met more teams — at Mandiant, across the Middle East, at ISACs, at peer networks — the same breakdown kept showing up. It's not a skills problem. It's what the job is made of.

A partial taxonomy of where the 80% goes, in no particular order:

  • Collection. Monitoring dozens of vendor blogs, RSS feeds, vulnerability databases, CERT advisories, dark web forums, social media, premium intel portals, and the mailing lists that nobody has quite got around to migrating off.
  • Processing. Deduplicating overlapping reports from different vendors, normalising formats, reconciling conflicting attributions, checking timestamps, extracting IOCs by hand when the source doesn't publish them structurally (the sketch after this list shows the shape of that glue work).
  • Filtering for relevance. Of the hundred things that landed in the inbox this morning, which three matter to this specific organisation? The answer requires reading all hundred, at least briefly, to know which three to keep. That reading time counts.
  • Context assembly. Pulling together the background a reader will need. What has this actor done before? What does this technique look like in our environment? What have we said about this topic in past reports?
  • Formatting and distribution. Writing the report, adjusting it for the audience, generating the PDF, updating the portal, posting to the ticketing system, sending the Slack message, following up on the email.
  • Tool maintenance. Onboarding new sources, fixing broken integrations, writing the glue code for the feeds that don't speak STIX, managing API keys, updating the TIP taxonomy, explaining why the SIEM integration is down again.
  • Administrative overhead. Status updates, ticket management, capacity planning, vendor reviews, compliance inputs, the quarterly metrics report that takes an entire day.
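
To make the processing line concrete: below is a minimal sketch, in Python, of the kind of glue code that work involves, pulling indicators out of prose reports and deduplicating across overlapping vendor write-ups. The regexes and function names are illustrative, not production-grade — no defang handling, no allowlisting, no validation.

```python
import re

# Minimal sketch of the "processing" glue work: extract network indicators
# from prose reports and deduplicate across vendors. Patterns are
# illustrative only, not production-grade.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE),
}

def extract_iocs(report_text: str) -> set[tuple[str, str]]:
    """Return (ioc_type, value) pairs found in a prose report."""
    found = set()
    for ioc_type, pattern in IOC_PATTERNS.items():
        for match in pattern.findall(report_text):
            found.add((ioc_type, match.lower()))
    return found

def merge_reports(reports: list[str]) -> set[tuple[str, str]]:
    """Union of indicators across overlapping vendor reports."""
    merged = set()
    for text in reports:
        merged |= extract_iocs(text)
    return merged
```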

None of that is bad work. All of it genuinely needs to happen. The problem isn't that analysts are doing the wrong things — it's that the things they're doing consume almost all of the day, leaving little room for the one type of work the team is ultimately judged on: analysis that supports decisions.

Why “just hire more analysts” doesn't fix it

The instinctive response to a team running at capacity is to grow the team. And it does help at the margin. But the proportion doesn't change much, because the work scales with the team rather than with the output. Hire a second analyst, and you get two people doing the 80% in parallel — with new overhead for coordinating between them, deduplicating each other's work, and handing off on-call. Add a third, and you get formal handovers, meetings, and a shared taxonomy that someone needs to maintain.

It's not that larger teams are worse — of course they're not. It's that the ratio between data-plumbing work and analytical work doesn't improve with headcount. The 80/20 split tends to persist through team growth, because the structure of the work is the same. Every new analyst brings their own 80% with them.

This is also why “work harder on analysis” isn't the answer. It's the advice analysts have been giving themselves for a decade, and it hasn't shifted the ratio. Effort isn't the bottleneck. The operational model is.

The consequences of the split

If this were only a productivity complaint, it wouldn't matter much. But the 80/20 split shapes almost everything else CTI teams struggle with, and those consequences are worth being explicit about.

Strategic intelligence rarely gets produced. The three intelligence levels — strategic, operational, tactical — run from least to most urgent: tactical intelligence has the shortest shelf life and the loudest stakeholders, so it dominates output. Strategic intelligence, the kind that actually shifts board-level decisions, gets pushed to “when we have time,” which never arrives. The three-level hierarchy is covered in more depth in the intelligence lifecycle post.

Direction is skipped. Priority Intelligence Requirements — the formal, documented questions the business needs answered — take time to write and more time to keep current. Under the 80/20 split, that time almost never exists, so PIRs either don't get written or drift out of date. The downstream effect is that collection drifts toward whatever's in the news rather than what the organisation needs. This is covered in the collection planning post.

Feedback loops don't close. Knowing whether an intelligence product was useful requires going back to stakeholders after the fact and asking. That conversation takes time, and it's almost always the first thing to get cut when the queue is full. The result is that teams produce reports into silence, and can't tell which of their work is landing.

Analytical quality takes a hit. Structured analytic techniques like Analysis of Competing Hypotheses and bias-countering tradecraft take more time than quick-and-dirty assessment. Under pressure, the structured approaches get skipped, which is exactly when cognitive biases have the biggest impact on the work.
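
For readers who haven't used it, here is a toy sketch of the ACH mechanics, with invented hypotheses, evidence, and scores: every piece of evidence is weighed against every hypothesis, and the hypothesis with the least inconsistent evidence survives. That exhaustiveness is exactly why the technique needs time.

```python
# Toy sketch of ACH scoring. Values are invented for illustration:
# -1 = evidence inconsistent with the hypothesis, 0 = neutral, +1 = consistent.
# ACH rejects the hypothesis with the MOST inconsistent evidence, rather
# than keeping the one with the most support.
hypotheses = ["financially motivated crimeware", "state-sponsored espionage"]

evidence = {
    "ransom note deployed on victims":      [+1, -1],
    "custom zero-day exploited":            [-1, +1],
    "infrastructure reused from known kit": [+1, -1],
}

for i, hypothesis in enumerate(hypotheses):
    inconsistent = sum(1 for scores in evidence.values() if scores[i] < 0)
    print(f"{hypothesis}: {inconsistent} inconsistent item(s)")
# Here crimeware has 1 inconsistency and espionage has 2, so the crimeware
# hypothesis survives despite the eye-catching zero-day.
```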

Careers stall. The analysts who thrive in the field are the ones who get to do the 20% work and build the experience that goes with it. Analysts stuck in the 80% — however hard they're working — end up feeling like they're not developing, because the growth happens on the other side of the split.

Burnout. There's a specific flavour of burnout that comes from knowing you're not doing the work you trained for, while running flat out on work that feels replaceable. The CTI community has been quietly carrying this for years. It's one of the things I care most about changing.

What AI actually does to the 80/20 split

This is the part of the conversation where AI usually enters, and where it's easy to overclaim in either direction. A lot of vendor positioning implies AI will replace the CTI analyst. A lot of analyst positioning implies AI can't help with anything that matters. Neither is true, and the honest picture is much more useful than either caricature.

Here's the framing that has held up well across the transition from first-generation ML to modern LLMs and agent architectures: AI is extremely good at the 80%, and not very good at the 20%. That's not a hedge. It's a precise statement about what the technology currently does well, and it maps neatly onto the structural problem.

What AI does well in CTI:

  • Collection at scale. Continuously monitoring hundreds of sources in parallel without getting tired or context-switching. This is the single biggest lever, because it's where the most human time goes.
  • Filtering for relevance. Given a well-articulated description of what the organisation cares about, modern models are remarkably good at discarding the 90-plus percent of inputs that aren't relevant (see the sketch after this list).
  • Structuring unstructured data. Pulling entities, IOCs, TTPs, and ATT&CK mappings out of prose reports in a format the downstream tooling can consume.
  • Initial triage. Categorising, prioritising, and routing. Not the final word, but a very strong first pass that saves the analyst from having to do the first pass themselves.
  • Draft generation. Producing the first version of a report, an IOC list, or a detection rule. The analyst edits rather than starts from blank.
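
As a concrete illustration of the filtering point, here is a hedged sketch of PIR-based triage. The PIR texts, the prompt, and the call_llm function are all assumptions, stand-ins for whichever model API a team actually uses; the point is the shape of the first pass, not the specific wording.

```python
import json

# Hedged sketch of PIR-based relevance triage. `call_llm` is a hypothetical
# stand-in for whatever model API the team uses; it is assumed to take a
# prompt string and return the model's text output. PIRs are invented.
PIRS = [
    "Ransomware activity targeting UK financial services",
    "Supply-chain compromises affecting our SaaS vendors",
]

PROMPT_TEMPLATE = """You are a CTI triage assistant.
Intelligence requirements:
{pirs}

Report:
{report}

Answer with JSON only: {{"relevant": true or false, "matched_pir": "<requirement text or null>"}}"""

def triage(report_text: str, call_llm) -> dict:
    """First-pass relevance check; the analyst still makes the final call."""
    prompt = PROMPT_TEMPLATE.format(
        pirs="\n".join(f"- {p}" for p in PIRS),
        report=report_text,
    )
    return json.loads(call_llm(prompt))
```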

What AI does not do well, and probably won't in the time horizons that matter for current planning:

  • Hypothesis development under uncertainty. The creative act of saying “what if it's actually the opposite of what the report suggests?” is not something current models reliably initiate on their own.
  • Intent assessment. Figuring out why an adversary is doing what they're doing — as opposed to what they're doing — requires judgment about motivation, context, and strategic signalling that AI handles poorly.
  • Relationship-based intelligence. The phone call from a peer who just saw the same thing. The ISAC conversation that never gets written down. The quiet tip from a researcher who trusts you. This is the most valuable intelligence most analysts will ever have access to, and it runs on human trust.
  • Knowing when the data is wrong. AI will confidently extract an IOC from a badly-written report that a human analyst would recognise as an error. Editorial judgment is still a human job.
  • Strategic judgment. The sort of assessment that goes to an executive and says “this matters, and here's what we should do about it” requires context and accountability that a model can't provide.

Notice the pattern. The list of things AI does well is almost exactly the 80%. The list of things AI doesn't do well is almost exactly the 20%. That's not a coincidence — it's what “generative AI” and “agentic AI” mean in operational terms. They handle the bulk manipulation of text-heavy data. They do not replicate human judgment under uncertainty.

What changes when the operational model flips

If the 80% shifts to automation and the analyst is left with the 20%, several things happen at once, and most of them are better than teams initially expect.

Strategic output becomes possible. The reports that used to get deferred because there was no time start getting written, because now there is time. This is where the career-defining work lives.

PIRs get written and maintained. Direction becomes something the team actually practices, which tightens the whole downstream loop. Collection gets sharper. Outputs get more relevant. Stakeholders start asking for more.

Structured techniques come back. ACH, key assumptions checks, and premortems need time — time that now exists. The quality of analytical judgment goes up measurably, because the structure that produces good judgment is being used.

Feedback loops close. The analyst has capacity to go back to the CTO and ask whether last week's report was useful. That conversation, in turn, makes the next report more useful. Over months, the team's credibility with stakeholders compounds.

Analysts develop. People who were stuck keeping the mill running get to practise the craft they were hired to practise. The growth curve that was flat starts bending upward again.

Burnout eases. Not because there's less work — there is still plenty — but because the work is the work the analyst signed up for. That distinction matters a lot.

A caveat, because this sounds like a magic wand and isn't: the transition is not free. Teams that flip the model have to re-learn what to do with reclaimed time. Analysts who are used to being measured on volume have to adjust to being measured on judgment. Stakeholders need to see the new outputs before they trust them. And the automation itself needs supervision, because AI that's left unchecked will cheerfully hallucinate IOCs with high confidence. The operational model gets better, but it gets different, and the change is real.
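
To make that supervision point concrete, here is a minimal sketch of the sort of guardrail it implies, assuming machine-extracted "IP indicators" are sanity-checked before anything downstream trusts them. The rules are illustrative; real pipelines add allowlists, defang handling, and a human review queue.

```python
import ipaddress

# Sketch of a guardrail for machine-extracted indicators: flag anything
# that should go to an analyst instead of straight into the TIP.
# Rules are illustrative, not exhaustive.
COMMON_FALSE_POSITIVES = {"8.8.8.8", "1.1.1.1"}  # public resolvers, etc.

def ip_needs_review(candidate: str) -> bool:
    """Return True if an extracted 'IP indicator' needs a human look."""
    try:
        addr = ipaddress.ip_address(candidate)
    except ValueError:
        return True  # not a valid IP at all: likely a garbled extraction
    if addr.is_private or addr.is_reserved or addr.is_loopback:
        return True  # RFC1918 / reserved space is almost never a usable IOC
    return candidate in COMMON_FALSE_POSITIVES
```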

How Liberty91 approaches it

All of the above is the reason Liberty91 exists, and it's the frame I use for almost everything the platform does. The product goal is simple to state and harder to deliver: take as much of the 80% off the analyst's plate as possible, in a way that's safe enough to trust, so the analyst can spend the reclaimed time on the 20%.

Concretely, that means the platform continuously monitors hundreds of sources against the PIRs and threat profile the team defines, filters for relevance at the source level, extracts structured intelligence (IOCs, TTPs, ATT&CK mappings) automatically, drafts reports in the formats different stakeholders need, and notifies only when the analyst needs to be notified. The analyst is in the loop at the judgment points — not in the loop for every feed that updated overnight.
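
For readers who think in code, a conceptual sketch of that loop follows. To be clear, this is not Liberty91's implementation; every function here is a hypothetical stub. It exists only to show where the analyst sits once the 80% is automated.

```python
from queue import Queue

# Conceptual sketch of the flipped operational model. NOT Liberty91's
# actual implementation: all functions are hypothetical stubs. The point
# is WHERE the analyst sits, not the logic inside the stubs.

def collect(sources):
    """Stub for continuous, automated collection across many sources."""
    for source in sources:
        yield {"source": source, "text": f"example report from {source}"}

def is_relevant(item, pirs):
    """Stub for the automated relevance gate (see the triage sketch above)."""
    return any(pir.lower() in item["text"].lower() for pir in pirs)

def extract_structured(item):
    """Stub for automatic extraction of IOCs, TTPs, and ATT&CK mappings."""
    return {"source": item["source"], "iocs": [], "ttps": []}

def draft_report(intel):
    """Stub for first-draft generation; the analyst edits rather than writes."""
    return f"DRAFT: findings from {intel['source']}"

def run_pipeline(sources, pirs, analyst_queue: Queue):
    """Automation owns the 80%; the queue is where human judgment begins."""
    for item in collect(sources):
        if not is_relevant(item, pirs):
            continue
        intel = extract_structured(item)
        analyst_queue.put((intel, draft_report(intel)))
```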

The philosophy is worth stating explicitly: the analyst is the centre of the system, not the thing being replaced. Liberty91 is designed around reclaiming analyst time for the work only they can do, not around removing the analyst from the workflow. That distinction shows up in how the platform surfaces decisions, how it handles ambiguity, and how it logs its own reasoning so the analyst can audit it.

For teams that want to see how this shows up in day-to-day use, the solutions pages walk through the workflows by role. For the theory behind why the operational model matters, the CTI from the Trenches series covers the full picture — much of it was written, in part, to make this post make sense.

What I'd want a CTI lead reading this to take away

Three things, if the rest of this post ends up blurred together in memory.

First, if your team feels like it's running hard and not getting to the work that matters, that's not a failure of discipline. It's the structure of CTI work under the current operational model. You're not doing it wrong.

Second, the lever that actually moves the ratio is changing what the analyst's time is spent on, not how hard they work. Hiring helps at the margin. Tooling that takes the 80% off the plate moves the ratio.

Third, AI is not the enemy of the CTI analyst. It's the enemy of the parts of the CTI analyst's job that nobody trained for and nobody enjoys. The craft — the judgment, the writing, the relationships, the stakeholder work — is the part that stays, and gets more room to breathe.

If any of this sounds familiar from your own team's Monday mornings, I'd love to hear about it. The 80/20 problem is the thing I've spent the last few years thinking hardest about, and the more honest conversations the community has about it, the better the next few years of CTI will be for everyone.

Further reading

This post is the structural frame for the rest of the CTI from the Trenches series. If you want to go deeper, the intelligence lifecycle and collection planning posts referenced above are the natural next steps.
