
How to Evaluate a TIP: A Practitioner's Checklist.

This post draws on experience evaluating threat intelligence platforms as a buyer at a major bank, advising on platform selection as a consultant, and now building one. Across all of those experiences, the same pattern emerges: teams spend months evaluating platforms against feature lists, pick the one with the most ticks, and then spend the next two years fighting with a tool that doesn't fit how they actually work.

This post is the evaluation guide that would have been invaluable before that first TIP procurement. It's not a product comparison — it won't tell you which platform to buy. It's a methodology for working out which platform is right for your team, your stakeholders, and your operating model.

Step 1: Define what you need before you look at what's available

This is the step everyone skips and everyone regrets skipping.

Before you open a single vendor website, answer these questions:

What intelligence do you produce? List your actual outputs. Daily briefings? Tactical alerts? Strategic assessments? Incident support? RFIs from stakeholders? Be specific about frequency, format, and audience.

Who consumes your intelligence? SOC? IR? Vulnerability management? CISO? Board? Each audience needs different things. Your platform needs to support all of them, or at least not obstruct any of them.

What are your PIRs? If you don't have documented Priority Intelligence Requirements, stop evaluating tools and go write them. A TIP without PIRs is a database without a query. As discussed in the first post of this series, without Direction, you're producing information, not intelligence.

Where does your time go today? Track it for a week. How many hours on collection? Processing? Analysis? Reporting? Administration? The platform should compress the biggest time sinks.

What are your integrations? SIEM, SOAR, ticketing system, email, collaboration tools. Separate the non-negotiable integrations from the nice-to-haves. This list will eliminate candidates faster than any feature comparison; the sketch below shows how mechanical that step can be.
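A minimal sketch of that elimination step, assuming you've written the Step 1 answers down as data. The platform names and integration lists are invented placeholders:

```python
# Sketch: encode Step 1 answers as data, then eliminate candidates mechanically.
# All platform names and integration lists here are invented placeholders.

MUST_HAVE = {"splunk", "jira", "slack"}      # non-negotiable integrations
NICE_TO_HAVE = {"ms-teams", "servicenow"}    # absence is a ding, not a veto

candidates = {
    "PlatformA": {"splunk", "jira", "slack", "ms-teams"},
    "PlatformB": {"splunk", "servicenow"},   # no jira, no slack
    "PlatformC": {"splunk", "jira", "slack"},
}

for name, integrations in candidates.items():
    missing = MUST_HAVE - integrations
    if missing:
        print(f"{name}: eliminated (missing {', '.join(sorted(missing))})")
    else:
        bonus = len(NICE_TO_HAVE & integrations)
        print(f"{name}: shortlist ({bonus} nice-to-have(s) covered)")
```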


Step 2: Map the market honestly

The CTI tools landscape in 2026 broadly breaks into five categories. Understanding these helps you know where to look.

Full-platform TIPs — Recorded Future, Anomali, ThreatConnect, and others — aim to be your single pane of glass for intelligence management. Strengths: comprehensive workflow, established integrations, vendor support. Trade-offs: complexity, cost, and the risk that you're paying for capabilities you don't use.

Open-source platforms — MISP and OpenCTI lead here. Strengths: flexibility, community-driven development, no licence cost, strong sharing capabilities. Trade-offs: deployment complexity, maintenance burden, and a learning curve that's steeper than most will admit. MISP in particular has become the lingua franca of intelligence sharing, and any TIP you choose should integrate with it.
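If MISP interoperability is a must-have, test it during the trial rather than trusting the datasheet. A minimal sketch using the open-source pymisp library, assuming a reachable MISP instance; the URL and API key are placeholders:

```python
# Sketch: verify basic MISP interoperability with pymisp.
# MISP_URL and MISP_KEY are placeholders for your own instance and API key.
from pymisp import PyMISP

MISP_URL = "https://misp.example.org"
MISP_KEY = "YOUR_API_KEY"

misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)

# Search for attributes matching an indicator a candidate TIP claims to sync.
results = misp.search(controller="attributes", value="198.51.100.23")

for attr in results.get("Attribute", []):
    print(attr["event_id"], attr["type"], attr["value"])
```

If the candidate platform can't round-trip what this query sees, its "MISP integration" is a checkbox, not a capability.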

SOAR platforms with TIP capabilities — Cortex XSOAR, Splunk SOAR, Tines, and others. These started as orchestration tools but increasingly incorporate intelligence management. Strengths: automation-first design, strong integration frameworks. Trade-offs: the intelligence data model is often bolted on rather than native, and analytical features tend to be thinner.

AI-native platforms — the newest category, including Liberty91. These are built with AI and automation at the core rather than added on top. Strengths: designed for the 80/20 problem, strong at collection and processing automation. Trade-offs: newer products with less market validation, and AI capabilities that vary significantly in maturity.

Point solutions — Shodan, VirusTotal, GreyNoise, Abuse.ch, ANY.RUN, and many others. These solve specific problems exceptionally well, and most of them aren't really separate workflows so much as enrichment sources. A good TIP integrates them natively, so when an analyst is investigating an indicator inside the platform, the relevant context — VirusTotal scan results, GreyNoise classification, Shodan exposure data, ANY.RUN behavioural traces — appears alongside it rather than requiring a context switch to another tab. The right question isn't whether these tools are part of your toolkit (they will be), but how deeply your TIP integrates with them. Shallow integration (“we have an API”) is friction. Deep integration — the tool's output piped into the analyst's workflow automatically, with results stored as structured enrichment on the indicator itself — is leverage.
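That distinction is easy to prototype for yourself before you ever see a demo. A minimal sketch of structured enrichment, using the public VirusTotal v3 and GreyNoise Community APIs (verify endpoints and rate limits against current vendor documentation; the API keys are placeholders):

```python
# Sketch: enrich an indicator from two point solutions and store the results
# as structured enrichment on the indicator itself, not in an analyst's tabs.
# Endpoints are the public VT v3 / GreyNoise Community APIs at time of writing;
# check current vendor docs. API keys are placeholders.
import requests

VT_KEY = "YOUR_VT_KEY"
GN_KEY = "YOUR_GREYNOISE_KEY"

def enrich_ip(ip: str) -> dict:
    indicator = {"type": "ipv4", "value": ip, "enrichment": {}}

    vt = requests.get(
        f"https://www.virustotal.com/api/v3/ip_addresses/{ip}",
        headers={"x-apikey": VT_KEY}, timeout=10,
    )
    if vt.ok:
        stats = vt.json()["data"]["attributes"]["last_analysis_stats"]
        indicator["enrichment"]["virustotal"] = stats  # e.g. {"malicious": 4, ...}

    gn = requests.get(
        f"https://api.greynoise.io/v3/community/{ip}",
        headers={"key": GN_KEY}, timeout=10,
    )
    if gn.ok:
        body = gn.json()
        indicator["enrichment"]["greynoise"] = {
            "noise": body.get("noise"),
            "classification": body.get("classification"),
        }
    return indicator

print(enrich_ip("198.51.100.23"))
```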

Step 3: Evaluate on workflow, not features

This is the most important section of this post. Feature matrices lie. Workflows don't.


Create three to five scenarios that represent your team's most common and most important tasks. Here are examples to start with — adapt them to your reality:

Scenario 1: Morning triage. “It's 9 AM. Overnight, three vendor advisories were published, a new vulnerability was disclosed, and a threat actor associated with our sector posted on a dark web forum. Walk me through how an analyst processes this in your platform.”

Scenario 2: Stakeholder RFI. “The CISO asks: ‘Are we exposed to the campaign reported by [vendor] yesterday?’ Show me how an analyst answers that question, from initial assessment to stakeholder response.”

Scenario 3: Strategic assessment. “We need a quarterly threat landscape assessment for the board. Show me how your platform supports producing that — not just the final report, but the analysis process.”

Scenario 4: Indicator operationalisation. “We've identified IOCs from a new campaign. Show me the path from raw indicators to detection rules deployed in our SIEM.” (The sketch after these scenarios shows what the end of that path can look like.)

Scenario 5: Collaborative investigation. “Two analysts are tracking the same threat actor. Show me how they share analytical progress, avoid duplication, and build on each other's work.”
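To make Scenario 4 concrete: the end of that path is a detection rule in a format your SIEM toolchain ingests. A minimal sketch that generates a Sigma rule from domain IOCs; the campaign name and domains are invented:

```python
# Sketch: turn raw domain IOCs into a Sigma rule ready for SIEM conversion.
# Campaign name and domains are invented examples.
import uuid
import yaml

def iocs_to_sigma(campaign: str, domains: list[str]) -> str:
    rule = {
        "title": f"DNS queries for {campaign} infrastructure",
        "id": str(uuid.uuid4()),
        "status": "experimental",
        "description": f"Domains observed in the {campaign} campaign.",
        "logsource": {"category": "dns_query"},
        "detection": {
            "selection": {"QueryName": domains},
            "condition": "selection",
        },
        "level": "high",
    }
    return yaml.safe_dump(rule, sort_keys=False)

print(iocs_to_sigma("ExampleCampaign", ["bad.example.com", "c2.example.net"]))
```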


Run these scenarios during every vendor demo. The scenarios that feel smooth and natural in the tool are genuine strengths. The scenarios where the vendor says “you could do that by...” followed by a complex workaround — those reveal where the platform doesn't fit your workflow.
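Scoring each scenario consistently helps when five demos start to blur together. One simple approach, entirely a suggestion, with invented vendor names:

```python
# Sketch: score each vendor on each scenario.
# 2 = smooth in-tool, 1 = works via a documented workaround,
# 0 = "you could do that by..." followed by a complex workaround.
scores = {
    "VendorA": {"triage": 2, "rfi": 1, "strategic": 0, "operationalise": 2, "collab": 1},
    "VendorB": {"triage": 1, "rfi": 2, "strategic": 2, "operationalise": 1, "collab": 2},
}

for vendor, s in sorted(scores.items(), key=lambda kv: -sum(kv[1].values())):
    weak = [k for k, v in s.items() if v == 0]
    line = f"{vendor}: {sum(s.values())}/10"
    if weak:
        line += f" (no real path for: {', '.join(weak)})"
    print(line)
```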

Step 4: Questions that get past the demo

A demo will tell you what a platform can do under ideal conditions. These questions surface what it's like to actually run on it day-to-day:

“Where does the intelligence come from, and what's bundled versus add-on?” Sources are most of what you're paying for in a commercial TIP. Some platforms include premium feeds in the base licence; others charge per source, per region, or per use case. Knowing the boundary up front prevents the renewal surprise where you discover the feed your detections rely on is now a separate line item.

“Can we bring our own sources, and how deeply do they integrate?” Most teams already pay for some feeds or run internal collection. The platform should treat your sources as first-class data — same enrichment, same correlation, same workflow surface — not as STIX uploads that lose context once they land. If “bring your own” means a manual import flow, that's not integration.
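A quick way to test that claim: push an object from your own collection with full context attached and see what survives on the other side. A minimal sketch using the open-source stix2 library; the source name and custom fields are invented examples of the context that shallow imports tend to drop:

```python
# Sketch: an indicator from your own collection, carrying the context
# ("who said it, how confident, under what handling") that a shallow
# import typically drops. Source name and custom x_ fields are invented.
from stix2 import Indicator

ind = Indicator(
    name="C2 beacon from internal honeypot",
    pattern="[ipv4-addr:value = '198.51.100.23']",
    pattern_type="stix",
    confidence=80,
    labels=["internal-collection"],
    custom_properties={
        "x_internal_source": "dmz-honeypot-03",
        "x_analyst_notes": "Seen beaconing every 300s since 2026-01-12",
    },
)
print(ind.serialize(pretty=True))
# The evaluation question: after importing this into the candidate TIP,
# can an analyst still see and query the confidence, labels, and x_ fields?
```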

“What does the data exit look like?” At end of contract, what can you export, in what format, and how complete is it? Indicators are usually portable. Analytical artefacts — assessments, comments, links between objects, custom fields, version history — often aren't. This matters little in year one and matters enormously in year four.
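If the vendor offers a STIX bundle export, a five-minute tally shows how complete it really is. A minimal sketch, assuming a standard STIX 2.x bundle JSON on disk; the filename is a placeholder:

```python
# Sketch: tally object types in an exported STIX bundle to see what survives.
# Filename is a placeholder; assumes a standard STIX 2.x bundle layout.
import json
from collections import Counter

with open("export_bundle.json") as f:
    bundle = json.load(f)

counts = Counter(obj["type"] for obj in bundle.get("objects", []))
print(counts)

# Indicators usually make it out. The analytical layer often doesn't:
for analytical in ("note", "opinion", "relationship", "report", "grouping"):
    if counts.get(analytical, 0) == 0:
        print(f"warning: no '{analytical}' objects in export")
```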

“What happens when our integration breaks at 2 AM?” Not the SLA on paper — the actual support experience. How quickly does a real engineer (rather than a chatbot or a tier-one ticket-mover) respond? Is there a meaningful escalation path? Ask a reference customer this question specifically, without the vendor in the room.

“How does pricing work at scale?” Per-user? Per-indicator? Per-source? Per-API-call? Volume-tier pricing can turn a reasonable initial cost into a surprising renewal conversation as your programme matures and your indicator volume grows.
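The renewal surprise is easy to model up front. A back-of-the-envelope sketch with invented tier boundaries and prices, just to show the shape of the curve; substitute the vendor's real numbers:

```python
# Sketch: project cost under per-indicator volume tiers as the programme grows.
# Tier ceilings and prices are invented; substitute the vendor's real ones.
TIERS = [(1_000_000, 30_000), (5_000_000, 75_000), (20_000_000, 160_000)]

def annual_cost(indicators: int) -> int:
    for ceiling, price in TIERS:
        if indicators <= ceiling:
            return price
    return TIERS[-1][1]  # treat the top tier as the cap for this sketch

for year, volume in enumerate([800_000, 2_400_000, 7_000_000], start=1):
    print(f"year {year}: ~{volume:,} indicators -> £{annual_cost(volume):,}")
```

The same volume growth that proves your programme is working is what moves you up the tiers.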

Step 5: The build-vs-buy decision

Some teams, especially those with strong engineering support, consider building their own platform. It rarely works well and often fails painfully. Here's the honest assessment.

Build when: You have dedicated engineering resources (not borrowed from the CTI team). Your requirements are genuinely unique — not “we're special” unique, but “no platform on the market handles our specific data model and workflow” unique. You can commit to treating the internal tool as a product with documentation, testing, and succession planning.

Buy when: Your team is small and analyst time is your scarcest resource. Your requirements are complex but not unique. You need to show value quickly. You don't have dedicated engineering support.

The hybrid approach that actually works: Buy a platform for core workflow. Build custom integrations and automations on top. Use open-source tools for specific tasks where they excel. This gives you the stability of a supported product with the flexibility of custom development where it matters most.
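In practice, “build on top” usually means small glue scripts against the platform's API. A sketch of the shape; every URL, endpoint, and field name below is hypothetical, standing in for whatever your chosen TIP and ticketing system actually expose:

```python
# Sketch: custom automation on top of a bought platform.
# Every URL, endpoint, and field name below is hypothetical; substitute
# your TIP's and ticketing system's real APIs.
import requests

TIP_API = "https://tip.example.org/api/v1"    # hypothetical TIP endpoint
TICKETS = "https://tickets.example.org/api"   # hypothetical ticketing endpoint

def escalate_high_confidence() -> None:
    # Pull indicators the platform has scored above our threshold...
    resp = requests.get(
        f"{TIP_API}/indicators",
        params={"min_confidence": 85},
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        timeout=10,
    )
    resp.raise_for_status()
    for ind in resp.json().get("indicators", []):
        # ...and raise a ticket so the SOC picks them up in their own queue.
        requests.post(f"{TICKETS}/issues", json={
            "title": f"New high-confidence indicator: {ind['value']}",
            "body": ind.get("summary", ""),
            "labels": ["cti", "auto-escalated"],
        }, timeout=10)

if __name__ == "__main__":
    escalate_high_confidence()
```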

The newer dimension in this conversation is agentic tooling. Open-source agentic frameworks — including Liberty91's CTI Skills — let teams encode their analytical tradecraft into reusable agents without building a full platform from scratch. The honest version of build-vs-buy in 2026 is less binary than it used to be: buy the platform for the data layer, build the agents that capture how your team actually thinks about intelligence. Agentic is where the discipline is heading, and the open-source layer means you don't have to wait for a vendor's roadmap to get there.
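To make “encode your tradecraft into reusable agents” less abstract: the shape is a library of small, named analytical routines that an orchestrator can invoke. The sketch below is a generic illustration of that shape only, not the CTI Skills API or any specific framework:

```python
# Sketch: the generic shape of "tradecraft as reusable skills".
# This is an illustration only; not the CTI Skills API or any real framework.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    description: str           # what the skill tells the orchestrator it can do
    run: Callable[[str], str]  # the encoded analytical routine

def assess_sector_relevance(report_text: str) -> str:
    # Stand-in for real tradecraft, e.g. checking a report against your PIRs.
    hits = [kw for kw in ("banking", "SWIFT", "payment") if kw in report_text]
    return f"relevant (matched: {', '.join(hits)})" if hits else "not relevant"

REGISTRY = {
    s.name: s for s in [
        Skill("sector-relevance", "Score a report against our sector PIRs",
              assess_sector_relevance),
    ]
}

print(REGISTRY["sector-relevance"].run("New campaign targeting banking APIs"))
```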

The idea that “tools won't make you a better analyst” is true but often misunderstood. The right tools, in the hands of a competent analyst, remove friction. The wrong tools — or no tools at all — create friction that degrades even excellent analysts over time. The goal isn't to find a perfect tool. It's to find the tool that creates the least friction for the work your team actually does.

Step 6: Plan for change

One final point that evaluation guides rarely mention: your needs will change. The platform that's perfect for your team of two today may not scale to a team of six. The workflow that works for tactical intelligence may not support the strategic assessments your CISO will inevitably ask for.

Evaluate flexibility alongside fit. Can you add new sources easily? Can you create custom workflows? Can you change your data model without a professional services engagement? The platform that grows with you is worth more than the platform that fits perfectly today but locks you in tomorrow.


Where Liberty91 fits in this landscape

This post lives on the Liberty91 blog, so it's worth being explicit about how the platform answers the questions in it.

Liberty91 is built on a specific premise: AI can automate roughly 90% of the work that fills an analyst's day — collection, normalisation, enrichment, initial filtering, and draft reporting — in real time, with output filtered for organisational relevance rather than firehose feeds that everyone gets the same version of. The result is faster, more actionable intelligence, achievable with fewer analysts than the traditional CTI model assumes.

The practical implication: you don't need a multi-year programme build and a team of hard-to-find specialists to have a functioning intelligence capability. Mid-tier organisations — the ones who historically couldn't justify a six-figure platform contract or staff a dedicated CTI team — can run real intelligence operations. And for organisations that already have analysts, AI handles the daily grind so they can spend their time on the complex, technical analysis that requires human judgment, rather than on collection and formatting.

The right way to evaluate Liberty91 is the same way you'd evaluate any other platform in this post: run the workflow scenarios from Step 3, ask the questions from Step 4, and calculate total cost of ownership including analyst time. The thesis stands or falls on whether it actually fits the work your team does.

A threat intelligence platform is an investment measured in years, not months. Choose accordingly — not based on today's demo, but on where your programme needs to be in three years.

This is Part 6 of the “CTI from the Trenches” series. ← Previous: From Intelligence to Detection — Operationalising MITRE ATT&CK | Next: PIRs in Practice — How to Make Your Intelligence Impossible to Ignore →

New to the series? Start from the beginning: What CTI Actually Is →

Ready to do more with less?

Request a demo or start your free trial today. Get instant access to AI-powered threat intelligence tailored to your organisation.