PIRs in Practice: How to Make Your Intelligence Impossible to Ignore
This is a how-to post. If you want the philosophy behind intelligence dissemination, the rest of the CTI from the Trenches series covers it. If you want a practical guide to building PIRs, writing better reports, and managing stakeholders — this is the one.
PIRs are the most important and least practised concept in CTI, and most of the available guidance is either too theoretical or too basic. FIRST's curriculum covers them well in principle — it's one of the strongest publicly available references. But translating theory into practice is where most teams get stuck.
Part 1: Building PIRs that work
Step 1: Identify your stakeholders
Before you write a single PIR, map your stakeholder landscape. For each stakeholder, document:
- Who they are (name, role, team)
- What decisions they make that intelligence could inform
- How they currently receive intelligence (report, email, briefing, nothing)
- How they'd prefer to receive intelligence (ask them — the answer is often different from what you assume)
A typical CTI stakeholder map includes: SOC lead, IR lead, vulnerability management lead, CISO, CTO, compliance/risk function, and possibly business unit security leads in larger organisations.

Step 2: Conduct PIR development interviews
Schedule thirty-minute sessions with each key stakeholder. Ask these questions:
- “What keeps you up at night regarding cyber threats?” — Open-ended, gets their priorities on the table.
- “What decisions do you make regularly that intelligence could improve?” — Connects intelligence to action.
- “Can you give me an example of a time intelligence was useful to you?” — Reveals what “useful” means to them.
- “Can you give me an example of intelligence you received but couldn't use?” — Reveals the delivery failures.
- “If you could ask the CTI team one question and always get an answer, what would it be?” — This is often your best PIR.
Record the answers. Look for patterns. Where multiple stakeholders express similar concerns, you have a strong PIR candidate.
A useful synthesis technique — borrowed from product management — is to translate the interview output into a user story. The format forces specificity on every dimension that matters for a workable PIR:
As a {role}, I need {intelligence requirement} in {format}, delivered by {delivery method} at {interval}, to do {stakeholder task}. Without it, {negative consequence} would happen.
A concrete example: “As the IR lead, I need adversary TTP profiles in a one-page briefing, delivered via Slack within thirty minutes of activation, every time we open a P1 incident, so I can assign the right responders and predict the likely next move. Without it, we lose the first hour to manual triage and miss the most plausible lateral movement path.”
The user story does more work than it appears to. It captures the four components of a well-formed PIR (covered in Step 3) in a single sentence — the question, the decision-maker, the cadence, and the format implicit in the delivery method — and it surfaces the failure mode, which makes the value of the PIR concrete and gives you something measurable to come back to in feedback conversations later.
If you can't fill in every blank in the template, the requirement isn't ready yet. You've got a topic of interest, not a PIR.
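If the template is tracked in a structured form, the "every blank filled" check falls out for free. A minimal Python sketch; the field names are illustrative, not part of any standard:

```python
from dataclasses import dataclass, fields

@dataclass
class PIRUserStory:
    """One candidate PIR expressed in the user-story template. Field names are illustrative."""
    role: str                  # "As a {role}..."
    requirement: str           # "...I need {intelligence requirement}..."
    fmt: str                   # "...in {format}..."
    delivery_method: str       # "...delivered by {delivery method}..."
    interval: str              # "...at {interval}..."
    stakeholder_task: str      # "...to do {stakeholder task}."
    negative_consequence: str  # "Without it, {negative consequence} would happen."

    def missing_fields(self) -> list[str]:
        """Return the blanks still unfilled; any hit means topic of interest, not a PIR."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

draft = PIRUserStory(
    role="IR lead",
    requirement="adversary TTP profiles",
    fmt="one-page briefing",
    delivery_method="Slack",
    interval="",  # cadence not yet agreed with the stakeholder
    stakeholder_task="assign responders and predict the likely next move",
    negative_consequence="first hour lost to manual triage",
)
print(draft.missing_fields())  # → ['interval']
```

The point is not the code itself but the discipline: a draft with a non-empty `missing_fields()` result goes back to the stakeholder conversation, not into the PIR set.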
Step 3: Draft and validate PIRs
A well-formed PIR has four components:
The question: specific, answerable, decision-relevant. “What ransomware groups are actively targeting the healthcare sector in Europe using known vulnerabilities in our technology stack?” is a good PIR. “What are the threats to our organisation?” is not.
The decision-maker: who will use this intelligence and for what decision? “This PIR supports the CISO's quarterly risk assessment and budget allocation process.”
The feasibility check: can the team actually answer this PIR, and what would it take? An honest read across four dimensions:
- Sources — where will the data come from? “Dark web forum monitoring, vendor reporting on ransomware campaigns, ISAC advisories, and internal vulnerability scan data.”
- Skills — do the analysts have what's needed to interpret what comes back? Russophone forum monitoring, malware family triage, and statistical trend analysis on victim disclosures are all easy to require and hard to staff. The PIR is only as good as the analytical capability sitting behind it.
- Tools — can you actually get to the data and wrangle it? Dark-web access, the right TIP integrations, a malware sandbox, paid forum subscriptions, language tooling. A PIR that depends on a feed you don't subscribe to is a PIR that quietly never gets answered.
- Effort — is the ongoing time and energy investment realistic given your team's other commitments? Maintaining a watchlist of twenty-plus ransomware groups across multiple sources is a real commitment, not a side task. If the honest answer is “we don't have the capacity,” the PIR needs to be scoped down, deferred, or backed with new investment — explicitly, not silently.
A PIR you can't actually answer well is worse than no PIR at all. It generates output that looks like intelligence but isn't, and it erodes stakeholder trust the first time someone notices.
The review cadence: when is this PIR re-evaluated, and who signs it off? “Reviewed quarterly, or ad hoc if a significant ransomware incident affects the healthcare sector. Approved each cycle by the CISO.”
The sign-off matters as much as the schedule. PIRs come from individual stakeholders, but at every review cycle the CTI manager needs to step back and validate the whole PIR set with executive leadership — typically the CISO, or whoever owns security risk for the organisation. The question isn't only “are these still the right requirements?” but “are you, the person accountable for security risk, happy with the time and resources we're committing to these specific PIRs — including the ones that came from other teams?”
This catches a class of disconnect that's easy to miss otherwise. If the CISO is strategically worried about ransomware against the wider sector but the SOC's PIRs are entirely about endpoint detection tuning, the CTI team can be doing perfectly competent work that misses what leadership actually cares about most. The cycle review is where that gap surfaces — provided someone is asking.
A good review produces two outputs: a renewed executive mandate for what the team will do, and — at least as importantly — an explicit list of what the team will not do. “We are not tracking insider threat. We are not running brand monitoring. We are not handling RFIs from Marketing.” Naming the out-of-scope items directly, with the CISO in the room, gives the team cover to redirect those requests when they inevitably arrive mid-cycle, and it forces leadership to acknowledge the trade-offs rather than expecting everything from a finite team. Without this step, PIRs drift toward whatever's loudest. With it, you have a defended set of priorities and a refresh cadence you can point to.
Draft five to eight PIRs maximum. More than eight and you've lost focus. Share the drafts with stakeholders for validation. The question “Did we capture what you need?” is more productive than “Did we write this correctly?”
Step 4: Operationalise PIRs
This is where most PIR programmes die — they're documented once and then forgotten. Operationalising PIRs means:
- Every collection task maps to at least one PIR. If you're monitoring a source or ingesting a feed that doesn't serve any PIR, question whether you should be.
- Every analytical product addresses at least one PIR. If you're writing a report that doesn't answer any PIR, question whether you should be. (This doesn't mean you never write ad-hoc analysis — it means ad-hoc work is recognised as such, not confused with your core mission.)
- PIR status is tracked and reported. Stakeholders should receive regular updates: “Here's what we learned against PIR-3 this month.” This closes the feedback loop and demonstrates value.
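The mapping discipline in the three bullets above can be made mechanical: hold a source-to-PIR and product-to-PIR mapping, and flag anything orphaned. A rough sketch, with hypothetical feed and PIR names:

```python
# Current PIR set (identifiers are illustrative).
pirs = {"PIR-1", "PIR-2", "PIR-3"}

# Every collection source, mapped to the PIRs it serves.
collection_sources = {
    "ransomware-vendor-feed": {"PIR-1"},
    "isac-advisories": {"PIR-1", "PIR-3"},
    "paste-site-monitoring": set(),  # serves no PIR: candidate for retirement
}

# This month's analytical products, each tagged with the PIRs it addressed.
products_this_month = [
    {"title": "Healthcare ransomware update", "pirs": {"PIR-1"}},
    {"title": "Ad-hoc: exec travel brief", "pirs": set()},  # recognised ad-hoc work
]

# Sources that serve no current PIR: question whether you should keep them.
orphan_sources = [name for name, served in collection_sources.items() if not served & pirs]

# Products-per-PIR this month: zeros feed straight into the stakeholder status update.
coverage = {pir: sum(1 for prod in products_this_month if pir in prod["pirs"])
            for pir in sorted(pirs)}

print(orphan_sources)  # → ['paste-site-monitoring']
print(coverage)        # → {'PIR-1': 1, 'PIR-2': 0, 'PIR-3': 0}
```

A spreadsheet does the same job; what matters is that orphan sources and zero-coverage PIRs surface every cycle instead of being noticed by accident.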
Part 2: Writing intelligence that gets read
The inverted pyramid
Journalism figured this out a century ago: put the most important information first. Most CTI reports do the opposite — they build up from background through analysis to conclusions. By the time the stakeholder reaches the conclusion, they've already stopped reading.
Structure your reports like this:
- Bottom line up front (BLUF): One paragraph. What's the assessment? Why is this relevant to us (the “so what”)? What should the reader do?
- Key judgments: Three to five bullet points. The main findings with confidence levels.
- Analysis: The reasoning behind the judgments. This is where you show your work.
- Background / context: For readers who need it. Many won't reach this section, and that's fine.
- Appendix: IOCs, technical details, full source list. For the SOC and IR consumers.
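If reports are assembled from sections programmatically (or just drafted against a fixed template), the inverted-pyramid order can be enforced rather than remembered. A small sketch; the section names and the rule that BLUF and key judgments are mandatory are assumptions for illustration:

```python
# Section order follows the inverted pyramid described above.
REPORT_ORDER = ["BLUF", "Key judgments", "Analysis", "Background", "Appendix"]

def render_report(sections: dict[str, str]) -> str:
    """Assemble a markdown report in BLUF-first order, whatever order sections were drafted in."""
    missing = [s for s in ("BLUF", "Key judgments") if s not in sections]
    if missing:
        raise ValueError(f"report not publishable without: {missing}")
    parts = [f"## {name}\n\n{sections[name]}" for name in REPORT_ORDER if name in sections]
    return "\n\n".join(parts)

draft = {
    "Analysis": "Reasoning behind the judgments goes here.",
    "BLUF": "Group X is likely targeting our sector; patch CVE-XXXX now.",
    "Key judgments": "- Judgment one (likely)\n- Judgment two (roughly even chance)",
}
report = render_report(draft)  # BLUF renders first regardless of dict order
```

The hard-fail on a missing BLUF is the useful part: a report that can't state its bottom line in one paragraph isn't ready to leave the team.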

Confidence language
Adopt a consistent confidence language framework and use it every time you make an assessment. Here's a proven framework, adapted from intelligence community standards:
- Almost certainly (90%+ probability): Strong evidence from multiple reliable sources. Limited alternative explanations.
- Likely / probably (60–90%): Good evidence, but some gaps. Alternative explanations possible but less supported.
- Roughly even chance (40–60%): Evidence supports multiple interpretations. Genuine uncertainty.
- Unlikely (10–40%): Limited evidence for this assessment. Alternative explanations are more supported.
- Remote chance (<10%): Minimal evidence. Included because the impact would be significant if it occurred.
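Encoding the bands once means every report uses the same words for the same probability. A minimal sketch; treating each band as inclusive of its lower bound is a convention choice, not part of the framework:

```python
def confidence_term(probability: float) -> str:
    """Map an assessed probability (0-1) to the confidence language above.
    Each band is inclusive of its lower bound (an assumption, pick one and stick to it)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    bands = [
        (0.90, "almost certainly"),
        (0.60, "likely / probably"),
        (0.40, "roughly even chance"),
        (0.10, "unlikely"),
    ]
    for lower_bound, term in bands:
        if probability >= lower_bound:
            return term
    return "remote chance"

print(confidence_term(0.75))  # → likely / probably
```

A lookup like this belongs in the team's writing guide as much as in any tooling: the value is that “likely” means 60–90% in every product, from every analyst.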
Stating confidence isn't hedging. It's precision. A stakeholder who knows your assessment is “likely” rather than “almost certain” makes a different risk decision, and they should.
Format for different audiences
As covered throughout this series, different audiences need different products. Here's a practical template set:
Flash alert (SOC/IR): One page maximum. What happened, what indicators to look for, what action to take. No background. No analysis. Get to the point. Include machine-readable IOCs.
Intelligence briefing (CISO/security leadership): One to two pages. Assessment, key judgments, implications for our organisation, recommended actions. Confidence levels on every judgment. Strategic context where relevant.
Threat landscape assessment (executive/board): One page or five slides. Risk trends in business language. No unexplained acronyms. “What's changing, what it means for us, what we're doing about it.” Include a clear risk rating that maps to the organisation's risk framework.
Detailed analytical report (CTI team, interested analysts): Full analysis with methodology, sources, confidence assessment, alternative hypotheses. This is where rigour lives. It's the document of record, not the dissemination product.
Part 3: Stakeholder management as tradecraft
The regular cadence
Establish regular touchpoints with key stakeholders. Not reports — conversations. A fifteen-minute fortnightly check-in with your CISO is worth more than a hundred emailed reports. Use these conversations to:
- Share emerging intelligence that isn't report-ready yet
- Gather informal feedback on recent products
- Identify shifting priorities before they become formal requests
- Build the relationship that makes your written products actually get read
The feedback loop

After every significant intelligence product, ask the primary stakeholder: “Was this useful? What would have made it more useful?” Track the answers. Patterns in feedback reveal systemic issues that individual conversations miss.
Common feedback patterns observed across teams:
- “Too long” — Your reports need to be shorter. Always. Without exception.
- “Too late” — Your dissemination process is too slow. Invest in faster delivery, even if it means less polished output.
- “Too technical” — You're writing for analysts, not for the actual audience. Adjust the language, not the content.
- “Not relevant to us” — Your PIRs are misaligned. Go back to Step 2 and re-interview.
- Silence — The most concerning feedback. It usually means your intelligence isn't reaching the right people, or they've given up on it. Investigate.
Intelligence sharing as a force multiplier
Finally, a practical note on sharing intelligence externally. Participating in your sector's ISAC, contributing to peer-to-peer intelligence sharing, and engaging with the broader CTI community is not a nice-to-have. It's a capability multiplier.
Practical steps:
- Classify your intelligence using TLP from the start. Don't decide what's shareable after the fact — build sharing classification into your production workflow.
- Identify your sharing communities. ISAC, sector CERT, bilateral relationships with peer organisations, vendor communities.
- Assign a sharing cadence. “We contribute intelligence to our ISAC at least twice monthly” is achievable. “We share when we have something” means you'll never have time.
- Automate where possible. If your TIP can push structured intelligence to sharing platforms as part of the normal workflow, sharing costs almost nothing in analyst time. If it's a manual process, it'll be deprioritised every time.
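Classifying with TLP at production time (the first step above) is what makes the automation step possible: sharing becomes a filter, not a decision. A sketch using the TLP 2.0 labels; the policy of auto-sharing only CLEAR and GREEN while holding AMBER and RED for manual review is an assumption, not part of the TLP standard:

```python
# Policy assumption for this sketch: CLEAR/GREEN auto-share, AMBER/RED held for review.
AUTO_SHAREABLE = {"TLP:CLEAR", "TLP:GREEN"}

def partition_for_sharing(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split produced intelligence into auto-shareable and held-back,
    based on the TLP label set when the item was produced."""
    auto, held = [], []
    for item in items:
        (auto if item.get("tlp") in AUTO_SHAREABLE else held).append(item)
    return auto, held

produced = [
    {"title": "Ransomware IOC set", "tlp": "TLP:GREEN"},
    {"title": "Incident-specific findings", "tlp": "TLP:AMBER"},
    {"title": "Public advisory summary", "tlp": "TLP:CLEAR"},
]
auto, held = partition_for_sharing(produced)
print([i["title"] for i in auto])  # → ['Ransomware IOC set', 'Public advisory summary']
```

Anything without a TLP label falls into the held pile by default, which is the safe failure mode: nothing leaves the organisation because a classification step was skipped.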
Liberty91 was built with this in mind — intelligence sharing shouldn't be additional work for the analyst. When the platform structures intelligence for internal consumption, it simultaneously prepares it for external sharing in standard formats. The analyst's job is the analysis and the judgment. The distribution should be automated.
This is Part 7 of the “CTI from the Trenches” series. ← Previous: How to Evaluate a TIP — A Practitioner's Checklist | Next: The Future of CTI — AI, Automation, and What Analysts Should Actually Learn → (coming soon)
New to the series? Start from the beginning: What CTI Actually Is →


