A while back I sat in on a quarterly review where a CTI lead walked his leadership through a slick deck — dashboards, new threat feeds, a freshly procured TIP, the works. Near the end, one of the independent directors asked the only question that ever really matters: What decision did any of this help us make last quarter?
The room went quiet.
That silence isn’t rare. It’s the background hum of our industry. Most cyber intelligence programs are busy, well-staffed, and funded better than ever — and still can’t describe, in plain English, how their product shapes an executive choice. They confuse volume with value. Activity with capability. Instrumentation with insight.
Which is why Treadstone 71’s Intelligence Readiness Score is worth the eight minutes it takes.
It’s a fifteen-question diagnostic across five operational dimensions, scored against industry benchmarks, with a gap analysis that actually points at the things most programs won’t admit to themselves in a staff meeting. I want to walk through what those five dimensions are, why each one matters, and where organizations most often come up short when they look at them honestly.
The Problem With Most Self-Assessments
Before the dimensions, a word on the genre. Most self-assessments in this space are garbage. They ask whether you “have” a thing — a policy, a tool, a team — and reward you for checking yes. That’s why security programs with two shelfware platforms and a pile of Confluence pages score respectably on questionnaires while failing in contact with actual adversaries.
A good assessment scores posture, not inventory. It punishes the gap between what you own and what you operate. The T71 tool leans that way. Its own framing is that it scores operational reality, not aspiration — which is the right stance, even if you’ll wince at your results.
Three questions per dimension, five dimensions, a number out of 75 at the end, and a report that tells you where you sit relative to peers in your sector. Simple. The value isn't the number. The value is being made to answer each question without the buffer of marketing language.
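The arithmetic behind that number is simple, which is part of the point. Here is a minimal sketch of how it rolls up, assuming each question contributes up to five points; that scale is my inference from fifteen questions summing to 75, not something the tool documents.

```python
# Minimal sketch of the scoring arithmetic as described in the text:
# three questions per dimension, five dimensions, rolled up to a total out of 75.
# The 0-5 per-question scale is an assumption, not the tool's documented rubric.

DIMENSIONS = [
    "Intelligence Program Maturity",
    "Cognitive Warfare Resilience",
    "Insider Threat Posture",
    "OSINT Exposure",
    "Analytic Tradecraft",
]

def dimension_score(answers: list[int]) -> int:
    """Sum three question scores (each assumed 0-5) into a dimension score out of 15."""
    assert len(answers) == 3 and all(0 <= a <= 5 for a in answers)
    return sum(answers)

def total_score(per_dimension_answers: dict[str, list[int]]) -> int:
    """Roll the five dimension scores up into the overall score out of 75."""
    return sum(dimension_score(per_dimension_answers[d]) for d in DIMENSIONS)

# Example: a program strong on tooling but weak on tradecraft and influence detection.
example = {
    "Intelligence Program Maturity": [3, 2, 4],
    "Cognitive Warfare Resilience": [1, 1, 2],
    "Insider Threat Posture": [3, 3, 2],
    "OSINT Exposure": [2, 3, 2],
    "Analytic Tradecraft": [2, 2, 3],
}
print(total_score(example))  # 35 out of 75
```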
Dimension 1: Intelligence Program Maturity
This is the load-bearing beam. Governance, lifecycle, and leadership integration — in other words, whether there is actually a program here or just a function.
The honest test: can you show someone your Priority Intelligence Requirements (PIRs)? Are they signed off by the business or just drafted by the team that wrote them? When an analyst produces a finished intelligence product, is there a feedback loop from the consumer telling them whether it mattered? Does the board know what decisions got made on the basis of your assessments in the last twelve months?
Most shops fail one of these. I'd guess half fail three. Teams were hired to "do threat intel," built a pipeline that pulls feeds, started producing weekly bulletins, and never closed the loop with anyone above them. The program exists on an org chart but not in the decision architecture of the company. Five years in, the CTI team is still a very expensive IOC-enrichment service.
The maturity dimension forces you to look at that. If you can’t articulate your lifecycle — requirements, collection, processing, analysis, production, dissemination, feedback — then what you have is a feed farm, not a program.
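If "closing the loop" sounds abstract, here is roughly the minimum bookkeeping it implies, sketched as data. The field names are mine, not anything the tool prescribes; the point is that a PIR without a business sponsor, and a product with no recorded consumer feedback, should be trivially easy to find.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative only: the minimum record-keeping that separates a program from a feed farm.
# Every PIR names the business sponsor who signed it off; every product traces back to a
# PIR and records whether anyone downstream said it mattered.

@dataclass
class PIR:
    question: str            # e.g. "Which ransomware crews are targeting our sector's ERP stack?"
    business_sponsor: str    # who outside the CTI team signed this off
    approved: bool = False

@dataclass
class IntelProduct:
    title: str
    pir: PIR                                                      # the requirement this answers
    decisions_informed: list[str] = field(default_factory=list)   # decisions the consumer credits to it
    consumer_feedback: Optional[str] = None

def orphaned_products(products: list[IntelProduct]) -> list[IntelProduct]:
    """Products with no feedback and no decision they informed: the weekly
    bulletins nobody upstream ever closed the loop on."""
    return [p for p in products if not p.decisions_informed and p.consumer_feedback is None]
```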
Dimension 2: Cognitive Warfare Resilience
This one is newer, and it’s where most security leaders I talk to are flatly unprepared. They don’t have a playbook because until recently the problem wasn’t on their desk.
Cognitive warfare is the operational side of influence — hostile narrative, disinformation, synthetic media, reflexive control. It used to live in the domain of elections and geopolitics. It now lives inside corporate risk. Short-seller campaigns weaponize deepfakes. State-aligned networks seed narratives against pharmaceutical companies, defense primes, and energy operators. Activist groups coordinate across platforms with tradecraft borrowed from intelligence services. If your adversary can move a stock price, shape a regulatory outcome, or collapse a reputation faster than your comms team can draft a statement, you have a cognitive warfare exposure whether you’ve named it or not.
The resilience dimension asks whether you can detect hostile narrative campaigns early, attribute them with any rigor, and respond without making the situation worse. Most organizations fail on step one. They find out because a journalist calls. By then the narrative is ambient and the response options are all bad.
A mature program here looks like a hybrid between intelligence, comms, and legal — with someone who understands how influence operations actually work and isn’t going to confuse an inauthentic amplification network with a real customer backlash. That capability is rare. Scoring yourself on it will surface how rare.
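To make that distinction concrete: one crude early signal that separates inauthentic amplification from organic backlash is many distinct accounts pushing near-identical text inside a narrow window. A sketch of that heuristic follows, with invented thresholds. Real attribution takes far more than this, but even this much is more than most programs run.

```python
from difflib import SequenceMatcher

# Crude coordination heuristic: cluster posts by near-identical text, then flag clusters
# where many distinct accounts posted within a short window. Thresholds are illustrative.
SIMILARITY = 0.9        # how close two posts must be to count as the same narrative
WINDOW_SECONDS = 600    # ten minutes
MIN_ACCOUNTS = 20       # distinct accounts before we suspect amplification

def similar(a: str, b: str) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY

def flag_amplification(posts: list[dict]) -> list[list[dict]]:
    """posts: [{'account': str, 'text': str, 'ts': float}] -> clusters worth a human look."""
    clusters: list[list[dict]] = []
    for post in sorted(posts, key=lambda p: p["ts"]):
        for cluster in clusters:
            if similar(post["text"], cluster[0]["text"]):
                cluster.append(post)
                break
        else:
            clusters.append([post])
    suspicious = []
    for cluster in clusters:
        accounts = {p["account"] for p in cluster}
        span = cluster[-1]["ts"] - cluster[0]["ts"]
        if len(accounts) >= MIN_ACCOUNTS and span <= WINDOW_SECONDS:
            suspicious.append(cluster)
    return suspicious
```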
Dimension 3: Insider Threat Posture
Most organizations have an insider threat tool. Fewer have an insider threat program. Almost none have an insider threat capability.
The tool is DLP or UEBA. It watches data and behavior. The program has governance — policies, thresholds, legal review, HR partnership, an escalation path. The capability combines both with an actual counterintelligence lens: behavioral indicators, personnel security integration, the sociocultural context that separates a disgruntled employee from an actively cultivated one.
The Snowden-type case is the one everyone thinks about, but that’s not really the modal insider problem. The modal problem is the ordinary employee with financial pressure, a grievance, a contact through LinkedIn who seemed too friendly too fast, and an org that collected none of the behavioral signals that should have raised a flag. Or it’s the departing engineer who took the crown jewels because nobody thought to check exfiltration against her resignation timeline.
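The departing-engineer case is also the easiest one to catch mechanically, if the data is ever joined. A rough sketch of the check nobody ran, with hypothetical field names; in practice the hard part is getting DLP and HR data into the same place, not the query itself.

```python
from datetime import datetime, timedelta

# Hypothetical records: large outbound transfers from DLP, resignation dates from HR.
# The question is simply whether exfiltration-scale activity clusters in the weeks
# before a resignation -- a join most organizations never make.

LOOKBACK = timedelta(days=30)
VOLUME_THRESHOLD_MB = 500

def flag_pre_departure_exfil(transfers: list[dict],
                             resignations: dict[str, datetime]) -> list[dict]:
    """transfers: [{'user': str, 'ts': datetime, 'mb': float, 'destination': str}]
    resignations: {user: resignation_date}. Returns transfers worth a human look."""
    flagged = []
    for t in transfers:
        resigned = resignations.get(t["user"])
        if resigned and resigned - LOOKBACK <= t["ts"] <= resigned and t["mb"] >= VOLUME_THRESHOLD_MB:
            flagged.append(t)
    return flagged
```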
This dimension scores detection, behavioral analysis, and program governance. If your "program" is a Splunk alert and an HR contact in your phone, the score will reflect that. T71 publishes a full Insider Threat CMM that goes deeper, if this result prompts you to dig further.
Dimension 4: OSINT Exposure
The premise here is uncomfortable but essential: look at yourself through the adversary’s eyes.
Every organization has an OSINT footprint — the aggregate of what a competent collector can learn about you from open sources without ever touching a system. Executive calendars from paparazzi-style coverage or speaker circuits. Employee biographies on LinkedIn detailing which systems they own. Vendor disclosures buried in SEC filings. Photos of your data centers on Google Maps. GitHub repos with internal hostnames. Forum posts from engineers asking questions that reveal your tech stack. Breach-exposed credentials from third parties that still work on your perimeter because people reuse passwords.
An adversary with a week and no special tools can build a target package that would surprise most CISOs. The OSINT exposure dimension asks whether you’ve ever actually done that exercise against yourself — and whether you have a feedback loop that feeds findings back into awareness training, policy, technical controls, and executive protection.
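If your program has never produced that self-assessment, the first artifact doesn't need to be sophisticated to be useful. A sketch of what the inventory might look like as structured data; the categories mirror the examples above, and every entry here is invented.

```python
from dataclasses import dataclass

# A bare-bones footprint inventory: what a collector could learn, where it sits in the
# open, and who owns closing it. All entries below are made up for illustration.

@dataclass
class Exposure:
    category: str      # e.g. "personnel", "infrastructure", "credentials"
    finding: str
    source: str        # where it lives in the open
    severity: int      # 1 (nuisance) .. 5 (pretext-ready)
    owner: str         # who is accountable for remediation

footprint = [
    Exposure("personnel", "Admin team bios list the ticketing and VPN platforms they run",
             "LinkedIn", 4, "Security awareness"),
    Exposure("infrastructure", "Internal hostnames in a public GitHub repo's config files",
             "GitHub", 4, "AppSec"),
    Exposure("credentials", "Reused passwords from a third-party breach still valid on the VPN portal",
             "Breach corpus", 5, "IAM"),
]

def briefing_order(items: list[Exposure]) -> list[Exposure]:
    """What to put in front of the executive first."""
    return sorted(items, key=lambda e: e.severity, reverse=True)
```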
Nobody scores perfectly here, and perfection isn’t the goal. Awareness is. If your program has never produced an OSINT self-assessment, never briefed an executive on what the internet says about them personally, never mapped the open-source disclosures that adversaries would use to pretext your help desk — then what you have is a blind spot the size of everything that isn’t inside your firewall.
Dimension 5: Analytic Tradecraft
This is the dimension where traditional IC-trained analysts and self-taught cyber threat intel practitioners tend to score very differently.
Analytic tradecraft is a real discipline. It has standards — ICD 203 in the US intelligence community is the benchmark — and methods. Structured Analytic Techniques (ACH, key assumptions check, red team analysis, alternative futures, and the rest of the roster) exist because unstructured analysis is biased analysis. Analytic writing has conventions: bottom line up front, estimative language with defined confidence levels, clear sourcing, explicit distinction between what you know, what you assess, and what you assume.
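Defined confidence levels are the piece most teams skip. ICD 203 maps estimative terms to explicit probability bands; a small shared lookup like the one below, using the commonly cited ranges, is enough to make "likely" mean the same thing in every report your shop produces.

```python
# Estimative-language bands per the commonly cited ICD 203 table. A shared lookup keeps
# "likely" from meaning 55% in one report and 90% in the next.

ESTIMATIVE_BANDS = [
    (0.01, 0.05, "almost no chance"),
    (0.05, 0.20, "very unlikely"),
    (0.20, 0.45, "unlikely"),
    (0.45, 0.55, "roughly even chance"),
    (0.55, 0.80, "likely"),
    (0.80, 0.95, "very likely"),
    (0.95, 0.99, "almost certain"),
]

def estimative_term(probability: float) -> str:
    """Translate an analyst's probability judgment into the standard estimative phrase."""
    for low, high, term in ESTIMATIVE_BANDS:
        if low <= probability <= high:
            return term
    raise ValueError("Outside the ICD 203 bands; analytic judgments are not 0% or 100% certainties.")

print(estimative_term(0.7))   # likely
print(estimative_term(0.93))  # very likely
```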
Most CTI reports I’ve read in the wild don’t do any of this. They’re chronologies of IOCs with a summary paragraph. They hedge everything or hedge nothing. They use “may” and “could” interchangeably with “will.” They attribute with certainty on thin sourcing and then fail to attribute when the sourcing is strong. They reach the executive summary and the executive has no idea what the analyst actually thinks.
Scoring this dimension forces you to look at your production. Is bias audited? Are dissenting views documented? Do your analysts know what a key assumptions check is, and have they done one on their last major assessment? Are reports written for the reader or for the writer?
If your tradecraft is weak, your other four dimensions are working against a ceiling you didn’t know was there.
What the Gap Analysis Actually Does
The scoring is per-dimension out of 15, rolled up to 75. That matters less than what happens in the report: the tool benchmarks you against your sector. A bank’s CTI program should not look like a defense contractor’s, which should not look like a pharma’s or a regional utility’s. The threat environments differ, the regulatory context differs, the adversary set differs.
Industry benchmarking gives you the comparison that matters. You’re not measuring against a theoretical ideal. You’re measuring against the organizations that have the same adversaries you do, working with similar constraints. A score of 45 in defense might be underwhelming; the same score in a mid-sized SaaS firm might put you in the top decile for your peer group.
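The math behind that comparison is plain percentile ranking; the value is in having a genuine peer distribution to rank against, which the tool supplies and this sketch fakes with invented scores.

```python
from bisect import bisect_right

# Illustrative peer distributions (invented scores out of 75). The point: the same raw
# score lands very differently depending on whose adversaries you share.
PEER_SCORES = {
    "defense": [38, 44, 47, 51, 53, 56, 58, 61, 64, 68],
    "mid_saas": [18, 22, 25, 28, 31, 33, 36, 39, 42, 45],
}

def percentile(score: int, sector: str) -> float:
    """Fraction of peer programs in the sector scoring at or below this score."""
    peers = sorted(PEER_SCORES[sector])
    return bisect_right(peers, score) / len(peers)

# In this made-up peer set, a 45 sits in the bottom fifth of defense programs
# and is the top score among mid-sized SaaS peers.
print(percentile(45, "defense"), percentile(45, "mid_saas"))  # 0.2 1.0
```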
The prioritized recommendations in the output are the other piece worth the eight minutes. They translate the per-dimension scores into sequenced actions. The honest ones will look uncomfortable — “formalize PIRs” and “establish a counter-influence playbook” aren’t things teams put off because they’re hard; they put them off because they require leadership conversations nobody wants to have. Seeing them on a scored report gives you the cover to raise them.
How to Use the Results
A few practical notes, because the worst possible outcome is that you take the assessment, nod, and do nothing.
Run it yourself first. Don’t delegate the initial pass to the team you’re assessing — they have an incentive to score generously, even subconsciously, and the questions are calibrated to reward honesty, not diplomacy.
Run it again with your team. Compare. The delta between how leadership sees the program and how practitioners see it is usually the most interesting finding in the room.
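Computing that delta is trivial once both passes exist; the conversation about the rows where it is largest is the hard part. A tiny sketch with invented numbers:

```python
# Two passes over the same five dimensions (scores out of 15). The numbers are invented;
# the pattern -- leadership scoring the program higher than the people running it -- is not.
leadership    = {"Maturity": 11, "Cognitive": 8, "Insider": 10, "OSINT": 9, "Tradecraft": 12}
practitioners = {"Maturity": 7,  "Cognitive": 4, "Insider": 9,  "OSINT": 5, "Tradecraft": 10}

deltas = {d: leadership[d] - practitioners[d] for d in leadership}
for dim, gap in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{dim:<11} leadership {leadership[dim]:>2}  practitioners {practitioners[dim]:>2}  delta {gap:+d}")
```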
Treat the lowest dimension as the first priority, not the highest. Intelligence programs fail at the weakest joint, not the strongest one. A world-class tradecraft capability is wasted on an organization that can’t articulate its PIRs or detect an influence campaign landing on its own brand.
And if the exercise surfaces something systemic — a missing capability, a governance gap, a dimension the team has never been asked to address — use the report as the conversation starter with the executive who can actually fix it. Scored diagnostics carry weight in boardrooms that sternly-worded memos don’t.
The tool lives at treadstone71.com/intelligence-readiness-scoring. No account required to take it. Fifteen questions, five dimensions, a benchmarked score with a gap analysis at the end.
Eight minutes. If the results sting a little, that’s the point. Better to find out in a diagnostic than in contact with an adversary who already knew.
