In the earlier article on early warning signs of politicisation, we looked at how language, records and decision processes can shift away from purpose. We saw how analysis can be shaped to fit preferred outcomes, how trade-offs drop out of the record, and how it becomes harder to see why a decision was made.
This article goes one level deeper and examines the incentive culture that sits underneath those patterns. Incentives are the informal rules about what is valued as practice. They sit in who gets promoted, whose work is praised, what shows up in board papers, and what is ignored. If those incentives reward smooth narratives and “on track” dashboards, people will optimise for appearances. If they reward grounded judgement and correction when things change, people will optimise for truth.
Clarity, dissent and reversibility have been covered in earlier pieces and are taken as given here. Building on that foundation, this article explains why incentives are particularly risky in intelligence work, how to recognise when incentives are misaligned, and how misalignment shows up in day-to-day practice. It then sets out what leaders can do to realign incentives with the organisation’s real interests, and what analysts can do to protect the quality of their own work even when local incentives are not well aligned. The aim is practical: to reduce the chance that an organisation slides back into politicisation because of how it rewards, tolerates or ignores certain behaviours, and to give both senior leads and newer analysts a shared way of talking about these risks.
Incentives are the formal and informal signals that tell people what behaviour is rewarded, tolerated or penalised. Politicisation is when analysis is shaped to fit a preferred political or organisational agenda rather than the best available assessment of the evidence. Bias is a systematic pattern of error in judgement, whether cognitive (such as confirmation bias) or institutional (such as an over-reliance on familiar sources). Bias operates at the level of thinking, politicisation at the level of purpose, and incentives at the level of behaviour. Misaligned incentives do not create all bias or all politicisation, but they amplify both by teaching people which kinds of errors are ‘safe’ and which lines of inquiry are risky.
Incentives are always present, whether they are written down or not. People notice whose work is quoted, which projects are called ‘flagship’, and what types of problems get leadership time. In intelligence work, this creates three specific risks.
Intelligence products almost always rely on incomplete and changing data and conditions. If the organisation rewards confident predictions and neat answers, analysts feel pressure to strip out visible uncertainty. Ranges become single-point estimates; caveats are pushed to the footnotes; scenarios that are politically awkward are dropped rather than written down. The written picture becomes more definite than the real situation.

When that happens, decision-makers start to act as if they are operating in a world of solid facts rather than probabilities. Surprise becomes more likely, contingency planning is neglected, and operations are designed on optimistic assumptions. In day-to-day intelligence work this shows up as over-confident targeting, brittle plans, and an inability to pivot when new information arrives, because the system has taught itself to hide the very uncertainty that should be shaping judgement.
Intelligence teams usually work near ministers, executives, or senior operational leaders who have clear preferences, political timelines, and resource constraints. If the incentive culture rewards being seen as helpful to those preferences, analysts quickly learn to bring forward work that supports the current direction and to soften, re-frame, or delay work that challenges it. Over time, this behaviour looks like politicisation, even if no explicit instruction was ever given.
The effect is that the proper relationship between intelligence and power is quietly reversed. Instead of informing policy, intelligence begins to confirm it. Warning signals are blurred, uncomfortable assessments are under-weighted, and blind spots harden around the priorities of the day. For practitioners, that means fewer truly independent assessments, more pressure to “fit” conclusions to expected outcomes, and a higher risk that operations and policy are built on wishful thinking rather than best available judgment.
The quality of reasoning behind an assessment is rarely obvious in a brief. A weak product can sound confident and polished; a strong product may sound cautious, conditional, and full of “on the one hand / on the other hand” language. If reward systems focus on what is easy to see—volume, speed, confident tone, positive status with seniors—then careful, well-qualified judgement is not recognised. Analysts adapt to what is visible, not to what is correct.
Over time, presentation starts to outrun thinking. Shallow but certain assessments travel fastest and shape decisions, while deeper, more nuanced work is sidelined. This skews day-to-day intelligence practice toward fast-turnaround talking points instead of rigorous analysis, weakens challenge and red-teaming, and increases the chance that leaders are confidently briefed into error. When the confident products later prove wrong, it also corrodes trust in the intelligence function itself.
Because incentives are partly cultural, misalignment usually shows up in patterns rather than in a single clear event. The following signs are practical checks. One on its own may not be serious. Several together should trigger concern.
Teams spend significant time on issues that are easy to count and low risk to discuss, while more sensitive or strategically important questions receive little sustained analysis. Dashboards are full of peripheral indicators, but there are few deep products on the most consequential topics.
When this happens, the incentive is to stay visibly busy on safe work rather than to take on the harder, more exposed questions that matter most.
Formal briefings consistently frame information in ways that support existing decisions or minimise friction with senior preferences. Risk is routinely described as “being managed”. Negative developments are surrounded by reassurances. Products that land well are those that make leaders more comfortable, not necessarily those that best reflect the evidence.
This pattern suggests that incentives reward alignment with the current narrative more than they reward accuracy.
When key indicators start to move in the wrong direction, more effort goes into explaining why the metric is misleading than into testing whether something important has changed. Definitions are updated, thresholds reset, or categories re-coded to avoid an apparent “failure”.
If reasonable questions such as “Is this still the right measure?” or “What would we do if this really is a genuine shift?” are treated as unwelcome, the system is rewarding compliance with existing metrics over genuine curiosity about reality.
Leaders and analysts sit at different points in the system, but they both have leverage. Leaders shape the environment; analysts shape the day-to-day practice. The goal is the same: make it easier for people to act in line with truth-seeking, and harder for politicisation to take hold.
For leaders, this means building regular questions into planning cycles: “Is this still the most consequential problem we could be working on?” and “What are we avoiding because it is harder or more exposed?”
Pay attention to the mix of work in your portfolio. When teams choose to stop or reshape work because it is low value, misleading, or likely to drive politicised behaviour, recognise that explicitly in performance discussions and forums.
For analysts, it means asking at the start of an assignment: “What decision will this actually inform?” and “What would success look like in practice?” If the link to a real decision is unclear, record your own short decision statement in your notes. This keeps your effort tied to purpose and gives you a basis for pushing back gently if the work drifts into low-value activity.
When both levels do this, the system starts to reward taking on the right problems, not just the most expedient problems.
Leaders can lower distortion pressure by changing how major forecasts and estimates are reviewed. After events unfold, run short, standard reviews that focus on process rather than whether the call was “right”: what was known at the time, whether the reasoning followed the evidence, and how quickly judgements were updated as new information arrived.
Make it clear that careers are built on the quality of reasoning and willingness to adjust, not on perfect prediction.
Analysts can support this by keeping a simple log of their judgement calls: what they initially thought, what changed their mind, and what they would have recommended under ideal conditions. This helps them explain their reasoning later and provides concrete material when process reviews occur. It also helps maintain personal calibration over time.
Together, these behaviours shift reputation away from “never being wrong” and toward “thinking clearly and updating honestly”.
Leaders have a role in how tasks are framed. When commissioning work, be explicit about whether you want neutral analysis, options, or a recommendation based on specified priorities. Avoid asking for “a case for X” without also asking for the strongest counterarguments. That framing sets the incentive tone.
Analysts can make the boundary visible in their own language. Use straightforward phrases such as “the evidence indicates…”, “on the evidence alone, the picture is…”, and “this reflects a judgement about priorities rather than the evidence”.
If you are asked to emphasise a preferred option, reflect that in the way you structure the product (“From the perspective of advancing priority X, option 1 is strongest; on a pure risk basis, option 2 is safer”). This keeps the reader aware of where evidence ends and value judgements begin.
When both leaders and analysts do this, it becomes harder for politicisation to hide inside technical phrasing.
Leaders can normalise simple probing questions that link analysis back to action and measurement: “What decision does this inform?”, “What evidence would change this assessment?”, and “How would we know if this were no longer true?”
Ask these questions in a neutral tone and as a matter of routine. Over time, staff learn that these checks are expected, not personal criticism.
Analysts can use the same question style upwards and sideways. When they see safe topics dominating or metrics being defended, they can ask: “If this is just noise (reporting that looks important because it is loud or frequent but does not change the analytical judgement), what evidence would show that?” or “If we are confident this is under control, what would a loss of control look like in the data?” These small questions draw incentives into the open. They force a link between status reporting and real behaviour, without accusing anyone of bad faith.
In intelligence work, incentives are risky because they shape which questions are asked, how uncertainty is presented, and how honest the system can be about bad news. Misaligned incentives can lead committed staff to optimise for the wrong objectives without realising it.
Leaders and analysts both have tools to counter this. Leaders can reward the choice of relevant, sometimes difficult problems, separate reputation from single outcomes, commission work in ways that keep analysis distinct from advocacy, and use simple questions to link reports back to real decisions. Analysts can anchor their work to explicit decisions, log their own judgement calls, make assumptions and value judgements visible, and use neutral questions to test whether the system’s incentives still match its stated purpose.
These are not dramatic interventions, but over time they change what feels safe, normal and rewarded. That is where politicisation either gains traction or loses it, and where intelligence work either moves toward performative theatre or stays anchored in reality.
AI tools were used to assist with structuring and editing for clarity. All views expressed are those of the author(s) and are offered to support open, respectful discussion. The Institute for Intelligence Professionalisation values independent and alternative perspectives, provided safety, privacy, and dignity are upheld.