Portfolio Prioritisation: Moving Beyond RAG Status.

Author: Philipp Eiselt
Topics: Portfolio Management, Risk Scoring
Published: December 2025
Read time: 10 min

Red-Amber-Green status is the most widely used project reporting tool in enterprise IT. It is also one of the least useful for portfolio prioritisation decisions. The problem is not the format; it is what the format lacks: a structured basis for comparing projects against each other, and for ranking their claims on constrained resources.

Why RAG is not enough.

RAG status is self-reported. In a portfolio of any meaningful size, a significant proportion of amber and red projects are reported green by project managers who are managing upward rather than reporting accurately. This is not malice; it is a rational response to incentive structures that punish projects for being visible when they are struggling. The result is a portfolio status view that systematically underrepresents risk.

Even when the colours are accurate, they do not support the decision that leadership actually needs to make in a prioritisation review: which projects should receive additional resource, which should be de-prioritised, and which should be stopped. A portfolio with fifteen green projects, eight amber projects, and three red projects tells you nothing about how those twenty-six projects rank against each other in terms of strategic value, deliverability, and consequence of failure.

The question prioritisation needs to answer is: if we can only fully resource twenty of these twenty-six projects this quarter, which twenty? RAG status cannot answer that. A scoring model can.

The four scoring dimensions.

The model we developed and used at portfolio level scored each project against four dimensions. Each dimension was scored 1–5 by a defined assessor, not the project manager, based on documented criteria.

Strategic alignment: How directly does this project deliver against a defined organisational priority? Score 5 if it is a direct mandate from the executive strategy. Score 1 if it is an internal improvement with no clear line to strategic objectives. This dimension forces a conversation about whether projects in the portfolio actually map to anything the organisation has decided matters.

Delivery risk: Based on the project's current trajectory, dependencies, team capacity, and technical complexity, how likely is it to deliver on its commitments? This is where the PMO's independent assessment matters most. A project manager will score their own project as low risk. The PMO, with visibility across the portfolio and the ability to compare against historical data, will often see a different picture.

Consequence of delay or failure: If this project slips six months or fails to deliver, what is the downstream impact? This captures regulatory exposure, dependencies that other projects are carrying, revenue implications, and operational risk. A project with moderate strategic alignment but catastrophic failure consequences ranks higher than a high-alignment project where delay is inconvenient but survivable.

Benefit realisation timeline: How quickly after delivery does the benefit materialise? A project that delivers measurable value within ninety days of go-live is worth more, from a cashflow and organisational momentum perspective, than one where the benefit case is contingent on multiple subsequent changes over two years. This dimension often reveals that projects with compelling business cases are actually lower priority in a constrained environment because the benefit is too far away to justify the resource consumption now.
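Taken together, the four dimensions reduce to a simple mechanic: score each project, weight and sum the scores, rank the portfolio, and cut at capacity. The sketch below is a minimal Python illustration of that mechanic, not the model as deployed. The equal weights, the scoring direction for delivery risk, and the names are assumptions on my part here, since the article above specifies the dimensions and the 1–5 scale but not how the scores were combined. Note that the rationale travels with each score, which is what makes the governance conversation in the next section possible.

from dataclasses import dataclass

# Illustrative, equal weights. The article specifies the four dimensions and
# the 1-5 scale but not how scores were combined, so a plain weighted sum is
# assumed. Every dimension is oriented so that a higher score means a
# stronger claim on resources (e.g. delivery risk is scored 5 when the
# project is most likely to deliver on its commitments).
WEIGHTS = {
    "strategic_alignment": 1.0,
    "delivery_risk": 1.0,
    "failure_consequence": 1.0,
    "benefit_timeline": 1.0,
}

@dataclass
class Score:
    value: int      # 1-5, assigned by a defined assessor, not the project manager
    rationale: str  # the documented basis a sponsor must challenge

@dataclass
class Project:
    name: str
    scores: dict[str, Score]  # keyed by the dimension names in WEIGHTS

    def total(self) -> float:
        return sum(WEIGHTS[dim] * score.value for dim, score in self.scores.items())

def prioritise(projects: list[Project], capacity: int) -> tuple[list[Project], list[Project]]:
    """Rank by total score; fully resource the top `capacity`, defer the rest."""
    ranked = sorted(projects, key=lambda p: p.total(), reverse=True)
    return ranked[:capacity], ranked[capacity:]

# Hypothetical usage for the portfolio described in this article:
#   funded, deferred = prioritise(portfolio, capacity=20)

Equal weights are only a starting point. In practice the weighting itself belongs to the steering committee, and for the same reason the scoring criteria are documented: it moves the argument from the ranking to the assumptions behind it.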

How it held up under governance scrutiny.

When we first presented the model to the portfolio steering committee, the reaction was sceptical, particularly from project sponsors who felt their projects were being ranked unfairly. The key to surviving that scrutiny was the documentation behind the scores. Every score had a rationale attached. Project sponsors could challenge a score, but they had to challenge the rationale, not just assert that their project deserved a higher number. That changed the conversation from advocacy to evidence.

Over three cycles, the model became the default framework for prioritisation decisions. Project sponsors started engaging with the scoring criteria during planning rather than arguing about results afterwards. The portfolio mix shifted. Projects that had been carried forward on political momentum rather than strategic merit were stopped or deferred, freeing capacity for higher-priority work. Budget variance improved because resources were no longer spread across projects that had no realistic path to delivering value within the planning period.

The political dimension.

It would be dishonest not to name this directly: prioritisation is political. Any model that produces a ranked list is also producing a list of winners and losers, and the losers will push back. The model does not remove that dynamic. What it does is change the nature of the pushback, from "my project deserves more priority" to "the criteria you are using do not capture something important about this project's value." The second argument is more productive, because it can be evaluated on its merits and, if valid, used to improve the model.

The other political reality is that leadership sometimes overrides the model. This is acceptable; a scoring model is a decision support tool, not a decision-making machine. What matters is that overrides are explicit, documented, and carry an owner. When a project is prioritised above its model ranking because of a leadership decision, that decision should be visible in the record, with a named sponsor who is accountable for the outcome. Invisible overrides are where prioritisation models lose their credibility and eventually stop being used.
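As a hypothetical sketch of what "visible in the record" can mean in practice, an override can be captured as its own entry alongside the model output, leaving the model's ranking intact and auditable. None of these field names come from the model described above; they are one plausible shape for such a record.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Override:
    """A leadership decision placing a project above its model ranking.

    Field names are illustrative. The point is that the override is a
    first-class, attributable record, not a quiet adjustment to a score.
    """
    project: str
    model_rank: int    # where the scoring model placed the project
    decided_rank: int  # where leadership placed it
    rationale: str     # why the model's ranking was set aside
    sponsor: str       # named owner accountable for the outcome
    decided_on: date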

Philipp Eiselt

Independent consultant in IT Portfolio Management, PMO & Governance, and Digital Transformation. Based in APAC, working globally.
