Red. Amber. Green.
The system that every project manager in every manufacturing organisation uses to tell leadership how their programme is going. The system that gives the CEO a one-glance dashboard. The system that, in my experience, is one of the most reliable sources of false comfort in the entire history of project management.
I want to be precise about this. I am not saying that the people who set RAG statuses are dishonest. Most are not. What I am saying is that RAG status, as a measurement system, is structurally designed to mislead.
The Problem of Self-Declaration
RAG status is almost always self-declared. The person whose programme is being assessed is the person who chooses the colour.
Think about the incentive structure this creates. If I am a programme manager and my programme is amber — meaning it is at risk — I know what happens when I put amber on the dashboard. I get questions. I get scrutiny. I get pulled into extra meetings. I get asked to explain myself to people who have less context than I do and who will form opinions based on a thirty-second glance at a slide.
So I ask myself: is this really amber? Or is it still manageable? I have a plan. The supplier has committed to a date. The risk is real but I am on top of it. Maybe green with a note is more accurate. Maybe I will reassess next week.
And so the programme stays green.
Until it is undeniably red.
This is why RAG, as a measurement system, is structurally designed to mislead. Not because PMs lie, but because the incentive to stay green is overwhelming. The colour on the dashboard is the colour that minimises scrutiny, not the colour that tells the truth.
I have seen programmes go from green to red in a single week — skipping amber entirely. Not because the situation deteriorated suddenly. Because amber was never acknowledged. The signal was there for weeks. The colour was never changed. And when the delivery failed, everyone was 'surprised.'
They were not surprised. They had been left exposed by a measurement system that gave them false comfort.
The Review Room Performance
The second problem with RAG status is what it produces in review meetings.
When leadership looks at a dashboard full of green dots, there is nothing to discuss. The meeting becomes a performance of confidence. The PM reports good news. The director nods. The meeting ends early. Everyone goes back to their work feeling better than they should.
I call this the review room performance. It is not deliberate deception. It is a social dynamic produced by the incentive to present well. And it is deadly.
Because the purpose of a review meeting is not to report what is going well. The purpose of a review meeting is to surface what is at risk — early enough to do something about it. A review meeting full of green dots and comfortable nodding is a review meeting that has completely failed at its only real job.
The best review meetings I have ever been in were uncomfortable. They were uncomfortable because the data on the table was honest — and honest data in a manufacturing programme almost always contains things that need to be addressed. The discomfort was productive. It produced action. It changed outcomes. The worst review meetings were pleasant. Everyone agreed. Nothing was challenged. And three weeks later, the delivery failed.
Why This Is a System Problem, Not a People Problem
Before I continue, I want to say something clearly: the programme managers who keep statuses green are not bad people. They are rational people responding to a broken incentive structure.
In an organisation where amber status leads to scrutiny and green status leads to a comfortable meeting, choosing green is the rational decision. Every week. Until the situation is undeniable.
The people are not the problem. The system that asks people to declare their own health score — and then treats the declaration as data — is the problem.
The solution is not to demand more honesty from programme managers. It is to design a system where the health score is calculated, not declared. Where the number comes from actual task execution data, weighted by consequence, and computed automatically. Where the person whose programme it is has no ability to choose the colour.
What Calculated Health Looks Like
OPV — On-time Performance Velocity — is not a colour you choose. It is a number that emerges from your execution data.
It looks at every active task in your programme. It weighs each task by how long it was supposed to take. It measures whether that task is tracking to its planned date or drifting away from it. And it produces a number between zero and one that tells you, with brutal honesty, what percentage of your planned delivery capability you are actually achieving.
An OPV of 0.9 means you are executing at ninety percent of your planned velocity. That is healthy. An OPV of 0.6 means you are executing at sixty percent. That is not.
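To make the idea concrete, here is a minimal sketch of a calculated health score along these lines. The article does not publish the actual OPV formula, so everything below is an assumption: I take "weighted by how long it was supposed to take" to mean a planned-duration-weighted mean, and I model each task's on-time factor as 1.0 when it is tracking to plan, eroding in proportion to its slip. The `Task` fields and the `on_time_factor` helper are illustrative names, not Project Perfect's API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Task:
    name: str
    planned_days: int      # planned duration: the task's weight in the score
    planned_finish: date   # committed finish date
    forecast_finish: date  # current forecast finish date

def on_time_factor(task: Task) -> float:
    """1.0 if tracking to plan; shrinks as the forecast drifts past the plan.

    Assumption: each day of slip erodes the factor in proportion to the
    task's planned duration, floored at zero.
    """
    slip_days = (task.forecast_finish - task.planned_finish).days
    if slip_days <= 0:
        return 1.0
    return max(0.0, 1.0 - slip_days / task.planned_days)

def opv(tasks: list[Task]) -> float:
    """Planned-duration-weighted mean of per-task on-time factors, in [0, 1]."""
    total_weight = sum(t.planned_days for t in tasks)
    if total_weight == 0:
        return 1.0  # nothing planned, nothing drifting
    weighted = sum(t.planned_days * on_time_factor(t) for t in tasks)
    return weighted / total_weight

tasks = [
    Task("tooling", 10, date(2025, 3, 1), date(2025, 3, 1)),          # on plan
    Task("first article", 20, date(2025, 3, 15), date(2025, 3, 25)),  # 10 days late
]
print(round(opv(tasks), 2))  # -> 0.67: the longer task's slip dominates
```

The point of the sketch is the shape of the mechanism, not the exact weighting: the number falls out of task dates the PM cannot restate, and a long task drifting hurts the score more than a short one.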
You cannot choose your OPV. You can only influence it by actually improving your execution. Which means that when leadership sees it on a dashboard, they are seeing the truth — not the PM's management of the truth.
When the number is calculated, the conversation shifts from "how do we look?" to "what do we need to fix?" That is the only conversation that matters in a manufacturing review room.
The Cultural Change That Follows
When calculated metrics replace declared statuses, something changes in the culture of the programme team.
The PM stops managing perceptions. Because the system shows the truth regardless of what the PM reports, there is no benefit to softening the picture. The data is the data. The only question is what to do about it.
The review meeting stops being a performance. The discussion moves from "the programme looks healthy" to "OPV is 0.72 — which tasks are driving this and what is the recovery plan?" That is a fundamentally different conversation. It requires specific answers. It produces specific actions.
This transition is uncomfortable for some PMs — especially those who have built their reputation on confident presentations rather than accurate ones. But over time, in programmes where the health score is calculated and visible, a different culture emerges. Early warning becomes normal. Raising a risk at week minus-five is not seen as admitting failure — it is seen as doing the job properly.
Ask yourself: if your programme's health score is green today, is it green because the data says so — or because that is what you chose? If the answer is the latter, you are managing perceptions, not programmes. The organisations that confuse the two are the ones whose customers stop being surprised when deliveries fail.
The Alternative Is Not More Reporting
I want to address one common response to this argument: "We should just ask PMs to be more honest."
This does not work. Not because PMs are fundamentally dishonest, but because the incentive structure does not change. As long as amber status leads to scrutiny and green status leads to comfort, PMs will choose green. Every week. In every organisation. Across every industry.
The alternative is not more honesty training. It is a different measurement system — one where the health score cannot be gamed because it is calculated, not declared. One where the PM's job is not to choose a colour but to respond to the number the system has produced.
That system exists. It is not theoretical. And the organisations that use it stop being surprised by delivery failures — because the truth was on the screen the whole time.
Your Projects Don't Have to Fail
This article is based on Book 1 of The Execution Series — five books on manufacturing programme management written from 25 years of experience. The Number and The Signal cover OPV, LFV, and the Risk Number in full. All five books are free.
Request the Free Series →

OPV and LFV — calculated automatically. Not declared.
Project Perfect computes your programme health from actual task execution data. No colour choices. No managed narratives. Just the number — so leadership always sees the truth, and PMs always know where to focus.