Hey friends,
Welcome back to Theory of Change. This is episode eight of the Anti-Pattern Editions, a season of newsletters looking at overused and misunderstood frameworks that have been ported into nonprofitland from the product and tech world, with varying results.
This week: Net Promoter Score.
If you've ever been asked "On a scale of 0 to 10, how likely are you to recommend us to a friend?", you've been part of the NPS ritual.
The promise is one magic number that captures loyalty, predicts impact, and gives leaders a dashboard to steer by. It's the Fitbit of donor/member/community sentiment, basically.
But of course, in our world, one number rarely tells the whole story. And, when we start to pick NPS apart, we find much bigger questions about the nature of not just feedback, but mission-driven work in general.
So, letâs get into it.
(If you were forwarded this and would like to subscribe, please go ahead. It would make my day).
|
|
What it's supposed to be
Fred Reichheld introduced NPS in the early 2000s as a replacement for sprawling customer satisfaction surveys. The genius was simplicity:
Ask one question about likelihood to recommend.
Classify people as Promoters (9–10), Passives (7–8), or Detractors (0–6).
Subtract detractors from promoters and voilà: your Net Promoter Score.
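If you like seeing the arithmetic spelled out, the three steps above fit in a few lines of Python. (A minimal sketch: the function name and the sample scores are mine, not from any official NPS tooling, but the 9–10 / 7–8 / 0–6 bands are the standard ones.)

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) count
    only in the denominator. Returns promoter % minus detractor %.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 8, 7, 7, 3, 5, 6]))  # → 10.0
```

Note that the score can range from −100 (all detractors) to +100 (all promoters), and that passives drag it toward zero without ever moving the numerator.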
Nonprofit organisations started to love it sometime around 2010, as I recall. And I get why: itâs easy to benchmark, presents well to boards, and (supposedly) correlates with revenue growth.
For development agencies, it looked like an efficient way to listen to beneficiaries without imposing lengthy surveys. For fundraising teams, it offered a plausible stand-in for donor loyalty. In both cases, the attraction was (and remains) the same: one question, one number, a common language.
Supposedly.
|
|
Where it falls apart
NPS's elegance unravels once we step outside the consumer marketplace. The first problem is predictive power. In my experience (and there's a bit of anecdotal evidence around the web that backs this up), many organisations find that "promoters" don't reliably give more €€€, stay longer, or advocate more than "detractors."
The second problem is that a single number is too blunt to guide improvement. It collapses the complex realities of donor trust or beneficiary dignity into a binary of promoter and detractor, offering little sense of what actually needs attention.
A donor might feel deeply connected to the mission but frustrated by opaque governance; a community member might find the service useful but humiliating in its delivery. Both realities vanish into a single digit. The Fund for Shared Insight, for instance, found that responses often cluster heavily at the top end (i.e. loads of 10s), making it hard to distinguish meaningful variance.
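To make that bluntness concrete, here's a small hypothetical sketch: a polarised audience (half delighted, half alienated) and a uniformly tepid one land on exactly the same score, even though they call for completely different responses from the organisation. (The audiences are invented for illustration.)

```python
# Two very different audiences that produce the identical NPS.
polarised = [10] * 5 + [0] * 5   # five delighted, five alienated
tepid     = [8] * 10             # everyone mildly satisfied, no one enthused

for name, scores in [("polarised", polarised), ("tepid", tepid)]:
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    score = 100 * (promoters - detractors) // len(scores)
    print(name, score)  # both print 0
```

An NPS of zero here could mean "we are delighting and failing people in equal measure" or "nobody feels strongly either way", and only the underlying distribution tells you which.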
(Also: the very act of recommendation can be meaningless or even inappropriate. Would you "recommend" registering to vote to a friend? It's a civic right, sir/madam, not a consumer preference).
|
|
A better way to think about it
The value of NPS therefore lies not in the number but in the discipline of asking. We just have to be careful with our questions.
For donors, rather than asking about recommendations, consider whether it is more useful to ask how confident they are that their contribution makes a difference, or how likely they are to give again.
For beneficiaries, it might be better to ask whether the programme/service/thing made a positive difference in their life, or whether they would return if they needed to. These questions are closer to the behaviours that matter in purpose-driven work.
Equally important is to treat the score as the beginning of a conversation rather than its end. It is the follow-up "why" that reveals the roots of dissatisfaction or loyalty (most of you will be familiar with the Five Whys, I'm sure).
Taken lightly and often, NPS can point to trends, prompt curiosity, and create space for course-correction. Elevated into an OKR (see last week's newsletter if you missed it!), it becomes one more number to be gamed, and loses its value.
|
|
Try it this week
If you're heading into a strategy day or even just a team meeting, run a quick NPS inversion exercise.
Take the classic question ("Would you recommend us?") and rewrite it for your own context. For a small donor, perhaps: "Would you give again?" For a newsletter: "Was this worth your time?" For a community service: "Did this make life less precarious this week?"
Once youâve drafted a handful of alternatives, sort them. Put three columns on a flipchart or Miro board: Behaviour (what people actually do), Relationship (how they felt treated), and Impact (what changed for them).
Now place each new question you drafted under the heading that best describes what the answers will tell you. "Would you give again?" is looking to map behaviour. "Did you feel respected?" is about your relationship. "Did this support make the week less precarious?" is trying to gauge impact.
Many nonprofits discover that their surveys over-index on satisfaction (relationship!), underplay actual behaviour, and only vaguely touch on impact. Seeing this imbalance laid out makes it harder to ignore and usually prompts a richer discussion about what kind of feedback you really need.
If you are feeling brave, pick one question from each column and test them in your next member survey, newsletter check-in, or community consultation.
In addition, if I've piqued your interest, you can go even deeper with these eight high-quality listening and feedback principles and this research on nonprofit data collection best practices (not as dry as it sounds, I promise).
|
|
Future Perfect
Working on your mission statement? Who isn't.
Future Perfect is a simple methodology I've devised after decades of rewriting mission and vision statements so they're clear, human and fit for purpose.
I created a 15-page workbook that helps you rewrite yours, using everything I've learned. You can download it here for free.
|
|
👋 WAVE GOODBYE 👋
What NPS really reveals is not the power of a single number but our own craving for simplicity. The corporate world promises us that if we can just find the right proxy, everything else will fall into place. Our sector borrows that logic because, frankly, it's tempting: funders want dashboards, boards want benchmarks, and we want the reassurance that loyalty and impact can be boiled down to a tidy score.
But mission-driven work resists that kind of compression. Loyalty is messy, trust takes years, and dignity rarely shows up in metrics. Which doesn't mean we should stop measuring; only that we should measure in ways that create learning, not illusions.
On a personal note, I've been thinking about this while sorting through the chaos of my own inbox. There are newsletters I never recommend to anyone, yet I always open them because they help me think differently. Others I've forwarded to friends with great enthusiasm, only to quietly unsubscribe a month later. Loyalty, it turns out, is not a score but a relationship, sometimes steady, sometimes fleeting, often hard to predict.
All that to say: thanks for being here. For reading to the end. And for sharing, should you feel so inclined.
Adam
p.s. New here? Welcome! Sign up here to get this newsletter every Thursday.
p.p.s. If you're someone who, like me, thrives on curiosity and spinning plates, you might enjoy my latest video about the four multipassionate models of entrepreneurship.