If your customers don't talk, NPS is a vanity metric (elliotcsmith.com)
38 points by smitec 88 days ago | 55 comments



I work in a not-for-profit, functional-monopoly space: a member body distributing resources according to policy.

I can get why we do satisfaction scores, but NPS never made sense to me. Like, does the justice system do NPS for family law cases? It's not like you can get divorced in a 7-11 by the clerk, and even mediation is a process inside the model. So for NPS in a non-competitive, self-regulated monopoly... what does it even mean to recommend to others?


>what does it even mean to recommend to others?

I will give you a real-world example. My girlfriend works for the government and they have an initiative to do outreach to people in the local community and let them know of all the funded help available to people for different situations.

However, they can't go to all 30k+ households at once and even in targeted areas (which is the way it works now) it takes a very long time to get to everyone.

The thinking about a recommendation is that perhaps they go to a home and let the person know about all the free programs available and leave a list with phone numbers. And this person might not need that information at the moment, but maybe they know someone in their family or from church or elsewhere that could use it and they let them know.

I do understand that if you help people who have lost their home due to fire, it is silly to do a survey and ask them whether they would or did tell someone else in case their home burns down. However, it isn't across-the-board silly, even for not-for-profit cases.


That's a good counter example. Not "would you recommend" but "if you were trying to help somebody do you believe we've done enough to make it easy to explain how to use us"


On a scale of 0 to 10, how likely are you to recommend the use of NPS as a metric to a fictitious competitor in your not for profit functional monopoly space?


"I wish all my fictitious competitors would do this"


7


> I need you to understand that people don't have conversations where they randomly recommend operating systems to one another

Maybe you are in the wrong crowd, but in the crowd that does make recommendations on operating systems, it is rarely Windows that gets recommended.


Software is so abysmal that when I find a good product I scream it from the heavens. For example, I've moved multiple teams off whatever ticketing system they were using and onto Linear.app, which was extremely painless and paid dividends.


Yeah, reminds me of that episode of the Corecursive podcast literally called "Software that doesn't suck". Just reaching the threshold of "people use this software daily and don't think it sucks" is so rare that it's a point of pride.

https://corecursive.com/software-that-doesnt-suck-with-jim-b...


Instead of asking “would you recommend”, shouldn’t you rely on “how did you learn about us?” or “why did you buy our product?”, the first one being the most reliable, as it is a neutral question about a fact.


"Would you recommend" is probably easier to track as an OKR. You can keep asking it every quarter for new data to pat (or smack) yourself with.


No, the goal of tracking NPS is to find out whether the customer enjoys the product enough that they would recommend it. Whether the customer would recommend it to other potential customers is almost completely irrelevant. The article is the author realizing what NPS is meant for, not some hot take.

Asking "How did you learn about us?" is a question that helps evaluate your marketing and sales pipeline. Asking "Would you recommend?" is about helping your product development process. Your sales / marketing team can't effectively influence the NPS (only thing they could do is divert marketing from customers that wouldn't be satisfied by the product).


Ok. I am clearly out of my comfort zone here. So NPS is not a proxy for organic marketing success, but for customer satisfaction. Is the article’s point then that the question is sometimes too hypothetical?

I still think that it is useful to track actual recommendations when that is expected to happen.


The article is literally just not on point. They're confused about what NPS is for, and wrote a confused article about it.

I agree that it's very useful to track where your leads come from, and I love the short radio button lists that allow you to select a source.

If you feel like you should only ask one question, and it's either that or the NPS question, then it really depends on what your goals are at that point. Are you trying to figure out if your customers are happy with your product, or are you trying to figure out which of your marketing channels is most effective?

I think tracking NPS over time is really powerful: then you're tracking not an absolute number, but whether customers have recently changed their opinion of you.


Also it’s something you can track over time. It’s not really meant for marketing; it’s meant to gauge how your customers view your company and which areas of operations may need improvement.


At scale, a relevant number of your customers talk. One example in TFA is that customers do not have conversations where they randomly recommend operating systems to each other. Well, I'm sorry, but in my bubble they certainly do. I have recommended more operating systems than car models to others.

The other thing the article misses IMO is that detraction is also growth, albeit negative growth (and additionally, people are in my experience much more likely to passionately recommend _against_ something they hate than to recommend something they love). So the NPS tells you a thing or two about:

- Your potential to utilise whichever chance you have to grow via word of mouth

- Your potential to squander that same chance due to people hating your product

- Your potential to have negative growth because your customers are leaving in droves.


I never understood the logic behind that asymmetric ranking scale.

Around 16 years ago our CEO was talked into using this. After 3 quarters of using it, he killed it because it wasn’t making any visible change in sales, the only metric he cared about.


Essentially it says few customers will move the needle on bringing you new business, only the ones who rave wildly about you. Hence "Customer obsession", as Amazon's Bezos used to put it.

Since NPS and changes in NPS are measured by surveys, not by attributing individual sales, it's a KPI that's gameable (especially when used on its own); it's near-impossible to tell whether "improving" NPS in some existing-customer segment from '5' to '7' results in anything tangible. On the other hand, you can actually measure sales and attribute which channel they came from.


There's irony in bringing up Amazon here: from what I've seen, the thing most people rave about when it comes to Amazon is its customer service. Everything else is pretty mid: the products are poor quality, sometimes counterfeit, and, if you are not careful, way overpriced.

Amazon is not selling customer service as a separate product, yet that's the product that most customers really rave about, even though they don't really like anything else about Amazon.


I said "used to put it". In the early days of Amazon, it totally differentiated itself from other online and brick-and-mortar sellers of books(/CDs/etc.) on price, speed, accuracy, customer support, responsiveness.

Obviously it has evolved since into a huge organization and many other things have changed.


I occasionally answer a customer support survey, just because I know the person on the other end probably gets to keep their job or not based on some average score, but I am not wasting my time answering an NPS survey for some random app or company that I've used. It's just not worth my time, and I get too many surveys to care.


Every time you get one of those surveys rank them at zero, then add "Net Promoter Score is a flawed vanity metric and shouldn't be used for business purposes" in the comment box. Sometimes I link the Wikipedia NPS "Criticism" section as well.

Most places don't care about the results from an actual customer service perspective. The above gets crickets, not even an auto responder.

For companies that do care (tiny startups, mostly) I've gotten IMMEDIATE personal email responses from CEOs and founders asking what they can fix for a zero NPS. That's a great place to link the criticism section if not done previously, and to provide useful, raw feedback on what you love/hate about their products.


This tanks the evaluation of any individuals you interacted with while dealing with the company, which can impact their pay or even push them towards getting laid off for low performance. So I'd advise caution when applying this particular idea, since many employers use these surveys to decide who to fire.


That's "negotiate with terrorists" logic. I'm not going to pretend to take some company's bullshit seriously because they implicitly threaten to fire their employees at random if I don't.

(I do advocate for laws against arbitrary firings and encourage employees to unionise and/or move to jurisdictions with strong labour laws).


I always find this metric bizarre when companies ask about it. Everything is in either a category of things I’m not going to bother talking about, or I’ve already recommended it to people who might care, or I’ve already anti-recommended it to those people. Those are 0, 10, 0 I guess. I’m never like “well I haven’t mentioned it yet but there’s a 60% chance I’ll recommend it at some point”

From when I worked at a company that used it, I seem to recall it was actually just used as a binary too: 8+ is good and everything else is bad, or something like that. So it’s weird that they collect it with such fake precision.


I did not realise they meant it literally.

Would I recommend a product to others? Yes. Does it ever come up in conversation? No. Do I go around telling random people about this product? No.


I work at a company where they try NPS with both customers and staff. Both of these NPS results used to be available in very fine-grained (anonymous) reports... that is, until quite recently.

Previously we had access to all the freeform comments, such as "as a customer we need feature X", or "as a staff member we want to see more transparency around Y".

Today, after a few particularly turbulent quarters including layoffs, all we get to see are summarized versions of the staff NPS.

Vanity project indeed.


NPS is very useful at millions-of-customers scale, provided you don't conduct the research yourself and instead pay for it to be done correctly by a firm that knows how to extract the information fairly. Basically: NPS is a solid late-stage business tool, but it isn't applicable until your customer base is sprawling and you want to understand the mechanics of your brand more.


NPS: Net promoter score.

Expand your initialisms, folks.

<https://en.wikipedia.org/wiki/Net_promoter_score>


My answer is usually: I think your product is great, but I'm not going to recommend it to anyone.

So you get a zero even though your product is great.

Ask the right question!


I would go a step further: NPS is a garbage metric.

First, you start by assuming your customers can even reasonably ascertain their likelihood to recommend. They can't; there are people who answer 10 but will never recommend, and there are people who answer 0 but already have and will again.

Next, you assume your customers are idiots and don't know how an 11-point scale works by adjusting the midpoint: Instead of 5, the middle is now 7 and 8.

Then you realize there are too many numbers, so you throw several out by reducing your 11-point scale to a 3-point scale, after which you re-interpret "unlikely to recommend" as "likely to snag some other customers on my way out the door."

Finally you calculate your 'net promoters' by subtracting the percentage of low scores from the percentage of high scores to give you a nice round number that doesn't correlate with what's actually happening in the real world.
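
For concreteness, here is that whole calculation as a minimal Python sketch, using the standard buckets (9-10 promoter, 7-8 passive, 0-6 detractor):

    def nps(scores):
        """Net Promoter Score from raw 0-10 survey answers."""
        promoters = sum(1 for s in scores if s >= 9)   # 9 and 10
        detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
        # passives (7-8) count only in the denominator
        return 100 * (promoters - detractors) / len(scores)

    # 40% promoters, 40% passives, 20% detractors -> prints 20.0
    print(nps([10, 9, 8, 7, 3] * 20))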

And this is just what happens when you do it 'the right way.'

NPS is said to measure growth using loyalty as a proxy. But then, what does that have to do with recommendations? Nothing.


Bahaha oh man this brings back memories. I worked for a startup where the CEO spent weeks working on determining our NPS and talking our ears off about it. At the end of the process, he was so happy - our NPS was higher than Apple! I didn’t really know what that meant but I figured he had an MBA and knew what he was doing.

The startup never became profitable and ran out of investor money 18 months later.

There were many things wrong with the company but this was one of the things that made me most feel like I was in Office Space.


Based on practical experience, NPS is garbage because:

1) Even with a stable mean and median, NPS tends to vary month over month, at least in my B2B settings, where samples are probably much smaller than for B2C. Then management goes nuts because of very subtle shifts in the distribution, amplified by NPS' arbitrary aggregation into promoters, neutrals, and detractors (the simulation sketched after point 2 illustrates this). Of course, investors are often married to NPS, so educating management does not solve the problem.

2) NPS varies unreasonably across cultures. We used to say, somewhat tongue-in-cheek, that NPS is a US-centric metric, where things are either amazing or awful (with little space in between). E.g., in northern/central Europe, an 8 can be pretty amazing.
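
To put numbers on point 1, here is a hypothetical simulation (a made-up score distribution, not data from my actual surveys): sample 50 respondents a month from one fixed underlying population, and the bucketed score swings by double digits while the mean barely moves.

    import random, statistics

    random.seed(1)
    # fixed "true" population: 40% promoters, 45% passives, 15% detractors
    population = [10]*25 + [9]*15 + [8]*30 + [7]*15 + [5]*10 + [2]*5

    def nps(scores):
        p = sum(s >= 9 for s in scores)
        d = sum(s <= 6 for s in scores)
        return 100 * (p - d) / len(scores)

    for month in range(1, 13):  # 12 "months", same population every time
        sample = random.choices(population, k=50)
        print(f"month {month:2}: mean={statistics.mean(sample):.1f}  NPS={nps(sample):+.0f}")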


NPS is the electronic version of a poorly run focus group. People will assume all sorts of things and bring in all manner of bias. You may get some data, but you have no idea how accurate it is. Even worse, you have no idea why they answered what they did, or how to fix things to make your customers happier.


Not defending NPS here - I don't use it - but some of your assumptions are wrong.

> They can't; there are people who answer 10 but will never recommend, and there are people who answer 0 but already have and will again.

What matters with NPS is trend over time, and getting the numbers at a scale. Yes, there are people who randomly click on one end of the scale or the other, but the assumption is that on average the portion of these people is stable.

> Next, you assume your customers are idiots and don't know how an 11-point scale works by adjusting the midpoint: Instead of 5, the middle is now 7 and 8.

This does not come from the assumption that customers are idiots; it comes from the idea of treating people who vote "in the middle" not as neutral, but as detractors. Which makes sense: if someone tells me "hey, I know Product X and it's meh", then I'm less a promoter and more a detractor.

> Then you realize there are too many numbers, so you throw several out by reducing your 11-point scale to a 3-point scale

The 3-point scale was the goal all along, though; it's the idea of an asymmetric scale that leads to the 11-point-to-3-point reduction.

> after which you re-interpret "unlikely to recommend" as "likely to snag some other customers on my way out the door."

If your assumption is that promoters drive positive growth, it's fair to assume that detractors drive negative growth by recommending an alternative. If you believe in that core assumption that NPS measures word of mouth, then this interpretation of "likely to snag some other customers on my way out the door" is a sensible one.

> NPS is said to measure growth using loyalty as a proxy. But then, what does that have to do with recommendations? Nothing.

I don't think the underlying assumption is bad. That's how influencers work: people are more likely to buy something that is being recommended to them by someone they trust and someone who is passionate about the product.

Does NPS work? I don't know - I'm not using it, as I said above. But at least the assumptions under which NPS is designed, on top of the idea of word of mouth as a growth driver, seem solid to me.


> Yes, there are people who randomly click on one end of the scale or the other, but the assumption is that on average the portion of these people is stable.

That assumption is predicated first on the idea that people can and will tell you their likelihood to recommend within a reasonable degree of accuracy. I don't think they do.

1: https://www.xminstitute.com/data-snippets/gap-consumer-recom...

2: https://hbr.org/2019/10/where-net-promoter-score-goes-wrong

> If you believe in that core assumption that NPS measures word of mouth, then this interpretation of "likely to snag some other customers on my way out the door" is a sensible one.

But that's not the scale given to the respondent. The scale is given as going from "not at all likely" to "very likely" to recommend. There isn't an option for likely to recommend against. The low end of the scale probably captures some, but to assume it is near 100% is a mistake.


Re. your first point: when working with NPS, you would not actually assume that 1,000 people who voted the highest possible value translate into 1,000 actual recommendations. I think not even the most vocal proponents of NPS would claim this. What matters is the _trend over time_, and that trend allows drawing conclusions regardless of actual individual customer behaviour, unless you have reason to believe that what customers actually do is completely independent of the score they give. If today 50% of respondents rate positively and tomorrow 25% do, then for large enough pools of respondents something has gotten worse, and it's not a stretch to assume that the total number of recommendations going around for your product will go down.
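
To put a rough number on "large enough", here's a back-of-envelope sketch (made-up quarterly responses; the standard error treats each answer as one draw of promoter/passive/detractor from a multinomial):

    from math import sqrt

    def nps_with_se(scores):
        n = len(scores)
        p = sum(s >= 9 for s in scores) / n   # promoter share
        d = sum(s <= 6 for s in scores) / n   # detractor share
        # variance of (p - d) for a three-outcome multinomial sample
        return 100 * (p - d), 100 * sqrt((p + d - (p - d) ** 2) / n)

    q1 = [10]*180 + [8]*150 + [4]*70    # 400 responses, NPS +27.5
    q2 = [10]*150 + [8]*150 + [4]*100   # 400 responses, NPS +12.5
    (n1, s1), (n2, s2) = nps_with_se(q1), nps_with_se(q2)
    print(f"Q1 {n1:+.1f} (se {s1:.1f}), Q2 {n2:+.1f} (se {s2:.1f})")
    print("likely a real drop" if n1 - n2 > 2 * sqrt(s1**2 + s2**2) else "could be noise")

With 400 responses per quarter a 15-point drop clears the noise floor; with 50 responses it would not.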

Re. your second point: You are right that it is not explicitly asked for, but that does not imply that the core assumption would be faulty.

So much depends on the actual product being sold, and many other aspects, but being an imperfect metric or only a rough approximation does not imply that the metric itself is garbage, as the comment I replied to claimed.

Here's a simplified example: Let's say I own a web shop, where growth implies a growth in sales. Someone on the positive end of the scale is more likely to shop again (contributing to keeping the current growth rate stable), and more likely to recommend my shop to others (contributing to positive growth). Someone in the middle or on the negative side is less likely to shop again (contributing to negative growth) or even actively recommend against my product (counteracting any positive growth the promoters would cause).

Does the NPS tell me anything about _actual additional sales_ I can expect? No. Does it tell me anything about _actual customers I will lose_? Also no. But it is one predictor of future growth, and as such useful.

Does my speedometer tell me if I'm driving in the right direction? No. Does it show how many traffic jams are ahead? Also no. But it is one predictor of when I will arrive at my destination, and as such useful.


Thank you for taking the time to explain this to people who think NPS should be interpreted literally.

No, it’s really just a decent enough proxy for how your product is trending over time among your users.

Individual responses are uninterpretable. A single NPS survey is also not very interpretable. But with enough people and over enough time, it is a useful signal.


I’m inferring that NPS here refers to “Net Promoter Score” [0]. Presumably within his niche, the author can assume his readers already know that.

[0] https://en.wikipedia.org/wiki/Net_promoter_score


I never knew this was an actual "thing."

Like, I've seen these for years. It feels like 5 seconds after I open any given website a "rate us on 1-5" popup shows up right in the middle of the screen. I assumed it was just some thing that's automatically thrown in just to annoy users and has no practical purpose, like the cookie warnings (that are ignored regardless of what you select), the email spam requests (which nobody reads despite what people claim [if someone out there wants to get angry and claim their daily spam emails and annoying popups do good for their business, go ahead {I will laugh at you}]), and the "subscribe/follow us on (social media)" that plague every site these days.

Knowing that some manager thinks this is valuable info, and that it may decide their job, is just hilarious. I used to pick numbers at random just to dismiss the popup, but now I'm motivated to actively mess with them.


I suppose I should have known they're being used for something, but I tend to not answer these because they don't seem answerable. I had a surgery a few weeks back and had an e-mail asking me to rate the experience afterward. Anesthesia results in most memories of the direct experience not existing, so I can't rate that. My satisfaction with the surgery itself depends on the ultimate outcome of it, which I can't possibly know within a day.

It's the same thing with regular product purchases. You get these "would you recommend?" or "would you purchase again?" solicitations either immediately after receipt of the product or within a few days, or solicitations to go leave a review somewhere. How am I supposed to rate something I've owned for somewhere between minutes and days and have barely used? Many of my most disappointing purchases seemed like great things when they first arrived or I first put them together, but after prolonged usage they showed serious flaws or simply stopped working.


Marketing executives often have it as one of their KPIs.

Clearly the less fuzzy way to define it is "How many people have you recommended this product/service to in the last n months?" not "Would you, if asked?"

Also, whether customers talk to each other (directly, privately) is not the only thing; there are online reviews, resources like HN, etc. And it's always hard to tell what's compensated and what isn't, e.g. Snowflake's "grassroots" testimonial campaign on LinkedIn, run with a huge budget.


I would have expected everyone to have heard of it by now. I first saw it mentioned on HN in 2017 or so. Good reminder that there's always newer folks around and to define the terms!


That has nothing to do with newer folks, more with interest in certain topics.


"Nonsense per second" is probably a more accurate definition. Or maybe "per sale"?


I know this is silly but for a split second I gasped when I saw the name of the author here.


One day I hope to do enough that Google doesn’t suggest my name is a typo.



NPS, Myers-Briggs Type Indicator, and Astrology are all things that seem scientific, get a lot of attention, and are junk.


As an INTP Virgo who's unlikely to recommend products to your friends or colleagues, you would say that.


The problem with MBTI is that it reduces a continuum to a binary for each of the letters. IIRC, the basis for what each pair of letters means isn't any worse than any other personality test.

Astrology isn't about stars. It's about cold reading if done in person, and about the art of writing descriptions that sound specific but actually apply to pretty much everyone.

NPS is a way to reduce a histogram to a scalar.


Astrology has some validity if you dig deep enough:

* when should we plant this crop?

* how do I generate a random number if literally any human bias is worse than a random choice?

* were you the oldest (or youngest) in your class at school?


I like to call MBTI "corporate astrology" and then my colleagues retort with "that's exactly what an ESTJ will say".


At least astrology has some interesting historical background.

That is the only redeeming quality of any of them, however.


MBTI gives young people who are feeling lost a reassuring sense of self-discovery. Sometimes you just need a tidy four-letter label with a generic description to pretend to understand the chaos of your own existence.





