A recent blog post on the telecom and mobile publication Billing and OSS World discussed the merits of the Net Promoter Score, or NPS. The writer suggested that the methodology could lead to erroneous conclusions because many friends and colleagues are not the relevant audience for a given product or service, and therefore a survey respondent may understate their satisfaction level.
NPS was pioneered by Fred Reichheld, who has written several best-selling books, including The Loyalty Effect, and is a frequent contributor to the Harvard Business Review. His research on customer satisfaction and loyalty led him to conclude that a customer’s answer to a single question, “How likely are you to recommend this product or service?”, is the strongest predictor of loyalty and customer retention, and that this one question is nearly as reliable as much longer customer satisfaction surveys. Reichheld wrote a seminal book on the topic, The Ultimate Question, and launched a movement of sorts around NPS as a critical customer-satisfaction methodology.
In recent years, his methodology hasn’t been without critics. I found the criticism of NPS in the OSS World blog unwarranted, but at the same time it offers good insight into survey design. The writer’s criticism focuses on the fact that not all products and services are relevant to all consumers. Therefore, a question about a hypothetical recommendation could yield misleading data, presumably because some people may say they would not recommend a product or service (because it wouldn’t be relevant to their friends) when in fact they are happy with it, creating “false negatives” in the data. Here are a couple of the writer’s examples:
“Nor would I recommend, even though I thought it was great, the latest Richard Dawkins book because I know it would irritate the tender sensibilities of their spirituality. And I wouldn't recommend my favorite steak place to my vegan friends — actually, I only have one of those, too. So why would I ever make a blanket statement that I would recommend my wireless provider to them?...I am delighted with my iPhone, but (1) I don't pay for it and (2) I would not recommend it to my friend who loathes computers even though I try to tell him it is just like using that machine at the Off-Track Betting Parlor he goes to every Saturday.”
This argument overstates a corner case. In my experience, consumers don’t over-think a recommendation question; they generally understand the spirit of the question and answer accordingly.
That said, there is an excellent takeaway from the author’s concern. Much of this criticism, and potentially survey inaccuracy (however small), can be prevented by wording the question effectively. Instead of bluntly asking, “Would you recommend product X to a friend?”, the wording can be, “Would you recommend product X to someone who is considering this type of purchase?” So in the restaurant example, the question would be, “Would you recommend Cardiac Steaks to a friend who is looking for a good steakhouse?” If the answer is “no,” it’s not because the “friend” is a vegan; it’s because they were looking for a good steakhouse, and apparently Cardiac Steaks doesn’t qualify. That’s highly useful and highly efficient. Another variation of the question could be, “Based on your experience with product X, would you still make the same decision to buy it if you had to do it over again?”
What I like about NPS is that it quickly gets to the heart of satisfaction and cleverly segments customers in a way that is meaningful to the business, while using a survey format that is not so burdensome to participants that it drives down participation rates.
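For readers unfamiliar with the mechanics behind that segmentation, the standard NPS calculation buckets 0–10 responses into promoters (9–10), passives (7–8), and detractors (0–6), and reports the percentage of promoters minus the percentage of detractors. A minimal sketch (the function name and sample data are illustrative, not from the post):

```python
from typing import List

def net_promoter_score(scores: List[int]) -> float:
    """Compute NPS from 0-10 answers to 'How likely are you to
    recommend this product or service?'

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but neither add nor subtract.
    """
    if not scores:
        raise ValueError("at least one response is required")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Ten responses: 5 promoters, 3 passives, 2 detractors
# -> (5 - 2) / 10 * 100 = 30.0
print(net_promoter_score([10, 10, 9, 9, 9, 8, 7, 7, 6, 3]))
```

The score ranges from -100 (all detractors) to +100 (all promoters), which is why a single well-worded question can still segment customers meaningfully.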
Lastly, in a B2B purchase, especially for high-cost products or solutions, I would not restrict a satisfaction survey to a single question. Adding a few questions, with minimal burden to the respondent, can add a great deal of information and also mitigate the risk of weak or spurious data.
Varolii has used the NPS methodology for the past two years. We ask ten questions in our survey, all rating scales except for one optional open-ended question. One of the questions has seven sub-components (also ratings), so I suppose that makes it 16 questions, but three of them are optional, so we’re back to 13. There is always a trade-off between depth of information and participation/survey fatigue, but even a reluctant participant can get through it in under five minutes, so we believe we’re close to the right balance.
My recommendation is to stick with NPS if you’re currently using it, or try it if you aren’t, and judge the results. Like any survey, invest up front in the design, including the specific wording of questions. Be a critical thinker and look for ways the wording could be misinterpreted by respondents or lead to bad data. A great way to do this is to “begin with the end in mind,” as Stephen Covey put it. Start by thinking about what you want the final report and data set that you ultimately present to your manager, stakeholders, or teams to look like: what specific charts do you want to show? What variables do you want to correlate? And so forth. Then build the survey tool that will give you the data and reports you want.