A big thank you to everyone who completed the IT Performance
Company Customer Survey. The results are in, and here’s our Net Promoter Score
compared with those of a clutch of IT services companies:
[Chart: IT Performance Company Customer Survey, August 2013 (31 respondents), compared with Temkin Group Q1 2012 Tech Vendor NPS Benchmark Survey]
We can say, hand on heart, that our lovely survey respondents gave us a better Net Promoter Score (NPS) than 800 US IT professionals gave IBM IT Services last year. What does that tell us? Is this a valid comparison? Is NPS useful for internal IT providers?
The Net Promoter Score is undoubtedly fashionable at the
moment and the market awareness driven by Satmetrix and Bain & Company seems to have endowed
it with some credibility. According to Google Trends, interest is still on the increase.
[Chart: Google Trends web search interest and forecast for ‘Net Promoter Score’]
The Net Promoter Score has its critics too. Wikipedia covers some of the arguments and Customer Champions have a nicely balanced article. It’s right for measures to be held up for scrutiny, especially when attempting to use NPS to predict financial performance or to benchmark disjoint groups (as with our example above). Equally, we shouldn’t obsess about data quality perfection: the presence of some error doesn’t invalidate the measure, as long as we interpret it in context and couple it with cause-and-effect improvement action.
NPS is derived from the response to the so-called ‘Ultimate
Question’: “How likely are you (on a scale of
0-10) to recommend our company/product/service to your friends and colleagues?”.
Respondents who score you a 9 or 10 are ‘Promoters’, those giving you 0-6 are
‘Detractors’ and those giving 7-8 are ‘Passives’. The % of Promoters minus the %
of Detractors yields the NPS, on a scale of -100 to +100.
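As a minimal sketch of that arithmetic in Python (the sample figures are invented for illustration, not taken from our survey):

```python
def nps(scores):
    """Net Promoter Score from raw 0-10 'likely to recommend' responses."""
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 count as Passives
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical example: 20 Promoters, 9 Passives, 2 Detractors out of 31
print(nps([9] * 20 + [7] * 9 + [5] * 2))  # 58.06...
```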
The
recommendation question itself is powerful because it goes beyond satisfaction
and loyalty to seek a measure of active advocacy. Customers are being asked whether
they would ‘sell’ on your behalf and, in doing so, make a prediction about the
outcome and put their own reputation on the line. In essence, they are expressing their
confidence that you will do a good job and not embarrass them.
It’s
very, very hard to get an NPS of 100 because even those who score you an 8
don’t get counted, and the scale is heavily weighted towards the Detractor
bucket: seven of the eleven points count against you. The aggregate NPS also throws away the Likert-scale resolution, so it’s
important to look at the distribution of scores; here is ours, for example:
[Chart: Distribution of responses to the ‘likely to recommend’ question in the IT Performance Company Customer Survey, August 2013 (31 respondents)]
Would we have shared this chart if there had been any Detractors? The point is that having both an aggregate comparator score and a differentiating scale yields useful insight from a single recommendation question. The insight is richer still if a second qualifying question is asked about what could have been done better, e.g. “What would have made you score us higher?”.
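Producing the distribution alongside the aggregate takes only a few more lines; a minimal sketch, again with invented responses:

```python
from collections import Counter

def score_distribution(scores):
    """Tally of responses at each point on the 0-10 scale, including zeros."""
    counts = Counter(scores)
    return {point: counts.get(point, 0) for point in range(11)}

# Hypothetical responses, for illustration only
print(score_distribution([9, 9, 10, 8, 7, 9]))
# {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 1, 8: 1, 9: 3, 10: 1}
```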
I’d expect all enlightened internal or external IT providers
running ‘IT as a business’ to have some strategic customer objectives. These
may be coated in slippery management-speak which needs to be scraped off to
expose an observable performance result. See Stacey Barr’s post on weasel words.
If there’s an objective in the customer perspective of the IT
strategy map like “Customers trust us enough to recommend us”, then the Net Promoter
Score will be a strong candidate for an outcome measure. The responses to the open-ended qualifying question should then give clues that drive improvements in the things
customers value, such as “Call-back commitments are met”.
Monopoly IT providers might be tempted to dismiss the
relevance of the ‘Ultimate Question’ when their consumers don’t have an
alternative, or external benchmarks aren’t valid (and anyway, their strategy is
cost reduction). This is to reject the power of the recommendation question for
hearing the ‘Voice of the Customer’, the foundation of Lean IT Thinking for
reducing waste. Instead, experiments should be used to analyse (by correlation)
whether the Net Promoter Score is a strong indicator of some dimension of the
IT service, so that improvements can be prioritised by customer value and questions
answered like: “If we change X, what happens to NPS?”.
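As a sketch of that kind of analysis, the snippet below correlates a hypothetical monthly ‘call-back commitments met’ rate against monthly NPS. The names and numbers are invented for illustration, and statistics.correlation needs Python 3.10+:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Invented monthly figures, for illustration only
callback_rate = [0.71, 0.78, 0.80, 0.85, 0.91, 0.93]  # fraction of call-backs met
monthly_nps   = [12, 18, 17, 25, 34, 38]              # NPS for the same months

r = correlation(callback_rate, monthly_nps)
print(f"r = {r:.2f}")  # a strongly positive r suggests X is worth improving
```

Correlation alone won’t prove causation, of course; the point of the experiment is to change X deliberately and watch what NPS does.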
By
virtue of simplicity and low cost, the two complementary NPS questions lend themselves to
transactional interactions such as exit polls from service desks. Care
is needed, though, to avoid the various types of sampling bias and distortion of the respondent’s ‘stated
preference’.
You don’t need a large sample to begin to see signals; to
paraphrase Doug Hubbard: “Small measurements reduce uncertainty a lot”. An IT provider might choose to use stratified
NPS sampling to compare service perception gaps at different delivery points, internal
or outsourced (see the sketch after this paragraph). NPS could also be used for longitudinal tracking of how IT is improving
its primary value streams. More in-depth, periodic surveys like variants of the Gallup CE11 will
still be needed to get a fuller understanding of IT customer engagement.
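Here is what that stratification might look like in code, reusing the nps helper sketched earlier; the delivery points and scores are invented:

```python
from collections import defaultdict

# Invented exit-poll responses tagged by delivery point
responses = [
    ("service_desk_internal", 9), ("service_desk_internal", 7),
    ("service_desk_outsourced", 6), ("service_desk_outsourced", 10),
    ("deskside_support", 9),
]

by_stratum = defaultdict(list)
for delivery_point, score in responses:
    by_stratum[delivery_point].append(score)

for delivery_point, scores in sorted(by_stratum.items()):
    # Report the sample size too: tiny strata carry wide uncertainty
    print(f"{delivery_point}: NPS={nps(scores):+.0f} (n={len(scores)})")
```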
As a small, growing professional services company, our very
survival depends on reputation, trust and enduring relationships. If we have
earned our customers’ willingness to promote us, then we’re proud, honoured and
probably doing a good job. So yes, the Net Promoter Score is an outcome measure
in the Customer Perspective of our very own Balanced Scorecard.
A more recent Temkin 2013 Tech Vendor
survey, in which VMware came top (47) and CSC came bottom (-12), can be purchased here.