Article Synopsis: The High Price of Customer Satisfaction

MIT Sloan Management Review

March 18, 2014   Magazine: Spring 2014

Timothy Keiningham, Sunil Gupta, Lerzan Aksoy and Alexander Buoye

Highly satisfied customers = revenue dollars. Or do they? Some data have shown that the relationship between customer satisfaction and customer spending behavior is surprisingly weak.1 In this article, the authors share their analysis of the relationship between satisfaction and business outcomes, gathering data from more than 100,000 consumers covering more than 300 brands. This data came from two sources: American Customer Satisfaction Index data (2000-2009), appended with the stock returns and market shares of those companies; and consumer satisfaction ratings and customer spending levels across 315 brands.2

This analysis revealed three critical issues that complicate the link between customer satisfaction and positive business outcomes:

  • There is a downside to continually devoting resources to raising customer satisfaction levels.
  • High satisfaction is a strong negative predictor of future market share.
  • Knowing a customer’s satisfaction level tells you almost nothing about how that customer’s spending will be divided among the different brands used.

The authors share strategies to align customer satisfaction and profitability that companies should understand and implement as follows:

“Value to the Company vs. Value to the Customer—research and analyze your customers’ satisfaction levels with your product to the product’s profitability.”

“Market Share vs. Customer Satisfaction—begin with an analysis of customers’ satisfaction levels with not only your company but also with your competitors, as well as your and your competitors’ market shares.”

“Satisfaction and Customer Advantage—what really matters is whether or not your customer satisfaction rating is higher for your brand than for competing brands that a customer also uses.”

The authors conclude that increasing satisfaction levels can be a component of a company’s strategy, but perspective is needed. In fact, a company may need to accept lower satisfaction scores from a smaller group of customers in order to increase market share within a larger, less homogeneous group. For researchers conducting customer satisfaction research, this context provides some fresh inspiration about how to weave conventional satisfaction research together with additional data sources.

References

1 J. Hofmeyr, V. Goodall, M. Bongers and P. Holtzman, “A New Measure of Brand Attitudinal Equity Based on the Zipf Distribution,” International Journal of Market Research 50, no. 2 (2008): 181-202; and A.W. Mägi, “Share of Wallet in Retailing: The Effects of Customer Satisfaction, Loyalty Cards and Shopper Characteristics,” Journal of Retailing 79, no. 2 (2003): 97-106.

2 Some examples cited include: L. Aksoy, A. Buoye, P. Aksoy, B. Larivière and T. L. Keiningham, “A Cross-National Investigation of the Satisfaction and Loyalty Linkage for Mobile Telecommunications Services Across Eight Countries,” Journal of Interactive Marketing 27, no. 1 (February 2013): 74-82; Aksoy et al., “Long-Term Stock Market Valuation”; and others.

 

This synopsis was written by Lynn Croft, independent marketing and market research consultant. With 15 years of experience at companies such as Genzyme, Bayer Corporation, Shire, and Eli Lilly, Lynn has expertise in market research and market analysis for product launches, pricing, and lifecycle management.

 

[Are you planning your organization’s first customer satisfaction research? Or looking to refresh an existing program? Learn about goal setting, monitoring strategies, and common challenges in our 90-minute, live online Improving Customer Satisfaction class. MRA approved for 1.5 hours of PRC credit.]

 

Market Research & Lost Mojo: Article Synopsis

Andrew Reid, son of Market Research luminary Angus Reid, says Market Research has “lost its mojo.”

In a new article published in Entrepreneur Magazine, Reid states, “In the early 2000s, with the increased use of email, the internet, mobile phones and social media, many companies transformed their way of doing business, but market research companies did not.”  Reid himself is the President of Vision Critical, a well-known provider of market research software and services.

Reid makes some excellent points in a brutally honest way. He asks, “Why do some market researchers still use 15-minute surveys and deliver 60-page reports that companies, their clients, have trouble digesting?” Hard to hear, but so, so true. He advocates for short reports, infographics and Pecha Kucha-style presentations. I could not agree more: indeed, I think it would be amazing to have a panel at one of the market research conferences where, say, three market researchers do Pecha Kucha presentations of research results—just so the audience can see that it can be done (warning: it takes more time to prepare this style of presentation than to prepare a standard 45-minute one. Really).

So while I applaud his boldness, one of his points about the lost mojo is only partially correct. He says we missed the technology boat in the early 2000s, stating, “(market researchers) should have worked with early tech adopters to gain insight. And market research companies could have launched products in beta and made some risky decisions. Yet, all they did was undertake the same paper-and-pen surveys.” Intentional hyperbole? Probably. But still factually incorrect. SurveyMonkey was founded in 1999, and they report completing 2 million survey responses a day. And even if SurveyMonkey is the 800-pound gorilla in online survey research, it is still one of more than 50 such companies. Online surveys took off years ago. That is not the issue.

The issue wasn’t technology; it’s what we as researchers did with it. We took the fabulous new technology and applied it to tired old methodologies.

Market researchers remained overly focused on surveys and focus groups—no matter whether done online or via other modes. In fact, as an industry, to this day we allow our profession to be defined by these two methods. Market research should be defined by our deliverables, not our methods. Our deliverables are discovering and measuring customer attitudes and behaviors. Our methods are surveys (whether paper, online, phone, etc.), focus groups (in-person or online), and these days at least ten other options. Yet we continue to be perceived as “surveys and focus groups.” Ask even a group of market researchers what comes to mind first when they hear the phrase “market research,” and most will say “surveys” or “focus groups.” I know; I have asked this question at public speaking venues.

So kudos to Reid for A) getting an article about market research in a business magazine and for B) being bold in his assessments. But if we really want to get market research’s mojo back, we have to make sure we are offering more than surveys and focus groups.

Read Reid’s article here.

Written by Kathryn Korostoff
KKorostofff@ResearchRockstar.com

 

3 Tips to Avoid Bad Market Research Software Purchases

Market research software comes in many forms these days: survey programming, data analysis, text analytics, and social media analysis are among the most common.

The good news for buyers is that many firms offer monthly options—helping you, the buyer, mitigate risks. There is no need to get “married”; you can just live together and part ways amicably when the mood strikes.

Still “moving in” is a big step, as it requires both training and business process adaptation. Training can be informal or formal, but always involves some time investment. And process adaptation often includes creating and implementing procedures that optimize how new software is actually used during the market research process.

Too often, companies rush to implement new software, and then realize they are not satisfied with its features or functionality. But they are loath to abandon it because of the training and process investments they have made to get it in place! They stay married to the “devil they know” rather than risk the aggravations of going out into the software dating pool again.

So before you take the next step with new market research software—whether it is marriage or cohabitation—consider these three steps to minimize the risk of an ugly break up.

1. Create & prioritize your feature requirements. It sounds obvious, but it is a step often skipped. Sometimes the feature requirements just seem so apparent. If I am evaluating survey software, the features are kind of a “duh”, right? Same for text analytics software features, right? Wrong. If you don’t document your feature requirements and prioritize them into categories (must have, nice to have, optional), you risk selecting a product that has “sold” you, versus you “selecting” it.

2. Start with a trial phase. A trial phase allows you to try market research software before you buy it. In some cases, it even makes sense to start with a trial and then do a “pilot.” If you want to get super precise, you can distinguish between a trial and a pilot as follows:

  • A trial is when you are testing the software, most likely in a “mock” situation (not to support a client project). A trial is usually fairly short, seven to fourteen days in most cases.
  • A pilot is when you have actually deployed the software on a limited basis, in real research, to “stress test” its viability. A pilot will typically occur after a successful trial phase, and is organized around a specified set of success criteria. For example, “We will consider the pilot a success if it meets criteria A, B and C.” Pilots are often a little longer, typically ranging from fourteen days to a month or more (or longer for more complex products).

3. Trial at least 2 products. Yes, this means more time (and aggravation) but you will find that evaluating one product at a time leads to some bias. You trial one product, get to know it, and it is easy to just accept its warts and go ahead—even if you didn’t really love it.

  • Tip: if possible, divide and conquer. Have one team member evaluate one product while another is trying a competitive option. Not only does this make trialing two products easier, it reduces the risk of bias. We tend to like that with which we become familiar, so having two “equal” software trials helps ensure an objective comparison of the two products.

About Free Market Research Software Trials

Many companies offer free trials. Take advantage of them. Not only is it good for your budget, it says something about the company. I am always more inclined to trust software companies that have enough confidence in their software’s features and ease of use to offer trials. In contrast, I am wary of companies that say it isn’t an option. Software is scalable; the incremental cost of supporting free software trials is low for market research software providers. The exception is products that can’t be used without extensive one-on-one training, and in market research, that is fairly rare these days.

If you are interested in a software product that does not promote a free trial on its website, call and ask. You may be surprised how many companies are willing to offer a seven-day trial once they know you are a legitimate researcher.  And this way you can date before marriage.

 

[SPSS is another popular type of market research software. Want to know more about using SPSS? Consider our LIVE 4-session Introduction to SPSS course. Now PRC approved!!]

 

The Lost Art of Pre-testing Questionnaires: Don’t Let Your Market Research Crash

I am stunned at how many experienced market researchers don’t bother pre-testing before they start data collection for survey projects.

Stunned.

It is the market research equivalent of a pilot who decides not to bother with the pre-flight checklist before takeoff.

I have had two recent experiences where I had seasoned researchers working with Research Rockstar clients, and they had assumed pre-tests were not required.  Really? That’s the assumption? I wonder how many pilots assume pre-flight checklists don’t apply to them.

There are certainly varying opinions about many market research best practices, but this really shouldn’t be one of them. Unless the survey research you are doing is a tracking study or an ongoing transactional study (in these cases the questionnaire has been tested, standardized, and assessed over time), pre-testing is critical.

Semantics: Pre-testing or Soft Launch?

I use the phrase “pre-test” and that is what I teach in Research Rockstar classes on project management and questionnaire design. Some people use the term “soft launch.” I am not hung up on the language, but there are some elements that are required in professional research regardless of your preferred lexicon:

  • Collecting responses from real research participants. A pre-test is not asking your Uncle Stan to take your survey and give you feedback. Sure, get Stan’s feedback—but before the pre-test, not in lieu of it. A real pre-test needs to be done with people from the actual sample source.
  • Using the final questionnaire. The pre-test must be done with the final instrument. Not a draft you know you will be editing anyway.
  • Using the intended data collection methodology. If it is an online survey, collect it online. “Phone” testing an online survey isn’t a true pre-test. Maybe it can be a pre-pre-test. For example, if you need to get feedback on answer options for a particularly jargon-filled questionnaire, fine, do some phone work so you can find out how people are responding to answer options and wording. But that is not a pre-test.
  • Analyzing the results. It isn’t a pre-test if you don’t actually look at the results. There are several things we look for in a pre-test, but the most important one for many people is survey duration. This is a huge market research budget consideration—and can either hurt or help. So why not be precise? Especially for researchers who work with panel providers.  What if you told your panel provider the average duration would be 10 minutes, but your pre-test says 7? That’s real savings for you.
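The duration-savings point above is worth making concrete. Many panel providers price per complete, scaled by length of interview (LOI); the sketch below uses entirely hypothetical numbers (400 completes, $0.50 per respondent-minute) just to show the arithmetic.

```python
# Hypothetical panel-cost sketch. Many panel providers price per complete,
# scaled by length of interview (LOI). All numbers below are made up.
COMPLETES = 400            # target number of completed surveys
RATE_PER_MINUTE = 0.50     # hypothetical cost per respondent-minute

def panel_cost(loi_minutes: float) -> float:
    """Estimated total panel cost for a given average survey duration."""
    return COMPLETES * RATE_PER_MINUTE * loi_minutes

quoted = panel_cost(10)    # the duration you told the panel provider
measured = panel_cost(7)   # the duration your pre-test actually showed
print(f"quoted: ${quoted:,.0f}, measured: ${measured:,.0f}, "
      f"savings: ${quoted - measured:,.0f}")
# → quoted: $2,000, measured: $1,400, savings: $600
```

Swap in your own provider’s pricing model; the point is simply that a pre-test that shaves minutes off your assumed LOI translates directly into budget.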

Pre-testing: Is Your Questionnaire Cleared for Take-off?  

For every 10 projects I pre-test, I may make post-pre-test changes in only three of them. Seven go forward, no changes needed. But the three that do get changes? Those are important. I have had pre-tests catch duration issues, programming logic errors, drop-out risks, and more. So yes, even though I have been doing this for 25 years, I still do pre-tests. Does that mean I never make questionnaire mistakes? Sure I do (in fact I had a doozy just recently, which I will post about soon). But pre-testing minimizes my risks.

Bottom line? Pilots have a pre-flight checklist with 50 or more steps. We researchers don’t have quite that many on our pre-launch list, but pre-testing should be right at the top.

 

[Interested in more free market research tips and news? Want it delivered right to your inbox? Sign up for our newsletter here.]

 

Best Market Research Articles of 2013: Third in a Series of 10

[Research Rockstar interns have written synopses of 2013’s best market research articles, as selected by Kathryn Korostoff. This is the third in our series. This synopsis was written by Research Rockstar intern, Audra Kohler.]

Article: Are you thinking what I’m thinking?

Originally published in: research.

July 30, 2013

Rob Egerton and Jeanette Kaye

Have you ever bought something because all of your friends had it? While we may be loath to admit it, our actions are swayed by friends, groups, and the public—perhaps even more than we realize. Because of this reality, the authors of “Are you thinking what I’m thinking?” argue that market researchers need to go beyond the individual to truly understand consumer behaviors. The authors state that two particular theories should be used more in research to explore the dynamics of influence.

Wisdom of Crowds for Market Research

The authors’ first cited theory, wisdom of crowds, was the theme of a popular 2004 book of the same title by James Surowiecki. The basic premise is that group decision-making or estimation is more accurate than individual decision-making. An example: a group would be more accurate at estimating the number of candy corns in a jar at your annual Halloween get-together than each individual guesstimating separately. Another researcher, Martin Boon, took this conclusion one step further.
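The jar-guessing example can be sketched as a quick simulation (all numbers hypothetical): if individual guesses are noisy but unbiased, their errors largely cancel in the group average, so the crowd’s estimate beats the typical individual’s.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_COUNT = 850   # actual candy corns in the jar (hypothetical)
N_GUESSERS = 200   # size of the crowd (hypothetical)

# Each person's guess is noisy but centered on the true count.
guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(N_GUESSERS)]

group_estimate = sum(guesses) / len(guesses)
group_error = abs(group_estimate - TRUE_COUNT)
avg_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"group error: {group_error:.1f}")
print(f"average individual error: {avg_individual_error:.1f}")
# The group average lands far closer to the truth than a typical guesser.
```

This only holds when errors are independent and roughly unbiased; a crowd that shares the same systematic bias will be confidently wrong together.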

This is where the meat and potatoes of this article lie. Boon reworks this theory to predict elections. Based on his research with actual election results, he concludes that averaging a randomly selected sample’s guesses is more accurate than traditional polling methods. His use of the wisdom of crowds theory had two clear distinctions:

  • Individuals were not asked how they were going to vote.  The sample was asked how they thought others would vote.
  • Previous election results were provided to each respondent, which provided a useful context.

Overall, this method proved to be more accurate than traditional polling.

The Theory of Group Behavior for Market Research

In his book “I’ll Have What She’s Having,” Mark Earls makes the claim that in decision making, the influence of other people is more significant than the individual decision maker’s own assessment. But if you think about it, as market researchers, we are great at knowing the individual and their thought process. Rarely do we research how individuals behave in a group and how they are influenced by that group.

According to Egerton and Kaye, “…recent behaviors to which we can all relate point to how individuals can be encouraged into actions not by their own assessment of what they should do next, but by the actions of those around them.” In his book, Earls cites the London riots of 2011 and the laying of flowers at traffic accidents or significant events as examples of group dynamics.

A Powerful Combination for Market Research

By integrating lessons from these two powerful theories, the authors create key market research lessons:

  • Acknowledge.  Realize that there are limitations to looking at only an individual’s behavior.  Behavior of the individual is influenced by group dynamics, the authors argue.
  • Explore.  Although this is difficult, the authors encourage beginning to map out how others influence an individual.
  • Categorize. Egerton and Kaye cite a TED talk by Derek Sivers, which emphasized breaking down the behavior of early adopters versus followers. This is one way to start categorizing consumer behaviors by group.