Are Market Researchers Creating the Functional Equivalent of Genetically Modified Food?

Have you ever had the nagging feeling that a research project collected data that was simply not very good? That the respondents weren’t adequately engaged when supplying answers? Or that the method didn’t gather enough meaningful context?

In market research, our convention is to conduct studies that employ imposed calibration. Our studies often capture and measure attitudes and behaviors as if they could all be sorted into neat packages. We carefully structure our questions and, in the case of survey research, even our answers. We use quotas; we use weighting. But are we creating the functional equivalent of genetically modified food?

For some research needs (some, not all), it is time to consider options other than surveys, IDIs, and focus groups: newer methods such as webcam research, idea-voting sites, and online projective techniques, to name just a few.

But newer market research methods are unpredictable and can compromise on demographic profiling—just like organic produce sometimes has a few more blemishes or less consistent coloring. And these differences can make us research types uncomfortable. Heck, they make me uncomfortable. But now that I have had the opportunity to test some of these new methods, I am also excited about their value.

My experiences have also led me to author a short white paper, “Organic Market Research: Avoiding Overly Contrived Data.” Please click here to download it.



SANTA DOESN’T LIVE HERE: Don’t Oversell Market Research

Imagine a six-year-old who truly believes Santa grants all wishes. Even though he lives in a sixth-floor Chicago walk-up, this child firmly believes Santa will come down the (nonexistent) chimney and leave a pony under the Christmas tree. Imagine the tears on Christmas morning when there’s no pony.

Most of us have experienced the feeling of being oversold, whether in business or in our personal lives. It’s painful. And it’s the last thing you want your internal colleagues, team members, or other associates to feel at the end of a market research project. Unfortunately, it happens. Maybe the results were unexpected, maybe the scope or methodology ended up being somewhat different from what they’d pictured. For whatever reason, the people who should have joyfully embraced your research results are unhappy. And after you’ve invested weeks or months of effort, time, and money, unhappy internal clients are the last thing you want.

How can we avoid this outcome? Be careful not to oversell market research. Those of us who do market research professionally tend to get enthusiastic. We are typically people who enjoy designing and implementing research methodologies, who like to dive into mounds of data and extract meaningful results. In our enthusiasm for research, we have to be careful not to over-promise. Realistic expectations are the key to satisfied clients, especially for riskier types of projects.


Product Concept Testing Example: What We Can and Cannot Promise

Product concept testing can be designed to help identify product ideas or feature combinations that have the highest potential share of the market. Key word: potential.

But there are challenges with product concept testing research. Sometimes people can’t respond meaningfully to product concepts that are too new or innovative. They don’t have a frame of reference, so it is just too hypothetical. Henry Ford is often quoted as saying, “If I’d asked people what they wanted, they would have said a faster horse.”

Even in more mature categories, product concept testing research has limits. After all, a lot of research depends on asking people about their attitudes and behaviors, and even well-intentioned participants cannot report such information perfectly.

What can we promise about product concept testing that is realistic? We could design the project and promise we will:

  • prioritize features
  • prioritize among multiple new concepts
  • identify likely tradeoffs between features and price
  • estimate a demand elasticity curve
  • weed out blatantly bad ideas
  • identify potential demand deterrents or sales objections to a particular product concept
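
The elasticity promise in the list above can be made concrete with a toy sketch. Assuming the concept test showed different price points to matched respondent cells (all prices and buy-shares below are invented), fitting a constant-elasticity demand curve is a few lines:

```python
import math

# Toy concept-test data (invented numbers): each price point was shown
# to a matched cell of respondents; buy_share is the fraction in that
# cell who said they would purchase at that price.
prices = [49.0, 59.0, 69.0, 79.0, 89.0]
buy_share = [0.62, 0.54, 0.43, 0.33, 0.27]

# Fit a constant-elasticity demand curve, ln(q) = a + b*ln(p).
# The least-squares slope b is the price-elasticity estimate.
x = [math.log(p) for p in prices]
y = [math.log(q) for q in buy_share]
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)

print(f"estimated price elasticity: {b:.2f}")  # about -1.43 for this toy data
```

An elasticity more negative than -1 (as here) suggests demand in this toy example is price-sensitive; a real study would, of course, use proper experimental cells and larger samples.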

We also need to let colleagues know that there are many factors beyond our control. A major competitor may come out with a new product that influences purchase criteria. A new entrant may come out with a big splash and disrupt brand preferences. A shift in the economic environment could change price sensitivities.

Product concept testing research is still very worthwhile (doing some research is far better than doing none), but it is an example of the type of project that has inherent limitations. We just don’t want to oversell it.


Bottom line

If you’re doing market research on a project that is known to be risky in terms of how well it can predict actual customer attitudes and behaviors, you have two key steps to take:

  1. Consider augmenting survey research with other methodologies.
    Social media monitoring? A prediction market? Ethnography? Rapid prototype testing?
  2. Set realistic expectations. Be certain that internal clients understand what the research can and cannot deliver. Let them know that inconclusive, contrary, or contradictory results can come up, and that if they do, further research or other methods may be needed. All of these things are part of the real world.

With realistic expectations, people will have the right mindset. We won’t have to worry about disappointing them in the end, and that saves everybody a lot of aggravation.

So don’t even try to be an all-powerful Santa. If that pony won’t be trotting out from under the tree, make it clear in advance. That city child would be happier with a shiny new bicycle anyway.


For more on Social Media Research, sign up for Research Rockstar’s White Paper on the subject.

Want access to more market research articles and training materials? Sign up for the Research Rockstar newsletter: SIGNUP


Computer-Based Training for Market Research Excellence

Computer-based training (CBT), also known as eLearning, is a time-efficient, cost-effective training option for busy professionals.

Self-paced Learning

Research Rockstar classes are self-paced. Watch when you want, pause when you want, re-start when you want.

Easy Access

Anyone with internet access and a browser can access a class. No travel!

Low-cost alternative to in-person seminars

Not everyone has the time, or money, to attend 2-day seminars out-of-state. CBT removes geographic boundaries to learning.


Learn what you want, when you want. So many seminars seem to throw in a mind-numbing mix of tangential content to fill up their 2 or 3 day agendas. With CBT, we can offer very precise topics so you can get what you need, and quickly.

Market Research Training

Whether you think of it as computer-based training, eLearning, or whatever, the idea of accessing learning material over the Internet is powerful. Research Rockstar is dedicated to offering convenient access to practical content. Plus, because the material is all online, it can be easily customized if your team has special interests.

[We offer classes on market segmentation, product concept testing and many more. Buy a single class, or sign up for an annual membership]


When Good Enough is Good Enough: Seeking Balance in Product & Pricing Research

The difference between good market research and great market research can be significant.

But sometimes the incremental time, cost and sweat of that extra effort simply doesn’t make sense. Sometimes, “good” is just perfect.

I was reminded of this last week at the Launch Camp conference in Cambridge. The event, for entrepreneurs seeking social media wisdom, had some interesting speakers; the one from whom I learned the most was Dharmesh Shah, Chief Technology Officer and Founder of HubSpot (on Twitter as @Dharmesh).

In three years, this company has gone from start-up to 2,000+ customers, most of whom pay a monthly fee. Dharmesh shared his start-up success insights at Launch Camp and advised the attending entrepreneurs to focus on practical marketing. Selling stuff. Tracking key metrics to understand what sells stuff. And in his case, this clearly works.

He observes that many entrepreneurs get bogged down by over-analyzing their decisions—ultimately missing their window of opportunity. Key areas for such analysis paralysis? Product optimization and pricing.

ACK! Product concept testing and pricing research are two key pillars of market research practices around the world! But of course, he is correct. Especially in the context of new or rapidly evolving product categories.

Product Concept Testing

Market research offers proven methods for testing new product concepts—methods that can prioritize features or optimize feature-price combinations. And that’s great.

But I have seen companies completely miss windows of opportunity because they kept adding less-than-critical features before they would launch, and kept conducting more and more research to inform (or justify) their decisions. Their leaders traded early market feedback for an over-engineered product. Dharmesh criticized this approach and emphasized that while market research is useful, at some point you need actual market feedback in order to inform further improvements. The ultimate feedback: will people buy it? If they buy it, will they return it?

Of course, these days, there are ways to simulate actual product releases to do this—although that is not a realistic option for all categories.

Pricing Research

Look, if you are talking about mature consumer product categories (like toothpaste and laundry detergent), pricing research is a very defined, concrete sort of practice. But in many B2B markets, emerging markets, and new product categories, it simply isn’t perfect. Yes, do some research. Do some primary research, analyze competitive/substitute pricing, understand your target market’s overall budget, know your expected ROI. But at some point you have to take a leap with pricing. And as Dharmesh said, despite long-held tenets to the contrary, you CAN adjust your pricing down the road.
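
That “do some research, then take the leap” advice can be surprisingly lightweight in practice. Here is a minimal sketch of a Gabor-Granger-style read (one common pricing-research technique; the price points and acceptance rates below are invented):

```python
# Invented survey results: share of respondents willing to buy
# at each tested price point.
price_points = [19, 29, 39, 49]
would_buy = [0.70, 0.55, 0.38, 0.22]

# Expected revenue per respondent at each price; pick the maximum.
revenue = [p * q for p, q in zip(price_points, would_buy)]
best = price_points[revenue.index(max(revenue))]

print(f"revenue-maximizing test price: ${best}")  # prints $29 here
```

It’s directional, not definitive, but it gives you a defensible starting price you can adjust down the road, just as Dharmesh suggests.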

Imperfect Data is Better Than No Data

Yes, it is true—imperfect data is better than no data. And sometimes, directional data sooner is better than quantitative data later.  In any case, knowing when to stop conducting market research in order to price and release new products can be tricky. Luckily for busy professionals seeking to inform product and pricing decisions, there are many options along the continuums of research speed and exactitude.

BTW, Dharmesh has a book out—I ordered my copy and can’t wait to read it: Inbound Marketing.

[Would you rather take one market research class for $2000 or get unlimited access to 12 online for $600/year? Or how about 5 for FREE? I thought so!  Sign up for a Research Rockstar membership today: http://is.gd/87vvd]

[For more info on Launch Camp search #LaunchCamp on Twitter for great links to blogs, RTs and even videos from the event]


Using Customer Feedback to Inform Product Design Decisions

So you’re planning to develop a new product, and want to know which features will be most important to potential buyers. And maybe which features could be nice-to-have, but not critical. Or maybe you want to estimate how adding a specific attribute could change potential market adoption.

These are obviously important questions. So, how to get the answers?

In many product categories, the best choice is to conduct primary market research, to get direct feedback from people in your target market. In some cases, qualitative feedback is fine—depending on your budget, analysis needs, and so on. But more commonly, in order to make firm decisions about product design, quantitative research is the best choice. If you want reliable conclusions about the priority ranking, for example, of 10 potential product features, you will want hard data.

[Do exceptions exist? Yes. There are some product categories and contexts in which primary market research is unlikely to yield reliable results. If you are wondering if you might be in that kind of situation, call me and I’ll be happy to discuss it with you.]

If you are thinking about using market research to inform product design decisions, you may be sending out an RFP to some market research agencies. And when their proposals come back to you, you will likely start hearing about data analysis techniques such as conjoint analysis (or discrete choice, which is a type of conjoint) and MaxDiff. You may get different recommendations from different market research agencies about which will be best—and that can get confusing.

In fact, one question I have heard many times from people in these situations is, “what is the difference between MaxDiff and Conjoint?” I was speaking recently with Brett Jarvis, a real expert on this topic from Sawtooth Technologies Consulting group, and he offered to write an article on the topic. Don’t panic: it’s not an article for stats geeks. It’s very friendly and includes great examples. The full article is being released in the September Research Rockstar newsletter, which will be sent out Monday September 21. So if you are not currently a newsletter subscriber, please sign up for free at [SIGN UP] to make sure you get this important article.

In the meantime, here is an excerpt from Brett’s piece:

“The reasons some people might get confused between conjoint and MaxDiff are two-fold. The first reason is that they both involve trade-offs to some extent. The respondent is effectively told that they can’t have everything and is forced to make choices. However, in a MaxDiff study the respondent evaluates a single list of items, whereas in conjoint the respondent evaluates complete products made up of various features. This brings us to the second reason. Both techniques can tell you how customers value different features. However, if you are focusing on a single list of items only, conjoint is likely more complex than is needed, whereas if you want to understand customer preferences across features, conjoint is essential.”
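
To make the structural difference Brett describes concrete, here is a toy sketch of what one task of each type looks like (all feature names, attribute levels, and prices are invented for illustration):

```python
import itertools
import random

random.seed(7)

# Hypothetical single list of items for a MaxDiff study.
items = ["long battery life", "waterproof casing", "fast charging",
         "voice control", "compact size"]

# One MaxDiff task: show a small subset of the SAME item list and
# ask the respondent to pick the best and worst in the set.
maxdiff_task = random.sample(items, 4)

# One conjoint (choice-based) task: show complete product profiles,
# each a combination of attribute LEVELS, and ask which product
# the respondent would choose.
attributes = {
    "battery": ["10 hr", "20 hr"],
    "casing":  ["standard", "waterproof"],
    "price":   ["$49", "$79"],
}
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]
conjoint_task = random.sample(profiles, 3)

print(maxdiff_task)   # a list-of-items question
print(conjoint_task)  # a choose-among-full-products question
```

The sketch shows why MaxDiff questionnaires tend to be simpler for respondents (one flat list), while conjoint tasks carry the extra cognitive load of comparing whole products built from feature-price combinations.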

After you read this article, you will feel a lot more comfortable reading proposals from market research agencies that recommend these techniques.

And remember, no matter what techniques you are considering, always keep your research participants in mind. Some research designs can lead to longer, more cognitively challenging questionnaire designs—will your target audience be ok with that? Or will they balk at any surveys that take over 10 minutes? Sometimes a research design can be ideal from an analysis point of view, but if your survey takers won’t comply, a simpler approach will be a better choice.

Be sure to get the full article by signing up for our free newsletter here.