Using Customer Feedback to Inform Product Design Decisions

So you’re planning to develop a new product, and want to know which features will be most important to potential buyers. And maybe which features would be nice to have, but not critical. Or maybe you want to estimate how adding a specific attribute could change potential market adoption.

These are obviously important questions. So how do you get the answers?

In many product categories, the best choice is to conduct primary market research to get direct feedback from people in your target market. In some cases, qualitative feedback is fine—depending on your budget, analysis needs, and so on. But more commonly, to make firm decisions about product design, quantitative research is the best choice. If you want reliable conclusions about, say, the priority ranking of 10 potential product features, you will want hard data.

[Do exceptions exist? Yes. There are some product categories and contexts in which primary market research is unlikely to yield reliable results. If you are wondering if you might be in that kind of situation, call me and I’ll be happy to discuss it with you.]

If you are thinking about using market research to inform product design decisions, you may be sending out an RFP to some market research agencies. And when their proposals come back to you, you will likely start hearing about data analysis techniques such as conjoint analysis (or discrete choice, which is a type of conjoint) and MaxDiff. You may get different recommendations from different market research agencies about which will be best—and that can get confusing.

In fact, one question I have heard many times from people in these situations is, “What is the difference between MaxDiff and conjoint?” I was speaking recently with Brett Jarvis, a real expert on this topic from Sawtooth Technologies Consulting group, and he offered to write an article on the topic. Don’t panic: it’s not an article for stats geeks. It’s very friendly and includes great examples. The full article is being released in the September Research Rockstar newsletter, which will be sent out Monday, September 21. So if you are not currently a newsletter subscriber, please sign up for free at [SIGN UP] to make sure you get this important article.

In the meantime, here is an excerpt from Brett’s piece:

“The reasons some people might get confused between conjoint and MaxDiff are two-fold. The first reason is that they both involve trade-offs to some extent. The respondent is effectively told that they can’t have everything and is forced to make choices. However, in a MaxDiff study the respondent evaluates a single list of items, whereas in conjoint the respondent evaluates complete products made up of various features. This brings us to the second reason. Both techniques can tell you how customers value different features. However, if you are focusing on a single list of items only, conjoint is likely more complex than is needed, whereas if you want to understand customer preferences across features, conjoint is essential.”

After you read this article, you will feel a lot more comfortable reading proposals from market research agencies that recommend these techniques.

And remember, no matter what techniques you are considering, always keep your research participants in mind. Some research designs can lead to longer, more cognitively demanding questionnaires—will your target audience be OK with that? Or will they balk at any survey that takes over 10 minutes? Sometimes a research design can be ideal from an analysis point of view, but if your survey takers won’t comply, a simpler approach will be a better choice.

Be sure to get the full article by signing up for our free newsletter here.

Kathryn Korostoff