
Are Market Researchers Creating the Functional Equivalent of Genetically Modified Food?

Have you ever had the nagging feeling that a research project collected data that was simply not very good? That the respondents weren’t adequately engaged when supplying answers? Or that the method didn’t gather enough meaningful context?

In market research, our convention is to conduct studies that employ imposed calibration. Our studies often capture and measure attitudes and behaviors as if they could all be sorted into neat packages. We carefully structure our questions and, in the case of survey research, even our answer choices. We use quotas and weighting. But are we creating the functional equivalent of genetically modified food?

For some research needs (some, not all), it is time to consider options other than surveys, IDIs, and focus groups: newer methods such as webcam research, idea-voting sites, and online projective methods, to name just a few.

But newer market research methods are unpredictable and can compromise on demographic profiling—just as organic produce sometimes has a few more blemishes or less consistent coloring. These differences can make us research types uncomfortable. Heck, they make me uncomfortable. But now that I have had the opportunity to test some of these new methods, I am also excited about their value.

My experiences have also led me to author a short white paper, “Organic Market Research: Avoiding Overly Contrived Data.” Please click here to download it.

 


When Breaking Up (Market Research Interviews) is Hard to Do

In economics there is a term known as “sunk cost.” Investopedia defines a sunk cost as “a cost that has already been incurred and thus cannot be recovered.” A sunk cost does not have to be monetary; it can be time, resources, or anything else of value to a company. Sound business decisions are made independent of sunk costs. At first glance, this may seem a little ridiculous. If you have put hundreds of hours or thousands of dollars into a project, you should work tirelessly to make it succeed, right? Well, the brutal reality is that sometimes we just have to accept that a project has gone bad and it is time to move on. Sometimes it is best to cut your losses rather than lose any more money.

This idea of sunk costs applies to market research interviews (or in-depth interviews, IDIs) as well.  As does the concept of knowing when to walk away.

In-depth interviews can be an immensely valuable research methodology, and many market researchers use this tried-and-true approach (see some of the reasons why in this article from Quirk’s Marketing Research Review). But when conducting IDI research, a lot of time goes into planning before the actual interview is conducted.

Once a firm has spent all kinds of money and devoted countless hours to developing the IDI guide and screening participants, it should not waste a single opportunity, right? Shouldn’t every scheduled IDI be completed fully? Not always: that money and time are now sunk costs. The goal at this point is to conduct as many good in-depth interviews as possible. Persisting with a bad interview just frustrates the interviewer and consumes time that could be better used elsewhere, so why bother? Unfortunately, in the quest to meet sample size goals and “not waste” sunk costs, too many researchers end up completing bad interviews.

So here is the critical question: how does one tell a bad interview from a good one? How far into an interview can you tell it is not worthwhile? What is the best way to end a bad interview? Each interview is different and you will have to make a judgment call. However, Research Rockstar’s class on Conducting Research Interviews can provide valuable guidelines and tips for handling these unfortunate situations.

[Want to learn 12 valuable steps for stress-free interviewing? Visit Research Rockstar and take the online class on Conducting Research Interviews, available in both self-paced and instructor-led formats: click here for dates!]

 


Market Research Policies

Do you cringe when you hear the word “policies”? Most people do. After all, policies often mean bureaucracy.  But in the case of market research, clear policies will minimize the risk of data quality headaches, customer over-surveying, ethical breaches and more.

Indeed, a thoughtful, well-communicated set of policies is more critical today than ever before, with so many people conducting ad hoc or “DIY” research. Well-intentioned individuals often make mistakes that could be avoided through awareness of a simple set of company-wide market research policies. Even organizations with central market research departments find it challenging to control “rogue” research—but promoting a set of policies will help minimize the risks.

Below are examples of market research policies that will promote basic, best practices:

  1. Frequency. Over-surveying can lead to customer frustration and, ultimately, poor response rates. Thus, a key policy is to specify how many times a year a single customer can be invited to participate in research. Three times? Five times? There is no “right” answer for all organizations—it varies by customer type. But a rule should be in place so that employees avoid inundating customers with volumes of survey requests. Of course, this also requires a mechanism to track invitations (a rough sketch of such a check appears after this list).
  2. Quality. All direct communications coming from your company are indicators of your brand’s quality, and surveys are no exception. Make sure a “quality control” resource exists so that nothing sub-par gets released. This resource could be a person, a team, or a defined process, and its job includes checking grammar and questioning content and logic. For example, one common complaint about colleagues who do ad hoc research is that they may ask too many intrusive questions (a big turn-off for customers).
  3. Permissibility. The best way to prevent unsanctioned surveys is to make sure everyone knows how to request and get approval for market research projects. Your company can specify what types of research must be done through central market research (if it has such a department) and what can be done by other functional areas. A simple request process should be in place: employees submit a standard form that triggers assessment and approval. Too onerous? Then how about a simple policy stating, “Any survey over 10 minutes in duration must be approved by the central market research (or, if none exists, marketing) department – no exceptions.”
  4. Methods. Company guidelines should state policies for both qualitative and quantitative methods. For example, “All online surveys must be fewer than 30 questions.” Or, “Recruiting customers for in-depth interviews must be coordinated with the VP of sales at least two weeks ahead of time.”  These are just two simple examples, but you get the idea.
  5. Incentives. An incentive policy should include guidelines for types of incentives and under what circumstances they can be given out.  Inform your employees ahead of time about whether or not your company restricts cash incentives or any type of “gifts” to customers.
  6. Solicitation. A strict non-solicitation policy must be in place. Selling “under the guise of research” is entirely unethical and must be avoided. Even the appearance of solicitation can lead to big problems for your company. Surveys must not be used as thinly veiled lead generation mechanisms. [Click HERE to get more tips on survey design.]
  7. Confidentiality. A confidentiality policy will ensure your employees understand how to use research information responsibly and will show your clients that you value their privacy.  Obviously, it is essential that confidential information is protected, so train people on what information is confidential, how it should be stored, and how it should be treated (internally and externally). Another realm of confidentiality lies in what company information is shared in a research study.  Consider rules that will avoid unwanted leaks. For example, a policy may be that any research related to new product concepts must be approved by the VP of marketing.
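As a concrete illustration of policy #1, below is a minimal sketch (in Python) of what an invitation-frequency check might look like, assuming invitation history is kept in a simple per-customer log. The cap, function name, and data structure are hypothetical, not a prescribed implementation.

from datetime import date, timedelta

# Hypothetical annual cap: set this to whatever your own policy specifies.
MAX_INVITES_PER_YEAR = 3

def can_invite(customer_id, invite_log, today=None):
    """Return True if the customer is still under the annual invitation cap.

    invite_log is assumed to be a dict mapping customer IDs to lists of
    datetime.date objects recording past survey invitations.
    """
    today = today or date.today()
    window_start = today - timedelta(days=365)
    recent_invites = [d for d in invite_log.get(customer_id, []) if d >= window_start]
    return len(recent_invites) < MAX_INVITES_PER_YEAR

# Example with made-up data: this customer has already been invited three
# times in the trailing year, so the check declines a fourth invitation.
log = {"cust-001": [date(2023, 2, 1), date(2023, 6, 15), date(2023, 11, 3)]}
print(can_invite("cust-001", log, today=date(2023, 12, 1)))  # False

A real implementation would read the invitation log from your CRM or survey platform, but the core rule is the same: check the trailing twelve months before sending another invitation.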

Market Research Training Via Policies

While these simple policies may appear obvious to an experienced researcher, it is important to present them to all research-related colleagues. Include policies in employee orientation materials and provide reference materials for all employees who may in any way touch market research—whether it’s the DIY kind or not. Just by raising awareness that there are policies, you will be providing subtle training on best practices.

[Do you have staff that could use some market research training? Check out our online classes; most are under an hour, and all can be viewed conveniently from any web browser.]


What If?

As the ancient proverb I just invented says, “Person with head buried in sand may well get kicked in butt.” So I’ve come up with a few scenarios that could result if online surveys with non-customer populations became impossible tomorrow. Imagine: you can still reach your customers for research, but what about the rest of the world? What if you could no longer reach qualified, non-customer groups in a quantitative way? If the lists and panels were no longer available, or the response rates dropped to .0005 percent, what would the impact be on your market research needs and investments?

OPTION 1: GO QUAL.

Your quantitative research may simply be restricted to current customers. For non-customer populations, you’ll use observational or listening techniques like social media monitoring and ethnography, or qualitative techniques like focus groups and interviews. Recruiting for those qualitative methods will be hard, but finding 50 or 60 non-customers is easier than finding hundreds or more.

OPTION 2: GO TO THEM.

In-person surveys resurge. Intercept customers at stores if you sell that way, or through online shopping sites if not. If you’re in the B2B space, find non-customers at trade shows, conferences, and other brand-neutral territory. Yes, it takes serious manpower, and there are limitations, but it works.

OPTION 3: TAP THE MIDDLE MAN.

Collecting feedback from your salespeople, outbound call center staff, and sales channels will become more critical than ever before. They may be your only conduit for reaching non-customer populations. Training these folks in how to ask questions (yes, really) and how to record feedback will be key.

OPTION 4: BRUTE FORCE.

You can try to force online surveys by using ad-based recruiting (survey ads posted to social media groups, banner ads on trade association sites, or ads in relevant online or print magazines). This is an expensive option, because response rates will be dismal… but better than nothing, you hope.

OPTION 5: BACK TO THE (PAPER) FUTURE.

There are plenty of lists available for postal mail—and if online surveys flounder, why not test it? We just may see a resurgence in paper-based surveys. The twist is that we may not have to mail actual surveys, just survey invitations.

OPTION 6: ALTERNATIVE WAYS OF GATHERING QUANTITATIVE DATA.

This will vary greatly by application. Here are two examples:

  • For product concept testing, it may mean putting actual mock products on your web site with different configurations and price levels to test market response.
  • For brand perception and awareness research, it could be posting one-question polls on social networking sites (like Facebook). Of course, such sites don’t gather perfect information about demographics. And how do we interpret poll results that lack precise geographic information? Still, it’s an option.

HAVE I MISSED ANY OTHERS?
I’d love your feedback. What do you think? If online surveys with non-customers became logistically impossible, what would your best option be? The future will belong to those with an arsenal of creative ideas ready to roll out.

[Have you seen the Research Rockstar paper on Market Research industry predictions? Get it here.]


Market Segmentation, Southwest Airlines Style

The TMRE session was titled, “Segmentation 2.0: Optimizing a Segmentation Model Using a Range of Tools and Stages.” And sorry to be blunt, but “2.0” was misleading.

Or was it?

The session started off benignly enough. A classic segmentation study. Start with some qualitative, proceed to quant. SOP.

Key pointers from the session included:

  • Be sure to spend sufficient time planning the project
  • Be sure to have clarity on objectives (how the segmentation model will be used)
  • Include stakeholders in the process
  • Start with qual as a phase 1

All good, basic points, but certainly not 2.0.

But they did do two things not currently done in all segmentation studies.

  1. For the qualitative phase, Southwest used ethnography. Tammy Sachs was their partner for this phase, and she shared some great video snippets from their ethnographic interviews. I must say, it was very compelling to hear customers talk about their attitudes and perceptions of Southwest as well as of other airlines. Listening to the passion of those who felt strongly about earning miles was impressive; those who value a good deal were also very articulate and compelling. And so on. There is nothing like hearing—and seeing—people talk ad lib to really get a sense of their attitudes and values. So ethnography is cool, and applied very well here…but is it “2.0”? Debatable.
  2. They used an “a priori” segmentation model. Yup, that’s right. They went into the study with a hypothesized set of segments in mind. The segments were based on behavioral data from their existing customer database.  During the presentation, this confused me. We were, after all, in a session on conducting segmentation. The process was defined as qual, leading to quant. But the speaker occasionally referred to the segments they started with. Isn’t a segmentation study usually used to derive segments?  Well, not in this case.

Southwest was concerned about having a model that would be actionable with its existing customer database. So they opted to create a segmentation model based on variables they already had, and to build from there. The market research was then designed to do two things:

  1. See if they missed any important segments
  2. Profile the segments they had created from the database

Now I confess, upon hearing this, I was stunned. This is not 2.0 in my mind…this is 1.0. But after my initial reaction, I digested it a bit. And there is some important merit in their approach.

Consider this:

  • They have a model that allows them to easily tag customers into segments (so no risk of having a model that is academically interesting but hard to apply to real business tactics); a rough sketch of this kind of rule-based tagging appears after this list
  • They have a model that will likely resonate with their decision makers (since it uses variables that are familiar)
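To make the tagging point concrete, here is a minimal sketch (in Python) of how a priori, rule-based segment assignment against existing database fields might work. The field names, thresholds, and segment labels are invented purely for illustration; they are not Southwest’s actual model.

# Hypothetical behavioral fields a company might already hold for each
# customer; the rules and segment names below are illustrative only.
def assign_segment(customer):
    """Tag a customer record with an a priori segment using fields the
    company already stores, so the model stays directly actionable."""
    if customer.get("loyalty_member") and customer.get("trips_last_year", 0) >= 12:
        return "Miles-motivated frequent flyer"
    if customer.get("avg_fare_class") == "discount":
        return "Deal seeker"
    if customer.get("trips_last_year", 0) >= 4:
        return "Regular traveler"
    return "Occasional traveler"

# Example with a made-up customer record:
example = {"loyalty_member": True, "trips_last_year": 15, "avg_fare_class": "full"}
print(assign_segment(example))  # "Miles-motivated frequent flyer"

Because the rules use only fields the company already stores, every customer record can be tagged immediately, which is exactly the kind of actionability this approach trades for statistical elegance.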

So is it 2.0? I don’t think so. But it brazenly defies a lot of current thinking about segmentation. And that is refreshing.

Southwest is often described as a low-frills airline that delivers great value. Perhaps this also describes their segmentation approach.

[For more on segmentation, check out this video preview: Video Link]