Article Synopsis: Quantitative or Qualitative Research Methods, Let’s Go Back to the Basics

Quirk’s October 2014
“Quant or qual, let’s go back to the basics”
By Kevin Gray

Kevin Gray’s article is chock-full of tips, reminding market researchers to pay as much attention to “how they think” as to the research methods they use. He offers his thoughts on what he calls “research thinking.”

Gray breaks research thinking into specific parts: verifying data, defining relationships, understanding and avoiding data interpretation traps, and probabilities versus categories. In verifying data, researchers must not only uncover flaws in the raw data but also be wary of inferring cause-and-effect relationships. Additionally, when investigating relationships within data, different statistical methods and models can give different readings. Gray states, “Causation requires correlation of some kind but correlation and causation are not the same.”

When looking at probabilities and categories, Gray cautions the researcher to, “Avoid confusing the possible with the plausible and the plausible with fact. It’s also not difficult, though, to miss something of genuine practical significance that lies hidden beneath the surface of our data.”

Additional tips from the author:

  • Do your homework. Many phenomena have more than one cause.
  • When designing research, first consider who will be using the results, how the results will be used and when they will be used, and then work backward into the methodology. Don’t let the tools be the boss. 
    • This point really resonates; in today’s world, researchers can be distracted by technology that may or may not have merit, making it easy to select the shiny new tool even when it is not the right fit.

Two more great tips:

  • Develop hypotheses, even rough ones, to help clarify your thinking when designing research.
    • This may sound obvious, but it is often overlooked. As a result, we have all seen cases of muddy thinking resulting in weak research.
  • Take care not to over-interpret data.
    • Or, as some researchers say, “don’t beat your data to death.”

Gray’s tips are a good reminder to market researchers to be aware of their “research thinking.”

This synopsis was written by Lynn Croft, independent marketing and market research consultant. With 15 years of experience at companies such as Genzyme, Bayer Corporation, Shire, and Eli Lilly, Lynn has expertise in market research and market analysis regarding product launches, pricing and lifecycle management. 


Are Market Researchers Creating the Functional Equivalent of Genetically Modified Food?

Have you ever had the nagging feeling that a research project collected data that was simply not very good? That the respondents weren’t adequately engaged when supplying answers? Or that the method didn’t gather enough meaningful context?

In market research, our convention is to conduct studies that employ imposed calibration.  Our studies often capture and measure attitudes and behaviors, as if they could all be sorted into neat packages. We carefully structure our questions, and in the case of survey research, even our answers. We use quotas, we use weighting.  But are we creating the functional equivalent of genetically modified food?

For some research needs (some, not all), it is time to think about options other than surveys, IDIs and focus groups: new methods such as webcam research, idea voting sites and online projective methods, to name just a few.

But newer market research methods are unpredictable and can compromise on demographic profiling—just like organic produce sometimes has a few more blemishes or less consistent coloring. And these differences can make us research types uncomfortable. Heck, they make me uncomfortable. But now that I have had the opportunity to test some of these new methods, I am also excited about their value.

My experiences have also led me to author a short white paper, “Organic Market Research: Avoiding Overly Contrived Data.” Please click here to download it.



When Breaking Up (Market Research Interviews) is Hard to Do

In economics there is a term known as “sunk cost.” Investopedia defines a sunk cost as “a cost that has already been incurred and thus cannot be recovered.” A sunk cost does not have to be monetary; it can be measured in time, resources, or anything else of value to a company. Sound business decisions are made independent of sunk costs. At first glance, this may seem a little ridiculous: if you have put hundreds of hours or thousands of dollars into a project, shouldn’t you work tirelessly to make it succeed? Well, the brutal reality is that sometimes we just have to accept that a project has gone bad and it is time to move on. It can be best to cut your losses rather than lose any more.
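The decision rule in the paragraph above can be sketched as a tiny function. This is an illustrative example, not from the article; the function name and dollar figures are hypothetical.

```python
def should_continue(expected_payoff, remaining_cost):
    """Rational go/no-go decision: only future payoffs and future costs matter.
    Note that sunk cost is deliberately absent from the inputs."""
    return expected_payoff > remaining_cost

# A project has already consumed $50,000 (sunk). Finishing it would cost
# $10,000 more, but the expected payoff is only $8,000.
decision = should_continue(expected_payoff=8_000, remaining_cost=10_000)
print(decision)  # False: walk away, regardless of the $50,000 already spent
```

The point of the sketch is what is *not* there: the $50,000 already spent never enters the calculation.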

This idea of sunk costs applies to market research interviews (or in-depth interviews, IDIs) as well. So does the concept of knowing when to walk away.

In-depth interviews can be an immensely valuable research methodology, and many market researchers use this tried-and-true approach (see some of the reasons why in this article from Quirk’s Marketing Research Review). But when conducting IDI research, a lot of time goes into planning before the actual interview is conducted.

Once a firm has spent all kinds of money and devoted countless hours to developing the IDI guide and screening participants, it should not waste a single opportunity, right? Shouldn’t every scheduled IDI be completed fully? Not always: the money and time spent at this point are sunk costs. The goal now is to conduct as many good in-depth interviews as possible. Persisting with a bad interview only frustrates the interviewer and burns time that could be better used elsewhere. Unfortunately, in the quest to meet sample size goals and “not waste” sunk costs, too many researchers end up completing bad interviews.

So here is the critical question: how does one determine what is a bad interview and what is not? How far into an interview can you tell it is not worthwhile? What is the best way to end a bad interview? Each interview is different and you will have to make a judgment call; however, Research Rockstar’s class on Conducting Research Interviews can provide valuable guidelines and tips for handling these unfortunate situations.

[Want to learn 12 valuable steps for stress-free interviewing? Visit Research Rockstar and take their online class on Conducting Research Interviews, available in both self-paced and instructor-led formats: click here for dates!]



Market Research Policies

Do you cringe when you hear the word “policies”? Most people do. After all, policies often mean bureaucracy.  But in the case of market research, clear policies will minimize the risk of data quality headaches, customer over-surveying, ethical breaches and more.

Indeed, a thoughtful, well-communicated set of policies is more critical today than ever before, with so many people conducting ad hoc or “DIY” research. Well-intentioned individuals often make mistakes that could be avoided through awareness of a simple set of company-wide market research policies. Even organizations with central market research departments find it challenging to control “rogue” research—but promoting a set of policies will help minimize the risks.

Below are examples of market research policies that will promote basic, best practices:

  1. Frequency. Over-surveying can lead to customer frustration and, ultimately, poor response rates. Thus, a key policy is to specify how many times a year a single customer can be invited to participate in research. Three times? Five times? There is no “right” answer for all organizations—it varies by customer type. But a rule should be in place so that employees avoid inundating customers with volumes of survey requests. Of course, this also requires having a mechanism in place to track invitation frequency.
  2. Quality. All direct communications coming from your company are indicators of your brand’s quality, and surveys are no exception. Make sure a “quality control” resource exists so that nothing sub-par gets released. This job includes checking grammar and questioning content and logic. For example, one common complaint about colleagues who do ad hoc research is that they ask too many intrusive questions (a big turn-off for customers). This resource could be a person, a team, or a defined process.
  3. Permissibility. The best way to prevent unsanctioned surveys is to make sure everyone knows how to request and get approval for market research projects. Your company can specify what types of research must be done through central market research (if it has such a department) and what can be done by other functional areas. A simple research request process should be in place so that employees can submit a standard form that triggers an assessment and approval process. Too onerous? Then how about a simple policy stating, “Any surveys over 10 minutes in duration must be approved by the central market research (or, if none exists, marketing) department, no exceptions.”
  4. Methods. Company guidelines should state policies for both qualitative and quantitative methods. For example, “All online surveys must be fewer than 30 questions.” Or, “Recruiting customers for in-depth interviews must be coordinated with the VP of sales at least two weeks ahead of time.”  These are just two simple examples, but you get the idea.
  5. Incentives. An incentive policy should include guidelines for types of incentives and under what circumstances they can be given out.  Inform your employees ahead of time about whether or not your company restricts cash incentives or any type of “gifts” to customers.
  6. Solicitation. A strict non-solicitation policy must be in place. Selling “under the guise of research” is entirely unethical and must be avoided. Even the appearance of solicitation can lead to big problems for your company. Surveys must not be used as thinly veiled lead generation mechanisms. [Click HERE to get more tips on survey design.]
  7. Confidentiality. A confidentiality policy will ensure your employees understand how to use research information responsibly and will show your clients that you value their privacy.  Obviously, it is essential that confidential information is protected, so train people on what information is confidential, how it should be stored, and how it should be treated (internally and externally). Another realm of confidentiality lies in what company information is shared in a research study.  Consider rules that will avoid unwanted leaks. For example, a policy may be that any research related to new product concepts must be approved by the VP of marketing.
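The tracking mechanism mentioned under policy 1 (Frequency) could be as simple as a rolling-window count of invitations per customer. Below is a minimal sketch; the three-per-year cap, the function name, and the dates are illustrative assumptions, not prescribed anywhere above.

```python
from datetime import date, timedelta

MAX_INVITES_PER_YEAR = 3  # hypothetical cap; set this per your own policy

def can_invite(invite_dates, today):
    """Return True if the customer has received fewer than the cap
    within the 365 days ending on `today`."""
    window_start = today - timedelta(days=365)
    recent = [d for d in invite_dates if window_start < d <= today]
    return len(recent) < MAX_INVITES_PER_YEAR

# A customer already invited three times in the past year gets no more invites.
history = [date(2023, 9, 1), date(2024, 1, 15), date(2024, 5, 1)]
print(can_invite(history, date(2024, 6, 1)))  # False
```

A rolling window is deliberately used instead of a calendar year, so a customer surveyed three times in December cannot immediately be surveyed again in January.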

Market Research Training Via Policies

While these simple policies may appear obvious to an experienced researcher, it is important to present them to all research-related colleagues. Include policies in employee orientation materials and provide reference materials for all employees who may in any way touch market research—whether it’s the DIY kind or not. Just by raising awareness that there are policies, you will be providing subtle training on best practices.

[Do you have staff that could use some market research training? Check out our online classes; most are under an hour, and all can be viewed conveniently from any web browser.]


What If?

As the ancient proverb I just invented says, “Person with head buried in sand may well get kicked in butt.” So I’ve come up with a few scenarios that could result if online surveys with non-customer populations became impossible tomorrow. Imagine: you can still reach your customers for research, but what about the rest of the world? What if you could no longer reach qualified, non-customer groups in a quantitative way? If the lists and panels were no longer available or the response rates dropped to .0005 percent, what would the impact be on your market research needs and investments?


Your quantitative research may simply be restricted to current customers. For non-customer populations, you’ll use observational or listening techniques like social media monitoring and ethnography, or qualitative techniques like focus groups and interviews. Recruiting for those qualitative methods will be hard, but finding 50 or 60 non-customers is easier than finding hundreds or more.


In-person surveys resurge. Intercept customers, at stores if you sell that way, through online shopping sites if not. If you’re in the B2B space, find non-customers at trade shows, conferences and other brand-neutral territories. Yes, it takes serious manpower, and there are limitations, but it works.


Collecting feedback from your salespeople, outbound call center staff, and sales channels will become more critical than ever before. They may be your only conduit for reaching non-customer populations. Training these folks in how to ask questions (yes, really) and how to record feedback will be key.


You can try to force online surveys by using ad-based recruiting (survey ads posted to social media groups, banner ads on trade association sites, or ads in relevant online or print magazines). This is an expensive option, because response rates will be dismal, but it’s better than nothing, you hope.


There are plenty of lists available for postal mail—and if online surveys flounder, why not test it? We just may see a resurgence in paper-based surveys. The twist is that we may not have to mail actual surveys, just survey invitations.


This will vary greatly by application. Here are two examples:

  • For product concept testing, it may mean putting actual mock products on your web site with different configurations and price levels to test market response.
  • For brand perception and awareness research, it could be posting one-question polls on social networking sites (like Facebook). Of course, such sites don’t gather perfect information about demographics. And how do we interpret poll results that lack precise geographic information? Still, it’s an option.

I’d love your feedback. What do you think? If online surveys with non-customers became logistically impossible, what would your best option be? The future will belong to those with an arsenal of creative ideas ready to roll out.

[Have you seen the Research Rockstar paper on Market Research industry predictions? Get it here.]