10 Simple Questions for Interviewing Quantitative Research Job Candidates

Getting a lot of applicants for a quantitative research job? Want a fast way to screen out those who lack basic quant knowledge? Here are ten interview questions you can use, quickly and even by phone, to weed out quantitative research fakers:

  1. Using examples, what’s the difference between ranking and rating questions?
  2. In questionnaire design, what is randomization and why do we use it?
  3. What’s the difference between branching and piping?
  4. What is an example of nominal data?
  5. Using an example, what’s the difference between median and mode?
  6. Why might you use both unaided and aided questions in a questionnaire?
  7. What is weighting?
  8. Consider this scenario: You have just completed data collection for a survey about vacation trends. Your client wants to know how vacation interests vary by gender. How might you accomplish this?
  9. Consider this scenario: A colleague has done a survey of 50 people to measure satisfaction with auto leasing terms. He reports that overall satisfaction is 2.375. What questions or concerns might you have?
  10. Consider this scenario: You have collected survey data from 500 people to learn about their frozen pizza shopping behaviors. You want to see what combination of demographic variables predicts higher frozen pizza purchase volume. What type of data analysis might you use?
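For interviewers who want a concrete reference point for the data-analysis questions above, here is a minimal Python sketch. All of the data is made up for illustration; it shows median versus mode (question 5), a simple weighted mean (question 7), and a basic cross-tabulation of interest by gender (question 8):

```python
import statistics
from collections import Counter

# Hypothetical satisfaction ratings on a 1-to-5 scale (made-up data)
ratings = [5, 4, 4, 3, 4, 2, 5, 1, 4, 3]

# Question 5: the median is the middle value of the sorted data;
# the mode is the most frequently occurring value
print("median:", statistics.median(ratings))  # 4.0
print("mode:", statistics.mode(ratings))      # 4

# Question 7: weighting adjusts each respondent's influence so the
# sample better matches the target population (weights are illustrative)
weights = [1.2, 0.8, 1.0, 1.0, 0.9, 1.1, 1.0, 1.0, 0.8, 1.2]
weighted_mean = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
print("weighted mean:", round(weighted_mean, 3))

# Question 8: a simple cross-tabulation of vacation interest by gender,
# counting how many respondents fall into each (gender, interest) cell
responses = [("F", "beach"), ("M", "ski"), ("F", "beach"),
             ("M", "beach"), ("F", "ski")]
crosstab = Counter(responses)
print("F x beach:", crosstab[("F", "beach")])  # 2
```

In practice, of course, weights come from population targets (such as census proportions) and crosstabs come from a statistics package; the sketch only shows the underlying arithmetic a candidate should be able to explain.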

If they get all 10 correct, your job candidate has solid quantitative research knowledge and just may be a research rock star. You may still want to test their hands-on data analysis skills (if required). But at minimum, you know they have the appropriate knowledge and will be able to hold their own when working with colleagues, clients, and data analysis partners.

If they get items 1 to 7 correct, they likely have a strong grasp of questionnaire design and some light data analysis skills. For some positions, this may be adequate—especially if your research team includes dedicated data analysts.

If they only get items 1 to 3 correct, this is a person qualified for a junior-level position. This job candidate has some quantitative research knowledge but is not yet able to manage an entire project. They will likely be able to contribute to questionnaire design and programming, but they will need data analysis support.

If they get none of these questions correct, hirer beware: this may be a candidate aspiring to be a quant researcher, but they will need significant training and support to get there.


[Did any of these questions stump you? Would they stump your team members? Then maybe it’s time to brush up on quant skills. Try our Intro to Quantitative Data Analysis or our 10 Point Checklist for Questionnaire Design.]





Video Will Rock the Market Research World in 2015

I’ve been reading a lot of predictions for market research—the typical pontification we see at this time of year. Some of it has been inspiring, but much of it just rehashes the obvious.

Personally, I think there are a lot of interesting theories, a lot of long-term shifts taking place. But as for something we will truly experience in 2015? Something that will really change what we do, how we do it? It’s simple: video. Specifically, video-based methods and video-based reports.

In 2015, we will see a notable spike in the use of video IDIs, video focus groups, video ethnography, and video diaries. These methods are superior to others at truly discovering and gauging consumer emotions, aspirations, and values. Why does that matter? Thanks to the popularity of research on irrational decision making and behavioral economics, we are more aware than ever of the limitations of self-reporting such things. Yet this is exactly where market researchers are often most needed, especially in the face of big data, which increasingly owns the questions “what” and “how”—we market researchers find ourselves increasingly tasked with “why.”

And the video momentum isn’t just about methods; it’s about reporting as well. Video-based reporting will transition from rare to common, and research buyers will increasingly expect video deliverables, including reports, montages, and supplemental materials.

Remember “Video Killed the Radio Star”? Well, for market research, video is killing projects and deliverables that don’t capture emotions, convey authenticity, or tell a compelling story. In 2015, video will be the star.


[We are so convinced of this trend, we are putting our money on it. Research Rockstar will soon be launching its first class on video-based research methods. Want to take part by being a tester? Email and request “Video Class Testing”]





Article Synopsis: A Unique Value Proposition “Designed with You in Mind”

Quirk’s November 2014

“Designed with you in mind”

By Scott Garrison and Jet Kruithof

Swiffer® had a problem. Although its cleaning tools were well loved in America, the brand was doing poorly in the Italian market. What was the problem? After a bit of digging, Swiffer’s researchers found that the unique value proposition (UVP) of fast and easy cleaning that made the brand so popular in the States was rejected in Italy. There, when it comes to cleaning, easy means lazy. Only when Swiffer’s advertising focused on the ability to clean deeply did Italians start buying.

In their article, authors Scott Garrison and Jet Kruithof discuss the global truth that product success depends on a compelling UVP. Yet, as we saw above, a UVP that succeeds in America may fail in another culture. So can we craft marketing messages that resonate with all sectors of our global market? Or must they always be localized?

In the end, the authors suggest that certain components of a unique value proposition hold true around the world but may still need local adjustment. Based on a meta-analysis by research firm SKIM, here are the authors’ four principal cross-cultural factors for an effective UVP:

  1. Tie value propositions to regionally specific needs. The promise of value is the key to any successful marketing message across the globe. However, as Swiffer discovered, this may require different messaging in different markets.
  2. Emphasize key benefits first. In this age of micro attention spans, marketers must begin messages with the top benefit their product can offer. This seems to hold true across the world.
  3. Be very precise. “2X faster” is more compelling than “faster.” Messages should incorporate words, numbers and descriptive adjectives (but not too many!) for maximum effect.
  4. Create differentiation. Not surprisingly, differentiation is key, but the best method of doing so varies by region. In the US, Europe and Latin America, consumers tend to prefer comparative marketing messages (e.g. “…compared to Tide, Cheer detergent is…”). Conversely, in Asia, using a competitor’s product as a benchmark is seen as rude.

What about the tone of marketing messages: does this vary from culture to culture? The researchers found that positive message framing (e.g., keeps clothes shining bright) was preferred in the USA, whereas in Latin America and Asia, avoidance of negative situations (e.g., cleans 99% of dirt and stains) was more persuasive.

Inclusion of reasons to believe in marketing messages also differs among cultures. Technical terms are more appreciated by Asian consumers, whereas Americans tend to view them as jargon. In Latin America and Asia, expert opinions are respected, provided the experts are affiliated with pertinent organizations.

While the importance of regional and cultural factors in crafting marketing messages is well known, the authors make a persuasive case that marketers can mitigate risk by using the structured guidance of their four factors. Market researchers can also use these four factors to test variations of messaging options in different geographic markets.

This article was written by Research Rockstar intern Sarah Stites. Sarah is a student at Grove City College, and is a member of the Research Rockstar Scholarship program for college students.


[Interested in more market research tips? Subscribe to our blog via RSS or email, or subscribe to our newsletter to get all the latest news delivered straight to your RSS feed or inbox.]





Article Synopsis: “Tips on Measuring Crucial Social Factors in New Product Research”

Quirk’s August 2014

By Briana Brownell

Let’s compare plastic and reusable grocery bags. There are several differences, but one is this: your choice of bag says something about you. Although you might prefer picking up plastic bags at the store to carrying your own, reusable bags communicate, “I’m eco-friendly! I care about the earth! I recycle!” In other words, when it comes to determining which type of bag people will carry—and really, most consumer behavior in general—social factors are important to consider. Some would even argue that we cannot completely trust survey results that reflect self-reported socially desirable behaviors.

Consumer behavior toward new products or product categories is often hard to anticipate for many reasons, but one often-overlooked reason is social factors. According to author Briana Brownell, the success of some new products depends heavily on social factors that are often undetectable during concept testing. Failure to recognize these factors may blindside companies, causing unnecessary overspending on advertising or inventory.

However, not all products are equally influenced by social effects. So, as market researchers, how do we know when these factors do play an important role? Brownell says that “the qualities of the product being launched determine which social elements may be relevant to integrate into a research program.” Here are four important guidelines:

  • Is seeing someone else use or recommend the product the key to adoption? Awareness and word of mouth are most important for certain kinds of products. In her article, Brownell explains which types are most likely to fall under this category.
  • Does adoption depend on the number of users? For example, social networks will only catch on if people sense that there will be widespread adoption. If no one is using Facebook, what is the point of joining?
  • Is adoption socially acceptable? If not, perceptions must be changed before the product is adopted. Often, adoption by change agents and opinion leaders gives the right signal to the masses.
  • Do product reviews aid adoption decisions? People often use others as shortcuts in their purchase decisions. Brownell discusses when peer or expert opinions are sought.

Social factors are always important to consider, yet when it comes to new product research, the stakes of ignoring them are even higher. To minimize the risk of overly rosy market research, researchers should work to uncover which specific social factors play into the situation at hand and address them during the research process—or at least consider them when interpreting and reporting key findings. For more of Brownell’s tips, read her full article in Quirk’s August 2014 issue.

This article was written by Research Rockstar intern Sarah Stites. Sarah is a student at Grove City College, and is a member of the Research Rockstar Scholarship program for college students.





Recipe for eLearning: A Letter from Research Rockstar’s Director of eLearning Curriculum

One day when my nephew was 8 years old, he brought cupcakes to elementary school.  The teacher asked him who made the cupcakes.  Unflinchingly, he replied “Betty Crocker.”  While his classmates may not have chuckled, his teacher certainly did.  Similarly, whether you “bake” from scratch or out of a box, the recipe for good eLearning starts with preparation.

When I joined Research Rockstar in October 2014, the President had a goal: upgrade the self-paced classes offered on its website. The company already had over twenty excellent courses offered as “live” instructor-led virtual classes and as self-paced ones—covering a wide range of qualitative and quantitative market research topics.

But the President, Kathryn Korostoff, was clear: it was time to raise the bar on self-paced learning for the market research industry. She gave me a mission: continue delivering the same high-quality content while improving student comprehension and retention. Oh, and keep it fun.

I knew this would be harder than “just add water.” Still, at least I had a library of existing, great content with which to work, and for eLearning developers like myself, that’s not always the case.

To get started, I did three things: (1) audited existing classes, (2) reviewed current documentation (workbooks and related student reference materials), and (3) developed a set of selection criteria to make sure we chose the best tool for Research Rockstar’s goals (the company had outgrown the software previously used to develop its self-paced classes).

Selecting an eLearning Authoring Tool

There is no shortage of eLearning authoring tools, and each one has an array of features, enhancements, templates and options. Ultimately, they are all aimed at supporting instructional design: allowing the eLearning developer to create a storyboard and give the course a “voice” so that the learner will be engaged.

After analyzing various products, I narrowed our short list to two: Lectora and Storyline. I then did trials of each, so that I could get hands-on and see which one would best meet our selection criteria. I created samples, and quickly identified some differences.

While both products are excellent, we settled on Lectora. It’s a popular, robust eLearning authoring tool which provides a wide range of features such as importing existing PowerPoint slides, designing custom animations, sectioning segments within a course, supporting an assortment of quiz question types (multiple choice, hot spots, drag and drop, true/false) and generating a Certificate upon successful course completion. As a developer, I found it powerful and easy to work with. Kathryn was also particularly happy with Lectora’s collaborative reviewing tool, ReviewLink.

Preparing Self-paced eLearning for Research Rockstars

The key ingredient of good eLearning is engaging the learner by way of activities. Through various learning modes—audio, video, visual and kinesthetic—the learner becomes involved in the process. Some studies indicate that interactivity can boost the overall knowledge retention rate to anywhere from 50% to 90%. Compare this to the training industry’s average knowledge retention rate of 5% to 30%, and you can see how interactivity improves results. So over the next few months, I’ll be working to transition Research Rockstar’s self-paced classes to the Lectora platform while adding multimedia content and animations to showcase scenario-based learning. In this way, I intend to improve comprehension and retention rates for our market research training students. While keeping it fast, fun and convenient, of course.


As the newest member of the Research Rockstar team, I welcome any feedback and suggestions.

Contact me at or at 508.691.6004 ext 706.

Debra Mascott’s recipe for good eLearning:

  1. One package of good content (substantiated by Subject Matter Experts)
  2. One solid, robust authoring tool (to provide multimedia, interactivities, questions and flexibility)
  3. One instructional designer to give the course a voice, follow adult learning theory, highlight clear performance objectives, create engaging activities, and reinforce concepts with summaries
  4. A sprinkle of activities, multimedia, audio, video, characters and knowledge checks to bolster the material

Mix all four ingredients and it may come out just as delicious as my nephew’s Betty Crocker cupcakes.