Self-Reported Data is Problematic, or Worse

Self-reported information is not perfect. But it is even less perfect in some cases than in others.

  • Point: I can tell you from having conducted hundreds of studies with IT professionals that certain things get over-reported. Plans to invest in hot new technologies are a classic example. Not intentional, perhaps, but it happens.
  • Point: Social desirability is a known issue as well. For example, asking people directly about exercise and dental hygiene is known to be problematic. Some studies suggest people over-report voting (saying they voted when they did not). Some research even suggests that the impact of social desirability on survey responses may vary by country, further complicating the interpretation of research results (Steenkamp, de Jong, and Baumgartner, Journal of Marketing Research, 2009, Vol. XLVI).
  • Point: The survey process itself, according to some research, has an impact on behaviors—making self-reported data iffy. Surveying people about purchase intentions may actually change their behavior. After all, some people only form an intention when asked about it (you may not have thought about whether you planned to buy a new PC this year until you got a survey asking you about it). Further, some research suggests that a surveyed group that reports plans may be more likely to behave that way—a sort of self-validating effect that makes them less representative of the non-surveyed population (Chandon, Morwitz, and Reinartz, Journal of Marketing, April 2005).

So at minimum, it appears that purchase intent and socially desirable items are at particular risk of inaccurate self-reporting.

What does this mean for researchers? When designing research projects we have to be vigilant:

  • Are we asking about items that we can expect most people to answer accurately?
  • Are we OK, for a given topic and research objective, knowing that there may be a big gap between perception and reality?

Given the limitations of self-reported data, survey research (especially about sensitive topics and purchase intent) may simply be the wrong methodology for some projects. Luckily, there are alternatives. For example, there has been a huge increase in the amount of actual behavioral data available to researchers in recent years. Increasingly sophisticated CRM databases, purchase data, and observational data (such as Internet behaviors) provide access to actual behavior—what people are buying, what they are looking at, the sequences that precede a purchase, and more.

Another option is ethnography. Some researchers find that observing people can be more reliable, and insightful, than asking them to self-report.

The bottom line: Intentions aside, survey respondents simply can’t accurately self-report some items of interest to researchers. Can they get us “close enough”? Sure, if we are aware of the limits and apply the research with appropriate caveats. Go ahead and ask people if they plan to purchase the latest techno widget in the next 6 months. The results say something about openness to marketing messages. But I wouldn’t use it for sales forecasting.
