Many market research, UX, and CX teams now use AI for brainstorming, writing, and a range of research tasks. But it's not always easy, especially when AI is used to help draft surveys and discussion guides. The recurring issue isn't effort; it's uneven quality from ad hoc prompting.
In "Prompt Engineering for Researchers," instructor Kathryn Korostoff demonstrates two well-known, structured approaches that keep rigor intact while reducing rework.
First, Prompt Chaining: design the AI's deliverable step by step, using short review loops. For example, instead of the instruction, "Write a discussion guide for one-hour IDIs with pet owners to gather their perceptions of pet food brands," we'd start with, "Outline a discussion guide for one-hour IDIs with pet owners to gather their perceptions of pet food brands. Name the sections and estimate durations." The researcher then reviews the outline and gives feedback, catching errors at the interim steps before requesting the full guide. Many research tasks can be broken into three or more steps, and this usually yields a much better first draft than the original one-shot prompt would produce.
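For researchers who script their prompting rather than work in a chat window, here is a minimal sketch of that chaining loop in Python. The OpenAI client, model name, and feedback text are illustrative assumptions on my part, not details from the episode; the point is simply that each step feeds the reviewed output of the previous one.

```python
# A minimal sketch of prompt chaining, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(prompt: str, history: list[dict] | None = None) -> tuple[str, list[dict]]:
    """Send one step of the chain, keeping prior turns as context."""
    messages = (history or []) + [{"role": "user", "content": prompt}]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    return answer, messages + [{"role": "assistant", "content": answer}]


# Step 1: ask for an outline only.
outline, history = ask(
    "Outline a discussion guide for one-hour IDIs with pet owners to gather "
    "their perceptions of pet food brands. Name the sections and estimate durations."
)

# Step 2: the researcher reviews the outline; their feedback (illustrative here)
# is folded back into the chain before the full deliverable is requested.
feedback = "Shorten the warm-up to 5 minutes and add a section on purchase drivers."
_, history = ask(f"Revise the outline based on this feedback: {feedback}", history)

# Step 3: only now ask for the full guide, built from the approved outline.
guide, history = ask("Now write the full discussion guide from the approved outline.", history)
print(guide)
```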
Second, Reflexion (yes, with an "X"): asking the AI to critique its own draft for bias, confusion, or logic errors. Example: the AI has produced a questionnaire draft for you, and you then instruct it, "Review the draft and revise any questions that may be leading, biased, or confusing. Then list the changes you made and why you made them." And it will!
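And a matching sketch of the Reflexion-style self-critique pass, under the same assumptions about the client and model; the draft placeholder stands in for whatever questionnaire the AI produced in an earlier step.

```python
# A minimal sketch of a Reflexion-style self-critique pass, again assuming the
# OpenAI Python SDK; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = "..."  # placeholder for the questionnaire draft from an earlier step

critique_prompt = (
    "Review the draft below and revise any questions that may be leading, "
    "biased, or confusing. Then list the changes you made and why you made them.\n\n"
    + draft
)
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": critique_prompt}],
)
print(reply.choices[0].message.content)  # revised questionnaire plus change log
```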
Check out this episode for examples of effective AI prompting for qualitative and quantitative research tasks.
#MarketResearch #QualitativeResearch #UXResearch #CXResearch #SurveyDesign #AIforResearch