How to write good survey questions for online research: 6 simple rules
How do you write good survey questions for online research? That is, ironically, the question.
To the untrained eye, the way you design questions in a research study might not seem like it matters much at first. But there are a whole host of ways that question phrasing can introduce bias and other complications which will affect your findings.
Don’t let yourself fall into these traps. Follow these six simple rules to ensure your research questions yield the best-quality data.
1. Use clear, unambiguous language
Asking questions clearly is critical to getting the high-quality data you want. Some words mean different things to different people, and vague language is easy to misinterpret.
Let’s take an example. Say you ask the question:
In the past month, how often did you shop for clothes?
What does “often” mean to you? Twice a month? Once a week? Once a day? One person’s idea of “often” is another person’s “occasionally”. A shopaholic might not bat an eyelid at a daily browse of clothing retailers, while a more budget-conscious participant might balk at the idea of shopping for clothes more than twice a month.
In this instance, you can remove the ambiguity by making the answers more measurable and defined:
In the past month, how many times have you shopped for clothes?
Be as specific as possible in your questions and avoid ambiguous terms that could have subjective interpretations. Other examples include “fast”, “nice”, “good”, “big”, and “high”.
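If you draft or review questions programmatically, ambiguous terms like these can be caught with a simple lint pass. This is a minimal sketch: the `VAGUE_TERMS` list and the `flag_vague_terms` name are illustrative, not part of any survey tool.

```python
# A minimal sketch of a vague-wording check for draft survey questions.
# The word list and function name are illustrative assumptions.
import re

VAGUE_TERMS = {"often", "occasionally", "fast", "nice", "good", "big", "high"}

def flag_vague_terms(question: str) -> list[str]:
    """Return any ambiguous words found in a draft question."""
    words = re.findall(r"[a-z]+", question.lower())
    return sorted(set(words) & VAGUE_TERMS)

print(flag_vague_terms("In the past month, how often did you shop for clothes?"))
# → ['often']  — a hit suggests rewording, e.g. "how many times have you shopped..."
```

A hit from a check like this is only a prompt to reword; whether a term is genuinely ambiguous still needs human judgement.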
2. Make questions easy to answer
This might seem like a simple point, but it’s one that’s easy to overlook. When you ask a question, your participants must be able to answer it in a way that gives you the most relevant and valuable data.
Pay careful attention to the wording of questions and answer choices. If you ask “How interested are you in learning a new language?” and offer Very Likely, Somewhat Likely, Somewhat Unlikely, and Very Unlikely as answers, the wording of your question and answers doesn’t align. Offering Very Interested, Somewhat Interested, and so on instead makes the question easier to answer.
Ensure the answer format aligns with the question as well. If you’re asking how strongly a participant believes something, a Likert scale will work better than a free-form text response. If you’re looking for more in-depth answers to a question, a free-form text response offers more space to capture detail than a multiple-choice answer. Think about the type of data you need and the most effective way for your participants to submit it.
3. Avoid jargon and complex language
A participant can’t give you a satisfactory answer to a question they can’t understand. This isn’t rocket science - don’t ask questions that sound like rocket science!
Still, it’s all too easy to fall into the trap of making questions more complex than necessary. Every industry or specialism has its share of jargon, acronyms, and technical terms that mean nothing to the average layperson. When you live and breathe this language every day, it can creep into your writing as you design your study questions.
That’s why it’s important to take a step back and put yourself into the headspace of your participants. Keep your questions accessible: use plain language, spell out acronyms on first use, and define any technical terms you can’t avoid.
4. Avoid double-barrelled questions
A double-barrelled question is one that covers two or more topics but allows only one response.
Here’s an example:
How would you rate the speed and accuracy of our software?
This question tries to capture a response about two distinct qualities of the software - speed and accuracy - with a single answer. The problem is that the participant might feel the software was fast, but not very accurate. Or perhaps the software was bang on in terms of accuracy but took forever to deliver results.
The confused participant has no way of giving an answer that reflects their true feelings. And you get inaccurate data as a result.
A simple fix for this is to break double-barrelled questions out into two separate questions. So, the example above would become:
How would you rate the speed of our software?
How would you rate the accuracy of our software?
This makes the questions easier to answer and the results much simpler to interpret.
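For this particular question template, the fix can even be sketched mechanically. A naive illustration that assumes the exact “How would you rate the X of our software?” phrasing - real questions still need a human eye:

```python
# A naive sketch of splitting a double-barrelled question into one
# question per topic. The fixed prefix/suffix template is an assumption.
def split_double_barrelled(question: str) -> list[str]:
    """Split a 'rate the X and Y' question into one question per topic."""
    prefix = "How would you rate the "
    suffix = " of our software?"
    if question.startswith(prefix) and question.endswith(suffix):
        topics = question[len(prefix):-len(suffix)].split(" and ")
        return [f"{prefix}{topic}{suffix}" for topic in topics]
    return [question]

print(split_double_barrelled(
    "How would you rate the speed and accuracy of our software?"))
```

Even a crude “does this question contain an and between topics?” pass can flag candidates for a manual split.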
5. Pay attention to question order
Do you think that the order of your survey questions doesn’t matter? Think again!
Question-order bias can have a significant effect on the quality of survey responses. This happens when a question earlier in the survey sets a context or triggers an emotion that influences how a participant answers questions later in the survey.
For instance, if you were to ask how a product could be better, then ask the participant to rate it, they might score the product lower. This is because they were just thinking about not-so-great features or experiences when answering the previous question.
An effective way to avoid this bias is to start with your general, top-level questions, then ask more specific questions afterward. Avoid asking any overly personal or sensitive questions up front - lead in with broader questions to get the participant in the right mindset first.
6. Pilot-test your questions
You can tweak the design of your questions for as long as you like, but the only way to know if they’re ready for action is to pilot-test them on real participants.
Can they understand your questions clearly? Are the answer formats too constrictive or not specific enough? Is the order of questions introducing any form of bias?
Use insights and observations from your pilot test to refine your design, then keep testing and refining until your questions are honed to give you the high-quality data you need. You can also include a free-text response box at the end of your study, asking for further comments and feedback from participants.
Now you know how to phrase research questions that are clear, well-structured, and minimize bias. But do you have attentive, honest, and engaged participants to answer them?
If not, Prolific can help. Connect to over 130,000 active and trusted participants in minutes via our easy-to-use platform. Prolific pairs seamlessly with all the survey tools you love, including SurveyMonkey and Typeform. Just enter your survey link and go - it’s that easy.
Log in or sign up to get started now.