How to recruit high-quality survey participants
Online research platforms, panels, and providers are steadily growing in popularity. As a result, it’s becoming increasingly easy to find survey participants for a given research project. However, research professionals understand that sample size does not equate to sample quality.
Therefore, it’s important to view the participant recruitment process itself as the first and, arguably, most important part of ensuring online survey results (i.e., data) will be of the highest possible quality. Here are three simple yet impactful ways to help do exactly that.
It’s true that, traditionally, the idea of paying people to complete a survey or questionnaire was controversial. The concern was that compensation could unwittingly influence how participants responded to questions. But modern survey and questionnaire design methodologies can, with care, account for any potential influence compensation might have. Good thing, too. Because, increasingly, survey participants expect to be paid for their time. In fact, not paying participants is proving to have larger consequences in contemporary research.
As Sabrina Trinquetel explains in a recent ResearchLive article, “... in reducing these individuals to a number [we] forget about their treatment in our ecosystem... with the downward creep of budgets and commoditization of online research, we’ve created a process that means people are not afforded the treatment they should be receiving.”
Remember, it takes time to participate in a survey. And this is time that, especially in a gig economy, could be spent earning money some other way. So, by paying less than a fair rate, you may actually be asking survey participants to lose money by taking part in your research. That’s ethically questionable on its own. But when the goal is high-quality respondents, setting up a revenue-negative situation will certainly work against you.
This isn’t to suggest researchers need to “overpay” when recruiting participants. Research shows that, in general, employees care less about the specific amount they’re paid and much more about whether they feel fairly compensated. There’s no reason to expect survey participants to feel any differently. This is why, ideally, the research platform you use will help you determine equitable rates for your project. But, no matter what, you should factor in the following when determining survey pricing:
Done ethically and with care, fair pricing means that nearly anyone will want to participate in your survey. That’s a good problem to have. Because not everyone who’s willing to be part of your research should be.
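To make the fairness check concrete, here’s a minimal sketch of the arithmetic behind equitable survey pricing. The function names and the £9/hour target are illustrative assumptions, not Prolific’s actual pricing rules:

```python
# Hypothetical helpers: check whether a proposed reward meets a target
# hourly rate. The default minimum rate below is an illustrative assumption.

def hourly_rate(reward: float, minutes: float) -> float:
    """Return the effective hourly rate implied by a survey reward."""
    return reward / (minutes / 60)

def is_fair(reward: float, minutes: float, min_hourly: float = 9.0) -> bool:
    """True if the reward meets or beats the minimum hourly rate."""
    return hourly_rate(reward, minutes) >= min_hourly

# A £2.00 reward for a 15-minute survey works out to £8.00/hour,
# which falls short of a £9/hour target.
print(hourly_rate(2.00, 15))  # 8.0
print(is_fair(2.00, 15))      # False
```

The point isn’t the specific threshold: it’s that pricing should be derived from a realistic estimate of completion time, not set as a flat amount in isolation.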
Curious about Prolific’s own audience of potential participants? Click here for details.
As the academic and market research industry continues to grow, the number of potential participants grows too. This is a good thing. But with more people regularly participating in online research, more of them will naturally be less than a perfect fit for your needs. This means prescreening (i.e., preemptively filtering) survey participants is an increasingly crucial part of ensuring online surveys deliver quality responses.
Unlike the controls afforded by in-person research studies, participants sourced through online providers will likely engage on a first-come, first-served basis. With less (or no) control over the order in which survey responses return to you, it’s especially wise to ensure that anyone who gets to your survey first will be a qualified respondent, not simply a quick one.
Online research platforms and providers should, at their most basic, allow any age bracket to be selected and applied to a potential pool of survey participants. However, when a representative sample is required, age may be broken down into ten-year brackets (e.g., 18-27, 28-37, etc.).
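Bucketing ages into those brackets is simple to automate. The sketch below is illustrative (the function name and the 18+ minimum are assumptions, not platform behavior):

```python
def age_bracket(age: int, start: int = 18, width: int = 10) -> str:
    """Map an age to a ten-year bracket label like '18-27' or '28-37'."""
    if age < start:
        raise ValueError("below minimum participant age")
    low = start + ((age - start) // width) * width
    return f"{low}-{low + width - 1}"

print(age_bracket(18))  # 18-27
print(age_bracket(27))  # 18-27
print(age_bracket(30))  # 28-37
```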
Other basic, sortable demographics typically include sex, ethnicity, and nationality. As online research platforms get better, an array of useful prescreening options is becoming available. For instance, by using Prolific, researchers can also filter potential survey participants for more specific factors, including:
In general, the more prescreening options you have at your disposal, the healthier your samples will be. And more options mean you’ll have more granular control in relation to the specific needs of your research project. Finally, it’s just a matter of making sure potential survey participants, fairly paid and prescreened, actually do pay attention to what they’re doing.
However, in saying you should test the attention of your survey participants, we, in no way, advocate the testing of their patience. These are two very different concepts. (In fact, if you’re experiencing any confusion here, take a look at our Researcher Help Centre. Helping folks find research participants they can trust is kinda our thing 🙌 )
Attention plays an important role in online research: when a respondent is paying attention, their responses can be deemed more reliable. And when we’re soliciting first-party data anonymously online, any way to determine reliability becomes that much more important. A common way to gauge participant attention as they fill out a survey is the use of ACQs, or attention check questions.
At their most basic, ACQs are engineered to gauge whether or not participants are paying close attention as they take a survey. ACQs should:
Consider the following:
EXAMPLE SURVEY QUESTION
The following question is simple: when asked what the best in-person networking event is, you need to select ‘LinkedIn.’ This is an attention check.
Based on the text you read above, which of the below is correct?
The question above is clear and simple, and the correct response is clearly defined. However, there is another approach to attention checks that’s worth noting.
Nonsensical questions give participants no explicit direction on how to answer. Instead, the statement itself has only one objectively correct answer. By placing all potential responses on a scale, the correct answer should be clear to anyone who’s paying attention.
Again, consider the following example:
The statement above has an objectively correct answer (no one achieves this task on a daily basis to commute to work). Answering this question correctly requires no prior knowledge. And even though we’d hope attentive survey participants would choose “Strongly Disagree,” the choice of “Disagree” would still be an indicator of attention.
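Scoring a nonsensical check like this is straightforward to automate. The sketch below is a hypothetical example (the scale labels and which responses count as passing are assumptions you’d tailor to your own survey), treating both disagreeing responses as a pass, in line with the logic above:

```python
# Illustrative scoring of a nonsensical attention check on a
# 5-point Likert scale. Labels and passing set are assumptions.
LIKERT = ["Strongly Disagree", "Disagree", "Neither Agree nor Disagree",
          "Agree", "Strongly Agree"]

PASSING = {"Strongly Disagree", "Disagree"}

def passes_attention_check(response: str) -> bool:
    """A response passes if it falls on the disagreeing end of the scale."""
    if response not in LIKERT:
        raise ValueError(f"unknown response: {response!r}")
    return response in PASSING

responses = ["Strongly Disagree", "Agree", "Disagree"]
print([passes_attention_check(r) for r in responses])  # [True, False, True]
```

Flagging failures this way lets you review or exclude inattentive respondents consistently, rather than judging each submission by eye.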
Whether you use direct ACQs or nonsensical questions, it’s beneficial to study good and bad examples side-by-side. Understanding the differences and being able to put them into practice ensures participant attention can be easily measured alongside more straightforward metrics, like response rates and survey completion percentages.
Fairness, filtering, and...attention check questions.
Dang. Okay. Well, we may not be able to call these the “Three Fs” of recruiting high-quality survey participants. That said, they do serve as a solid foundation to set your research up for success.
And this approach should be beneficial no matter which online research platform you choose to work with. But remember, different platforms employ different criteria for admitting participants in the first place.
To build on the foundation detailed above, make sure the ways participants are vetted match your needs and the needs of your research.