How to get the best results from your custom screening study
It goes without saying that when you’re conducting research on a particularly niche topic, you need particularly niche study participants. It’s equally true that the more closely your participants align with your subject matter, the more accurate your data will be.
For example, it’s no good simply searching for men aged 30-50 if your research is specific to male, 30-to-50-year-old trombone players who can code.
Finding the latter – without scouring the country’s open mics and computer labs – sounds rather tricky. But it is possible, thanks to custom screening.
Custom screening is a two-step process that sees participants take a short, lower-paying survey to establish whether they meet your exact criteria before you invite them onto your main, higher-paying study. And while it adds an extra step to your research, it’ll make all the difference when you come to analyze your data.
If you’re going to do things the proper way, you must get your questions right, too. In fact, your data could depend on it. So, here are our six steps for better custom screening:
Even considering custom screening means you’re likely looking for the type of participants you don’t come across day-to-day.
For example, how many hundreds of people do you think you’d need to go through before you found enough middle-aged, male, coding trombone players to draw significant conclusions from?
The more people you quiz, the higher your chance of finding those who can really help you. So, recruit more participants than you think you need, starting with a screener that’s broad yet as relevant as possible to your study – say, those men aged 30-50 we talked about earlier.
At best, you’ll find precisely the number of participants you’re looking for. And at the very worst, you’ll have a larger – and therefore more representative – sample size.
Participants are passionate about helping with studies, and some will want to take part in as many as possible. But as a researcher, your priority is finding the most suitable people for your main study.
By not revealing the follow-up study until the initial study is over, you ensure you can invite the most relevant people to the study that really matters.
So, don’t tell your participants this study is a precursor to another until they’ve already answered all your screening questions. (And don’t give them the option to go back and edit their input once they’re aware.)
There are, of course, advantages to being open and honest with participants from the start. By letting them know what they’re getting themselves into, you could increase your show rate for your main study. But is it worth the risk of marring your data? Probably not.
There’s another sure-fire way to let on that you’re screening them for something greater: asking yes-or-no questions.
If you ask people, “Do you play the trombone?”, swiftly followed by, “Can you write code?”, the chances are, they’ll have an inkling you’re looking for trombone players who code.
So, instead, mask your subject matter by providing lists of possible answers and inquiring: “Which of the following musical instruments do you play, if any?”. Then, perhaps, “Which of the following computer skills are you proficient in?”.
Participants will, of course, guess you’re seeking someone music- and tech-savvy – but that’s where their knowledge, and ability to wrongfully gain access to your more specific study, ends.
Ditching yes-or-no questions also helps you gather enough information about your participants to ascertain how much they can help you.
Instead of merely asking whether someone plays the trombone or can code, include contextual questions – aiming to learn things like how long they’ve played the musical instrument, or how advanced their computer skills are.
Another method you can use to avoid rogue ‘yeses’ is implementing tests – essentially, asking questions that only truth-tellers would be able to answer.
In our trombone-playing coder example, you could ask participants to identify parts of a trombone or write a line of code that performs a certain function.
You could argue that people can look up answers online to cheat – but it’s unlikely they’ll bother. Adding this extra layer of effort seems to be an effective deterrent for people tempted to take shortcuts.
Ideally, you want to place your test as early on as possible in the study, so as not to waste participants’ time – or yours. And don’t make it too intrusive, as it’s still critical to preserve everyone’s anonymity.
That brings us to one of the most-asked custom-screening queries: how many questions should a screening study have, and in what order should they appear?
Ultimately, you want your survey to act as a series of sieves, with the widest ‘holes’ at the top and the narrowest at the bottom. Your first question should separate out the most unsuitable people. Your next should catch a few more. Your final question should leave you with only the most relevant, valuable participants.
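The sieve ordering above can be sketched in a few lines of Python. This is purely illustrative – the participant records and screening criteria are hypothetical stand-ins for our trombone-playing coder example, not anything from a real screening tool:

```python
# A minimal sketch of the "series of sieves" idea: hypothetical screening
# criteria applied in order, from the broadest filter to the narrowest.
# All names and criteria here are illustrative.

participants = [
    {"age": 35, "instruments": ["trombone"], "skills": ["python"]},
    {"age": 22, "instruments": ["guitar"], "skills": []},
    {"age": 44, "instruments": ["trombone", "piano"], "skills": []},
    {"age": 51, "instruments": ["trombone"], "skills": ["python"]},
]

# Sieves ordered widest to narrowest: the first screens out the most
# unsuitable people, the last leaves only the most relevant ones.
sieves = [
    lambda p: 30 <= p["age"] <= 50,            # broadest: demographics
    lambda p: "trombone" in p["instruments"],  # narrower: plays trombone
    lambda p: "python" in p["skills"],         # narrowest: can code
]

pool = participants
for sieve in sieves:
    pool = [p for p in pool if sieve(p)]

print(len(pool))  # only the 35-year-old trombone-playing coder remains
```

Ordering the sieves this way means each successive question is only ever put to people who passed the broader ones before it – exactly how you’d want a screening survey to narrow down its pool.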
Still, your first question should always be one that, were it the only question asked, would tell you whether your participants were suitable or not. So, make it count.
It’s not uncommon for custom screening studies to only feature this one question. We’d recommend a minimum of two, allowing you to hopefully reach that ideal sample and get the most accurate data. Many more, and you’ll put people off taking part altogether, losing a chunk of potentially fantastic participants.
Here at Prolific, not only do we have a huge database of varied participants who are sure to meet your sampling goals, but our people have also been carefully vetted, so you can rest assured they’ll respond truthfully at screening.
Sign up to get started today. And if you're looking for more tips and tricks that will help you get the best results from your studies, download our complete best practice guide to online research.