Articles

10 tried and tested tactics for boosting participant engagement

Jo Evershed | October 24, 2023

To get the best data quality, you need quality participants who are engaged and motivated. But just how do you find these people and tempt them onto your study?

Offering fair pay is a critical part of it, of course. But to drive participant engagement, you've also got to build a positive relationship with participants and put effort into their study experience. Plus, you need to give them the opportunity to feel valued and have pride in their work.

And how do you achieve all this? With the data-quality framework!

At a recent Gorilla Presents, Jo Evershed, CEO of Gorilla Experiment Builder, shared her insights into participant engagement. Regardless of the research methods used on Gorilla – whether it’s a simple survey, reaction-time research, gamified research, or a multiplayer study – every researcher shares a core need for high-quality data. Read on to discover her ten top tips for keeping participants engaged and motivated so that you’re not left analyzing rubbish data.

Introducing the data-quality framework

Three aspects of data quality are within your control during the participant experience:

  • The relationship you form with the participant
  • The experience you create for them  
  • The quality controls you put in place to identify low-quality data

This data-quality framework is an essential trifecta to consider when designing your research study – from the moment participants join it to the second they leave.

How to engage participants with 10 simple tips

Building strong scientist-participant relationships

To boost participant engagement, you need to treat participants like the valued research collaborators they are – not simply like cogs in the machine of your data-gathering process.

Tip #1: Create respect, gain respect

It’s vital that you trust your participants and show them that trust through every element of your study design. Assume the best in people while preparing for the worst, and everything should run as smoothly as possible.

Essentially, if you set up an antagonistic relationship – where you trick people, or assume they’re trying to cheat you – you’ll set yourself up for failure (in the form of lower-quality data and, potentially, grumpy participants).

So, where to start? From the jump, with your introduction. Every good relationship begins with one – and your participants will want to know who you are and what your study aims to find out before they commit to taking part.

Tip #2: Write the ideal introduction

A perfect introduction will include your name, university, lab name, area of expertise, and research goals – written in layman’s terms, describing how you hope to make the world a better place.

That last bit is key. Clearly explain why your research matters, and participants will know they’re contributing to something meaningful.

You’ll also want to thank your participants, and express how your research wouldn’t be possible without their time and attention. (It’s true, after all.)

Creating the perfect study experience

You’ve set the groundwork for a great working relationship with your participants. Now it’s time to consider their experience as they move through the following stages of your study. And after introductions come instructions.

Tip #3: Be conversational, not clinical

Your participants are people, not robots. (More on protecting yourself from those who actually are bots later.) So, write your instructions in plain English, using easy-to-understand language free from jargon and gibberish.

Or, even better, use video. Video instructions are handy, as participants must watch them at the pace you set while you show them exactly what they need to do. We’d go as far as to say that video instructions are essential for complex study setups, like eye-tracking.

Tip #4: Gamify your instructions

Practice trials are another fantastic way to teach participants what you need them to do. Think of your study tasks as mini video games, and your practice trials as the tutorial level, with on-screen prompts or audio narration.

These tutorial levels give participants a chance to understand what you need from them and to get familiar with the in-study controls. You can also use them to assess performance: if someone repeatedly fails to give meaningful input during the practice trials, you probably won’t get quality data from them in the real study. At that point, you can choose to remove them, or to mark their data for exclusion at a later stage.
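
If you export practice-trial data to screen performance before the main task, the check can be a few lines of code. Below is a minimal sketch in Python; the file name, column names, and the 60% accuracy threshold are illustrative assumptions, not Gorilla defaults.

```python
import pandas as pd

# Hypothetical export of practice-trial responses: one row per trial, with a
# "participant_id" column and a "correct" column (1 = correct, 0 = incorrect).
practice = pd.read_csv("practice_trials.csv")

# Accuracy per participant across their practice trials.
accuracy = practice.groupby("participant_id")["correct"].mean()

# Illustrative threshold: flag anyone below 60% practice accuracy for
# exclusion, or for another run through the tutorial, before the main task.
THRESHOLD = 0.60
flagged = accuracy[accuracy < THRESHOLD].index.tolist()

print(f"{len(flagged)} participants flagged for review: {flagged}")
```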

Tip #5: Pay your way

Remember: if you get poor data, it doesn’t automatically mean the participant was shirking. It’s possible that they couldn’t perform the task well (or didn’t grasp it properly).

Give them the benefit of the doubt, and pay them for the work they’ve done regardless – disputing payment will cost you more emotional energy, time, and, therefore, money than it saves. You can then choose not to hire them again – platforms like Prolific give you the option to exclude participants from future studies.

Tip #6: Signpost, signpost, signpost

Now, onto your study itself.

Have you ever walked somewhere for the first time, and found the route seemed substantially shorter on your way back? This is because you no longer felt lost. By this logic, if you want your participants to feel like your study flies by, you need to make sure their journey is well-signposted throughout.

We recommend including a map of your study, giving an idea of its overall flow without compromising your experimental controls. For long blocks of trials, a progress bar can be helpful, too. This stops participants from experiencing that ‘lost’ feeling and gives them a frame of reference for when the study will end.

How you give participants feedback – on their answers and the pace at which they give them – is also an essential consideration. Animations, particle effects, theming, and scoring can make feedback more fun and exciting, and they’re easy to incorporate using a tool like the Gorilla Game Builder.

Gone are the days of spending £20-80k and skilling up in coding to develop gamified study tasks; check out Gorilla’s site to learn more about how you can create them in a click (and within your budget).

Tip #7: Build in functional breaks

We all need a break from our work occasionally, and participants are no exception. If a task takes longer than 10 minutes, build one in – either to give people time to get up from the computer, grab a drink, and stretch, or to help them refocus.

Simply want your participants to rest their eyes and stretch their legs? Get them away from their screen with a timed ‘take a break’ page that only lets them click ‘continue’ once five minutes have passed, then picks up where they left off – perhaps with a minute-long countdown after those five minutes are up in case people are delayed getting back to their desk. We’ve heard that, with carefully managed breaks, some researchers are getting participants to complete hour-long studies online.

Just need your participants to refresh their concentration? Try including a game-based distraction task to keep their brain working (and entertained).

Taking control of your data quality

The final part of the data-quality framework is your quality controls.

And there are plenty of measures you can put in place to ensure you recruit the best participants – getting the best data quality as a result.

Tip #8: Consider different types of controls

These include attention checks (which shouldn’t be antagonistic) to weed out bots and bored participants; internal control methods to screen out participants who shirk or misunderstand tasks, as discussed earlier; and a debrief or feedback section. (More on that shortly.)

One effective control is asking participants to describe specific tasks in their own words – warning them ahead of time that you’ll be doing this, and making sure they don’t simply copy and paste from your instructions. This gives you feedback on whether your task instructions were clear enough, and lets you exclude data from participants who clearly don’t understand what they’re supposed to be doing.
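
If you collect these descriptions as free text, a lightweight way to catch copy-and-paste answers is to compare each response against your instruction text. The sketch below uses Python’s standard-library difflib; the instruction text and the 0.8 similarity cut-off are illustrative assumptions, and flagged answers should still be read by a human.

```python
from difflib import SequenceMatcher

# Illustrative instruction text - substitute your own task instructions.
INSTRUCTIONS = (
    "Press the left arrow key when you see a blue shape and the right "
    "arrow key when you see a red shape. Respond as quickly as you can."
)

def looks_copied(answer: str, source: str = INSTRUCTIONS, cutoff: float = 0.8) -> bool:
    """Return True if the answer is a near-verbatim copy of the instructions.

    SequenceMatcher.ratio() is 1.0 for identical strings; the 0.8 cut-off
    is an arbitrary illustration, not a validated threshold.
    """
    similarity = SequenceMatcher(None, answer.lower(), source.lower()).ratio()
    return similarity >= cutoff

# A paraphrase in the participant's own words passes; a copy-paste is flagged.
print(looks_copied("I had to press left for blue shapes and right for red ones."))  # False
print(looks_copied(INSTRUCTIONS))                                                    # True
```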

We recommend pre-registering unambiguous criteria for removing participants, so you can fairly exclude those who provide low-quality data. It’s also a good idea to over-recruit by 10-20% to compensate for this sample-size shrinkage.
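
To turn that buffer into a concrete recruitment number, a quick calculation does the job. The minimal sketch below divides by the expected retention rate, which gives a slightly more conservative buffer than adding a flat percentage; the 200-participant target and 15% expected exclusion rate are purely illustrative.

```python
from math import ceil

def recruitment_target(required_n: int, expected_exclusion_rate: float) -> int:
    """Number of participants to recruit so that, after excluding the expected
    proportion, you still end up with roughly `required_n` usable datasets."""
    return ceil(required_n / (1 - expected_exclusion_rate))

# Illustrative numbers: a target sample of 200 and a 15% expected exclusion
# rate imply recruiting about 236 participants.
print(recruitment_target(200, 0.15))  # 236
```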

Wondering whether to pay these participants? Our general rule is to pay them, then simply not hire those who didn’t do good work again. On Prolific, this is easily done with blocklists.

Tip #9: Ask for feedback

Coming back to that debrief or feedback questionnaire. By asking participants about their experience, you can make your studies better and better each time.

And by including direct questions like, ‘What do you think we were researching?’ or, ‘Did you take any notes during our memory test?’, you can find out whether participants guessed your hypothesis and experimental conditions or cheated.

Make it clear that participants will still get paid for their time if they answer questions like this honestly – even if they admit to cheating. Just make a note not to invite them onto your next study, and to remove their data before you analyze your results.

The best question to ask is, ‘Do you have any other feedback for me?’ Engaged participants will write something. Unengaged participants will leave it blank. Annoyed participants will tell you exactly what annoyed them. Whatever responses you get, you’ll learn something.

Tip #10: Pilot progressively

Another brilliant way to get feedback on your study before your participants even take part in it is with progressive piloting.

A typical progressive pilot plan involves the following steps before you deploy your final experiment:

  • Personal testing
  • Supervisor review
  • Testing by colleagues
  • A small online pilot
  • A power sample online

This may take a little more time and a little more money, but the additional investment is tiny within the context of the overall fixed cost of research (salaries and university overheads). The indisputable truth is that when it comes to behavioral research, a lot of the research value is in the original data you collect and the original thoughts you have about it. Piloting ensures you collect high-quality data that’s robust and reliable, so we’d say the investment in data quality is worth it.

To learn more about boosting participant engagement and ensuring quality data, watch the latest webinar from Gorilla Presents… on improving participant engagement. Jo Evershed, CEO & Co-Founder at Gorilla Experiment Builder, joins Andrew Gordon from Prolific to discuss the checks you can build into your tasks to filter out disengaged participants.