
Studying auditory distraction using online samples: An interview with Professor Emily Elliott

Andrew Gordon
July 14, 2023

Being able to focus our attention is a crucial cognitive skill. It allows us to function in complex environments full of distractions.

A seminal finding in cognitive psychology is that irrelevant background sounds can disrupt our short-term memory for information, a phenomenon referred to as the irrelevant sound effect. This effect has been extensively studied to better understand attention and memory. However, most of this research has been conducted with in-person testing in laboratories.

A new study by Professor Emily Elliott and colleagues examined whether this well-established effect can be reliably demonstrated with online testing methods. I recently got the chance to speak to Professor Elliott about this work, which has been published in the Journal of Cognitive Psychology. You can read the full paper here.

Can you give me a brief introduction to yourself and your area of research?

I am a professor of psychology at Louisiana State University, and I’m a cognitive psychologist by training. This means that I work on problems related to how we think, reason, and remember information.

My particular area of interest is the intersection of attention and memory in an immediate sense. For instance, as you listen to me speaking right now, you are paying attention and hopefully retaining some of the information we are discussing. This type of memory is distinct from retrieving childhood events or episodes from long-term memory. My specialty area is working memory, attention, and the development of these skills in children.

My interest in this area started when I was an undergraduate, where I noticed that the students around me seemed to have very different patterns of studying. I would go to the university library and observe the different strategies people used, the environments they chose, and the ways they thought they would be more or less successful at learning new information.

So, I did an honors thesis on the effects of music with and without lyrics. It turns out that when you're doing a task like reading and you have music with lyrics playing, it causes a decrease in your performance. You may be slower, you may be less accurate.

Then, in graduate school at the University of Missouri, I worked with a fantastic mentor, Nelson Cowan, a well-known expert in working memory. He helped me to deepen my understanding of this area and started me thinking about how these abilities develop in children.

Tell me a bit about the background to this piece of research. What led you to write a modality comparison piece?

Generally, my focus is on understanding how sounds influence our cognition. I'm interested in doing that work across the board, age-wise, so I've done some lifespan work with colleagues here at LSU where we've compared college-age students to middle-aged and older adults.

Then, for this particular project, I connected with John Marsh, who was a graduate student at Cardiff University working under the direction of Dylan Jones, who has spent his career trying to understand the effects of sounds on our cognitive processes and the ways in which they can be good and bad. I also connected with a number of other researchers around the world, including Simon Gorin and Raoul Bell. Together, we were looking for a collaborative piece to write.

One of the key ideas behind this collaboration was to expand this line of research beyond the traditional university student population. We usually test college students because they're available and they’re doing it for course credit or for a very small fee. But tools like Prolific offer the ability to recruit a much broader and more representative group of participants. So, what we wanted to do was to test both our traditional methods and newer online methods to determine whether the types of effects we are interested in can actually be studied online.

Can you talk me through the design of your study?

We set out to test three distinct samples: an ‘in-person’ group of students tested in the lab at Louisiana State University (LSU), a similar group of LSU students tested online, and a group of participants recruited on Prolific. We limited the age of the Prolific group to a maximum of 30 to make it as comparable as possible to the other samples, but also so that we didn’t introduce any bias into our data from including older participants, who may have hearing problems, in the sample.

We created a task using what is called a ‘serial recall paradigm’ to administer to our participants. In this case, it uses visually presented stimuli, such as a sequence of letters or numbers, and participants are asked to remember them in order. They are also told to ignore anything they might hear. Their only job is to memorize the stimuli in order and then reproduce them.
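Performance in serial recall is typically scored strictly by position, so an item only counts as correct if it is reproduced in the position it was presented in. Here is a minimal sketch of that idea in Python; the exact scoring rule used in the study may differ.

```python
def score_serial_recall(presented, recalled):
    """Strict serial-position scoring: an item counts as correct only if it is
    recalled in the same position it was presented in.
    Returns the proportion of positions recalled correctly."""
    correct = sum(p == r for p, r in zip(presented, recalled))
    return correct / len(presented)

# Example: swapping two adjacent items costs both of their positions
print(score_serial_recall(list("BFHJKLM"), list("BFJHKLM")))  # ~0.71
```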

This paradigm has been investigated very thoroughly because it has a lot of real-world implications. We're often asked to memorize sequences of numbers. For example, if you're in the front yard and you see a car hit a trash can, you might want to remember the license plate, so how do you accomplish that?

This memorization is a behavior that a lot of people engage in, and it can be easily disrupted. For example, if you are trying to remember that license plate and someone starts talking to you, you might struggle to remember the letters, numbers, or order of them, as you are distracted from your memorization efforts.

For our specific study, we wanted to investigate what is called the ‘changing-state effect’. This is the finding that if the distractor sounds you hear while trying to memorize a sequence are random (i.e., they change from item to item), they have a bigger negative impact on memory for the sequence than if the distractor sounds are consistent (i.e., they don’t change, such as a, a, a, a). So, for our design, we included a changing condition (distractor sounds are mixed, such as l, h, r, m), a steady condition (distractor sounds are consistent), and a quiet control (where participants did not hear a distractor).
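To make the three conditions concrete, here is a minimal sketch of how trials for a task like this could be constructed. The letter pools, list length, and distractor tokens are illustrative assumptions, not the stimuli used in the actual study.

```python
import random

TO_BE_REMEMBERED = list("BFHJKLMQRS")   # illustrative visually presented items
DISTRACTOR_POOL = list("lhrmptgn")      # illustrative spoken distractor tokens

def make_trial(condition, list_length=7):
    """Build one serial-recall trial for a given auditory condition."""
    memory_list = random.sample(TO_BE_REMEMBERED, list_length)
    if condition == "changing":
        # Changing-state: the distractor token changes from item to item.
        distractors = random.sample(DISTRACTOR_POOL, list_length)
    elif condition == "steady":
        # Steady-state: the same token repeats (e.g., a, a, a, a).
        distractors = [random.choice(DISTRACTOR_POOL)] * list_length
    else:
        # Quiet control: no distractor sound is played.
        distractors = [None] * list_length
    return {"condition": condition,
            "memory_list": memory_list,
            "distractors": distractors}

# Example: build one trial per condition
for cond in ("changing", "steady", "quiet"):
    print(make_trial(cond))
```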

Our question was actually quite simple: we've done a lot of this work in the past in the US, in the UK, and in Germany, but what happens if we run it online?

How did you ensure that participants had the correct setup for your study?

For us, it was critical to ensure that participants had the right setup as we were specifically studying audio distraction. So, if a participant was in a noisy café, or on the bus, our experiment likely wouldn’t work.

For the lab-based group, it was easy to ensure the correct procedure because we had a researcher there who could monitor participants and see that they were wearing headphones. For the other two groups, we used a software program that could tell whether a participant was wearing headphones or using speakers; we wanted them to wear headphones. The software plays a series of white-noise bursts, one of which is quieter than the other two. If you're wearing headphones and you're in a quiet environment, you can very easily hear which one is quieter. This allowed us to include only those participants who were actually wearing headphones and were in an environment conducive to our study.
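As a rough illustration of the logic behind that kind of screening (not the actual software the team used, and with illustrative levels and thresholds), each trial presents three noise bursts, the participant picks the quietest one, and a participant passes only if they are correct on most trials.

```python
import random

def make_headphone_trial(n_bursts=3, attenuation_db=-6):
    """One screening trial: n_bursts of white noise, one quieter than the rest.
    Returns the index of the quiet burst and the per-burst levels (relative dB)."""
    quiet_index = random.randrange(n_bursts)
    levels_db = [attenuation_db if i == quiet_index else 0 for i in range(n_bursts)]
    return quiet_index, levels_db

def passes_check(responses, answers, min_correct=5):
    """Pass the screen if the participant identifies the quiet burst on at
    least `min_correct` trials (the threshold here is illustrative)."""
    correct = sum(r == a for r, a in zip(responses, answers))
    return correct >= min_correct

# Example: six trials, and a participant who answers all of them correctly
answers = [make_headphone_trial()[0] for _ in range(6)]
print(passes_check(answers, answers))  # True
```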

What did you find?

In terms of what we found, the largest effect (i.e., the largest difference between the effect of sounds that changed and sounds that were steady) was found in the psychology students regardless of modality, in the lab or online. But that same effect, albeit slightly smaller, was also found in the online Prolific sample. The effect was clearly there and clearly significant. Even though the difference was a little smaller in the Prolific sample, it was still plenty big enough.

So, if you're looking for the largest possible effect, it was observed in the psychology students, but the difference between the three modalities was small. However, there are many reasons why that could have been happening, including the characteristics of the participants or the fidelity of their equipment.

What was great to see was that the effect is so robust that even if it's not measured under absolutely pristine conditions, it's still there and measurable! These data show us that this effect can definitely be studied online, which is a game-changer for this field of research, as it allows us to open up this type of work to entirely new demographics and much larger samples.

In your discussion, you say the following: “Perhaps somewhat counterintuitively, our data suggest that careful inspection and preprocessing of the data from online testing, guided by extensive exclusion criteria based on participants’ self-reports, may be ineffective (or even counterproductive) in improving online hypothesis testing. One may thus refrain from excluding large proportions of data prior to data analysis even when analyzing the data of online studies.” I was wondering if you could expand on this statement?

The practical realities of research are that sometimes it costs money, and different research teams or researchers who are at different points in their careers may have access to different amounts of money or resources. And so from an extremely practical standpoint, the more participants you have to pay for, the more your study is going to cost.

What we found here was that the effect we are interested in is so stable and robust that it remains even with slight imperfections in your sample. So, if making exclusions based on software and hardware fidelity, for example, doesn't benefit your data, how can you justify the cost of recruiting more participants to replace the ones you excluded?

What are the key recommendations that come from this work?

For researchers, I think the key takeaway is that the technology that's available today has revolutionized what we can do with online audio studies. For example, having a headphone check is no longer difficult to include in your research, and it really has changed our ability to have trust in our data. You used to have to set up an expensive test proctoring program that would take control of the participants' computers to check their hardware, but not anymore!

Also, these data show that using online samples to study auditory distraction is a viable option. This expands our ability as researchers to sample larger swathes of the population and to start to determine when these effects are present, and when they might not be.

But there are also practical implications for the public. This study showed that when the sounds you hear involve the same type of processing as the task you are trying to complete, they have a detrimental effect on that processing. So, for example, if you’re an art student and you're painting, maybe listening to music with lyrics won’t interfere with your performance, but if you’re writing, doing math, or doing anything where you're verbally thinking about words, then listening to music with lyrics will have a negative effect on your ability to complete that task. So in those cases, it is better to listen to something like white noise.

Do you have plans to follow this line of research?

This is still under discussion between me and my co-authors. Currently, I have two graduate students who are exploring a different direction, so this particular project has not been revisited.

However, I would be interested in further exploring the nuanced factors that influence the slightly smaller effect observed in the online panel. This could help us better understand the implications of different types of training and how they affect cognition.

Where can our readers follow your work?

I’m on Twitter (@lsuemily), or they can visit our lab website.

This research was carried out using the Prolific platform and our reliable, highly engaged participants. Sign up today and conduct your best research with our powerful and flexible tools.