
What does the low prevalence effect mean for forensic science? Insights from a forensic psychologist

Andrew Gordon
October 24, 2023

Fingerprint analysis has long been a crucial tool for forensic scientists and law enforcement. As our society grapples with issues of systemic bias and false convictions, the accuracy of forensic techniques like fingerprint comparison is under scrutiny.

While advances in machine learning and AI are enabling new automated methods for fingerprint matching, fingerprint examiners argue that their years of training and experience give them an edge that computers cannot replicate. But how susceptible are these experts to reaching incorrect conclusions?

To find out more about this question, I spoke with Dr. Bethany Growns, a forensic psychologist at the University of Canterbury, about her recent paper “The low prevalence effect in fingerprint comparison amongst forensic science trainees and novices”, published in PLOS ONE.

Thanks for taking the time to talk with me today. Can you give us a brief introduction to yourself and your area of research?

I'm a forensic psychologist. Broadly speaking, I study the intersection of decision-making and forensic science. It's quite a new area; most people don't realize that most forensic science disciplines are actually based on decisions made by humans. I'm interested in finding out what biases forensic scientists, what causes them to make errors, and also what they do well, so that we can reverse-engineer that into tailored training programs.

I did my PhD in Sydney, Australia, looking at the cognitive mechanisms in forensic science decision-making. I then completed postdocs at Arizona State University and the University of Exeter in the UK. I'm now a lecturer at the University of Canterbury in New Zealand, where I'm furthering my research program in forensic science decision-making.

This might seem like a basic question, because intuitively most people feel they know what forensic science encompasses, but I'm not entirely sure I could define it. Could you define what forensic science actually is?

That's a great question, actually. From a conceptual perspective, forensic science is simply science applied anywhere in the criminal justice system. It covers everything from chemistry to crime scene analysis to DNA.

But the type of forensic science that I look at specifically is called forensic feature-comparison. This encompasses the pattern-matching disciplines, a subset of the broader field: for example, fingerprint matching, firearms matching, and face matching.

Today we're going to be discussing your paper that sought to examine the ‘low prevalence effect’ in fingerprint comparison. Before we discuss the other aspects of the paper, could you describe what the low prevalence effect is?

The low prevalence effect is a very robust, basic psychological phenomenon. The original research came from visual search tasks, where participants are asked to look for a target stimulus in an array.

Researchers found that when the target is rare, people are more likely to miss it. So the effect is that rare events are harder to detect than more common ones.

A good applied example of this is baggage screening at an airport. You've got people looking at thousands of bags every day, scanning for weapons, but weapons in bags are actually very rare. A whole series of studies shows that when what you're looking for is rare, you're more likely to miss it, meaning that a baggage screener may “miss” a weapon in a bag precisely because weapons appear so rarely.
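To make the effect concrete, here is a minimal simulation sketch (not from the paper) of one common account of the low prevalence effect: observers shift their decision criterion toward the frequent response, so rare targets get missed more often. The sensitivity (d') and criterion values below are illustrative assumptions, not fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_trials, prevalence, criterion, d_prime=2.0):
    """Equal-variance signal detection observer.

    Each trial is target-present with probability `prevalence`. The
    observer samples evidence (noise: mean 0, signal: mean d') and
    responds "present" when the evidence exceeds the criterion.
    """
    is_target = rng.random(n_trials) < prevalence
    evidence = rng.normal(np.where(is_target, d_prime, 0.0), 1.0)
    said_present = evidence > criterion
    miss_rate = np.mean(~said_present[is_target])
    false_alarm_rate = np.mean(said_present[~is_target])
    return miss_rate, false_alarm_rate

# Illustrative criteria: under low prevalence, observers are assumed to
# shift their criterion toward responding "absent" (one common account
# of the low prevalence effect). These values are assumptions, not data.
for prevalence, criterion in [(0.50, 1.0), (0.02, 1.8)]:
    miss, fa = simulate(200_000, prevalence, criterion)
    print(f"target prevalence {prevalence:>4.0%}: "
          f"miss rate {miss:.1%}, false alarm rate {fa:.1%}")
```

Running this shows the miss rate climbing sharply in the low-prevalence condition even though the simulated observer's underlying sensitivity is unchanged.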

Can you talk to me about what led you to write this current paper and where the idea came from?

It's an area I had done some work on while I was a postdoc at Arizona State University. Essentially, that earlier work had looked at how base-rates bias fingerprint analysis. The idea here was that if you change the base-rates of “matching” and “non-matching” fingerprints (i.e., you make them more or less prevalent), then you may find that lower prevalence leads to less accurate identification. This is an area that has been researched thoroughly in face-matching, but not in fingerprint matching.

Our data showed that base-rates did have an impact on novice participants' accuracy: they were less accurate with “non-matching” fingerprints when those were rare. With this latest paper, we wanted to replicate that finding and extend it by examining whether the effect persisted in participants who had received forensic science training.

Can you tell me a little bit about how you designed the study?

We created a study on Qualtrics in which participants were asked to compare pairs of fingerprints and decide whether they matched. We tested novice participants recruited through Prolific, as well as forensic trainees recruited from various forensic organizations and forensic training programs. In total, we had approximately 200 participants, 100 in each group, and we tested whether both groups were susceptible to the low prevalence effect.

We also randomly assigned participants to one of two conditions: a high-prevalence condition, in which 90% of the trials they saw were matches, or a control condition, in which matches and non-matches were evenly distributed.

Finally, we wanted to test whether specific strategies helped to ameliorate the low prevalence effect. Half of the participants were directed to use a ‘feature comparison’ strategy, looking at specific areas of each fingerprint to perform the comparison. This strategy has been shown to be effective in face-matching, but had never been tested in a fingerprint-matching context.
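As an illustration of the design described above, here is a minimal sketch of how one might generate randomized trial lists for the two prevalence conditions. The trial count and condition names are hypothetical placeholders; the paper specifies the actual parameters.

```python
import random

def make_trial_list(n_trials, match_proportion, seed=None):
    """Build a shuffled trial list with a fixed proportion of 'match' trials.

    Each trial is just a label here; in the real study each trial would
    present a pair of fingerprint images to compare.
    """
    rng = random.Random(seed)
    n_match = round(n_trials * match_proportion)
    trials = ["match"] * n_match + ["non-match"] * (n_trials - n_match)
    rng.shuffle(trials)
    return trials

# Two conditions mirroring the design described above: high prevalence
# (90% matches) vs. an even 50/50 control. The trial count of 80 is a
# placeholder; the paper specifies the actual number of trials.
conditions = {"high_prevalence": 0.9, "control": 0.5}

assigned = random.choice(list(conditions))  # random condition assignment
trials = make_trial_list(80, conditions[assigned], seed=42)
print(assigned, trials[:10])
```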

Is there any data showing how good novices are at this task, as a baseline?

My PhD focused on this area and showed that there is a big benefit to training in this type of matching task.

I want to preface this by saying that every forensic scientist who uses subjective human judgment to make decisions makes errors. But nearly two decades' worth of studies have shown that fingerprint examiners are significantly better than novices at fingerprint matching.

Trained fingerprint examiners are much more accurate at fingerprint matching than novices. They aren't perfect, but fingerprint examination is one of the more reliable subjective forensic sciences, especially compared to areas such as bite mark analysis or bloodstain analysis.

What were your findings?

We replicated the low prevalence effect. In the control condition, participants showed the error rates we typically see in novices. But when matches were highly prevalent, meaning non-matches were rare, participants were significantly more likely to make false alarms, i.e., to identify a non-matching pair as a match.

When looking at the two groups of participants (trainees vs. novices), we saw the exact same pattern of data. Both forensic science trainees and novices were more likely to make a mistake when matching pairs were common. It is very important to note, however, that the trainee sample could not be considered ‘experts’ in fingerprint matching. It's very, very hard to recruit expert practicing professionals, so this gap in expertise may explain why the trainees didn't perform better than the novices.

Unfortunately, instructing participants to use a feature comparison strategy did nothing to mitigate the low prevalence effect.

What real-world implications do you think this has for both forensic casework and also other areas?

One of the issues in real-world fingerprint matching is that we don't actually know what base rate professionals experience. For example, a professional examiner may see far more matching pairs than non-matching pairs in their day-to-day work, and if so, they may be more likely to identify a non-matching pair as a match. In the worst case, this could lead to an innocent person being found guilty of a crime and sent to jail, or worse. However, it's impossible to quantify the actual base-rates, because the ground truth of whether crime-scene fingerprints match can never truly be known.

Our work suggests a need to start inserting more known non-matching cases into examiners' casework, so they have more exemplars of what a non-match looks like. This should lessen the chance of an incorrect match judgment across the board.

However, in the real world that's very hard to do. Most forensic laboratories across the world are backed up with casework, and in terms of money and time it would be hard to add a lot of non-matching samples to that casework. So while this suggestion works from a theoretical point of view, it’s unclear whether it could transfer to a real-world situation easily.

Do you have any plans for future research on this topic?

We want to continue this line of work by recruiting expert participants to see if the effect persists even at that level of training. These participants are very difficult to source, though, so we need to do some work to locate and recruit them.

There is also evidence from visual search tasks that intermittent bursts of non-match prevalence can reduce the low prevalence effect. So, this may be worth testing in fingerprint-matching. Similarly, there's some research showing that when you're being monitored you're less likely to be susceptible to the low prevalence effect. But again, that's in visual search, and it hasn't been tested in any kind of pattern-matching setting.

Where can our readers follow your work?

I regularly post on my Twitter account (@BethanyGrowns). And I have a website where readers can stay up-to-date with my research, and also find out if they are a “super-matcher” (someone who is significantly better at fingerprint matching than the rest of the population).

This research was carried out using the Prolific platform and our reliable, highly engaged participants. Sign up today and conduct your best research with our powerful and flexible tools.