Your Trial Message

(formerly the Persuasive Litigator blog)

Don’t Ride with ‘Frequent Flyers’ in Your Mock Trial Research

By Dr. Ken Broda-Bahm:

So, you’re conducting a mock trial and eyeballing the participants as they check in. Gradually, it strikes you: Some seem to know the drill a little too well. Later in the day, during deliberations and interviews, it is all the more clear: Some have the wide-eyed and uncertain look of actual jurors, while others appear to be veterans, familiar with the research norms and the facilitator’s expectations. The latter group may be what we call the “frequent flyers”: those who have participated in many focus group projects and see such participation as an important source of income. They are also a group that, for many reasons, you want to avoid. Since the entire point of conducting small-group legal research like mock trials or focus groups is to hear from individuals who are substantially similar to your potential jurors, relying on these research veterans skews your results and reduces the utility of the exercise. Instead of hearing from a true cross section of potential jurors, you are hearing from people who are more like professional survey takers.

Lawyers, especially those who are trying to cut costs in pretrial research, might be prone to believe that feedback is feedback and the reactions of any available warm bodies ought to be enough. But that could not be further from the truth. The choice to engage in jury research is already a kind of compromise to the extent that you are probably not bringing in the kind of numbers that Gallup would rely on. To jeopardize the representativeness even further by using ringers as participants is definitely a bad idea. In this post, I lay out the case for avoiding “frequent flyers” and other opt-in participants and, instead, basing your research on those who are recruited using the most systematic and randomized techniques available.

A Big Problem (That May Get Bigger)

There is no question that the urge to use canned research participants is driven by economic considerations, since randomized recruiting takes more time and is somewhat more expensive. Certainly it makes sense to look for ways to trim mock trial and focus group research costs so that the research is accessible across a broader spectrum of cases and clients. Messing with the participants, however, cuts at the heart of what makes the research useful. Nevertheless, the practice of drawing respondents from a survey company’s database of volunteers seems to be very common. While most responsible researchers will still try to find ways of screening out the frequent flyers, the practice dramatically increases your chances of having mock jurors who are fundamentally dissimilar to your eventual jurors.

As research increasingly moves online, the problem can be expected to grow. In a recent article in The Jury Expert, Brian Edelman wrote about the special challenges the online format poses to reliability. “Most online surveys use a nonprobability sampling technique based on ‘opt in’ panels,” Edelman writes, and “when the individuals who join such panels are different in important ways from those who do not, samples are not representative of the jury pool.” Based on a task force report produced by the American Association for Public Opinion Research (AAPOR, 2010), there are some key differences that are troubling from a research perspective. Edelman, for example, reports that many online survey vendors, driven by increasing demand for survey takers, are recruiting in less traditional ways, incentivizing participants not with plain old cash, but with online gaming credits. A research participant who is there just to build a stock of imaginary animals for Farmville is likely to differ from the typical juror in some important ways, and because they’re gamers, the idea of gaming the system is not out of the question. Edelman quotes one message posted on an online message board for “Survey Crack Heads,” or hard-core survey takers:

“Yo guys listen up!!! … ALLWAYS MARK YES IN THE FIRST QUESTIONS cuz if not, you will not be able to qualify for the survey” (sic).

That may or may not be an extreme example, but there are some important differences between frequent research participants and the general population.

What Sets Frequent Flyers Apart? 

If using database participants and other volunteers were just a matter of getting the same type of people more quickly and easily, then it wouldn’t be a problem. But it is a problem, because these participants differ in a number of ways.

Demographically and Psychologically Distinct

As you might expect, those who step forward and say, “I’d like to be hired for research studies” are not a demographic cross section of the population. The AAPOR report mentioned earlier (AAPOR, 2010) summarizes, “A large number of studies have compared results from surveys using nonprobability panels with those using more traditional methods, most often telephone. These studies almost always find major differences.” The report goes on to say that differences may be due to the way the surveys are administered (computer versus phone) or they may be due to differences in the samples. There is a need for recent and mock-jury-specific research in this area, but I did find two studies suggesting that the sample may be the dominant part of the problem. Hwang and Fesenmaier (2002) looked at differences between those who were willing or unwilling to provide online contact information – often a precondition to becoming part of a recruiting or online survey database – and found dramatic differences in demographics, behavior, and psychological characteristics. Another study (Ganguli et al., 1997) directly compared random recruits to volunteers for a community health study and found educational, cognitive, gender, and health behavior differences. So the intuition that those who step forward differ from those who are picked does have some research behind it. The recruiter could respond to these differences by using demographic quotas to ensure the recruited sample matches the population, but that won’t catch the behavioral and psychological differences that also set the volunteers apart. And, more practically, there are other problems.

Less Motivated (and Less Likely to Show)

One might presume that those who volunteer or work with some regularity as research participants would be more motivated and reliable; they’re pros, after all. But based on recruiting experience, the opposite seems to be the case. Anecdotally, we have heard from our own recruiters that random recruits tend to have a slightly higher show rate than database respondents, perhaps because those on a database feel that they will have other opportunities in the future. If the first-time flyers are not only more representative but also more reliable, then that has a direct effect on the quality of your research.

More Savvy (and More Likely to Role Play)

Though they may be a little less likely to show, frequent flyers may try harder to get in, and that presents different problems. Nonrandom recruits, recruiter Dale Hanks notes, “sometimes are also much more aggressive in trying to qualify. They have experience with the screening process and through the years may have been through dozens of screeners.” That creates a savvy that could be dangerous to your confidentiality and screening needs. “We’ve even heard them ask the agents to tell them what the correct answer is so they don’t disqualify.” Once they get past the door, they may also participate differently. Someone with prior experience may develop a sense of what the facilitators are looking for, and in a group discussion that kind of person could be prone to disagree just based on the perception that the researchers like it when you mix things up a bit. If they’re thinking, “How can I get this gig more often?” then a desire to please the researcher could end up infecting the research results.

The Ineffable Difference: Self-Selection

Ultimately, though, the key problem with repeat responders and others from opt-in panels is something that may not be easy to measure or define: They’re simply different because they volunteered. That isn’t what happens in actual jury duty, where the average citizen’s routine is interrupted by a summons. Jurors don’t choose the duty; it chooses them. Now, mock trials can’t mandate attendance the way a court can, but to me it still makes a difference that participants are being found rather than stepping forward. Even if participants turned out to be the same on every metric we could measure, that difference in self-selection would still be a disqualifier for me.

So How Do You Avoid Frequent Flyers? 

Probably the most common way consultants have of weeding out the frequent flyers is screening: asking, “Have you participated in a mock trial or a legal focus group before?” Based on a question like that, many consultants will take only research “virgins” who have never participated before. That solves some problems but not others, and the savviest frequent flyers may even learn to downplay their prior experience in order to preserve their ability to participate. The best option is to follow the court’s practice and draw randomly from the jury-eligible population in your venue. As Dale Hanks argues, “Simple science shows the random group is the closest simulation which obviously provides the most accurate science in your data.” The main reason consultants and clients give for not recruiting with a random method like random digit dialing is cost. But Hanks, having routinely recruited using both random and nonrandom methods, shares the experience that the “difference in the recruiting costs rarely exceed $1000,” and such a relatively minor portion of your trial preparation budget is generally going to be worth it in order to get the closest simulation to the actual venue.

Random digit dial recruiting can also work for online research. Even though the project takes place via the internet, participants can still be selected and screened via telephone. That helps to ensure that your mock jurors actually live in your venue, and it preserves the benefit of personal contact during the critical screening stage.

The bottom line is a conclusion that applies to many of the choices you make in conducting pretrial research: Your results are only as good as your methods. Garbage in, garbage out.

______


AAPOR (2010). Report on Online Panels. Prepared for the AAPOR Executive Council by a Task Force operating under the auspices of the AAPOR Standards Committee. URL: http://www.aapor.org/AM/Template.cfm?Section=AAPOR_Committee_and_Task_Force_Reports&Template=/CM/ContentDisplay.cfm&ContentID=2223

Hwang, Y. H., & Fesenmaier, D. (2002). Self-selection biases in the Internet survey: A case study of a tourism conversion survey. Unpublished manuscript.

Photo Credit: cogdogblog, Flickr Creative Commons