Your Trial Message

(formerly the Persuasive Litigator blog)

Teach the Difference Between Science and Junk

By Dr. Ken Broda Bahm:

Even good science can sometimes be a tough sell in the court of public opinion. Take, for example, the moment in Republican Presidential nominee Mitt Romney’s acceptance speech last week where he mocked the President’s concern over rising sea levels. The crowd of delegates in Tampa cheered wildly, though the fact of sea level rise, if not its exact cause, is scientifically uncontroversial. 

In the actual courts, the fate of good scientific information can be just as uncertain, as can the fate of bad science. Even in this age of Daubert, separating the valid from the invalid can be a challenge for both jurors and judges. In the upcoming trial of Fort Hood shooter Major Nidal Hasan, for example, there is a current controversy over the testimony of Evan Kohlmann, a terrorism expert who has classified Hasan as a “homegrown terrorist” on the basis of a six-factor model he developed, which may or may not be falsifiable or replicable. In the more typical case relying on expert testimony, convincing judges and jurors to critically evaluate the methods and the reasoning that undergird research conclusions is often essential. In this post, we’ll take a look at one study showing that jurors are able to identify some, but not all, methodological flaws in research, and draw some conclusions on the best practices for separating the good science from the bad in litigation.

The Study: You Can Only Partially Trust Jurors to Separate the Science from the Junk

As the ultimate evaluators of expert claims once they’re allowed in trial, jurors can often be in the position of deciding whether scientific testimony is or isn’t valid. Two CSU Northridge professors, McAuliff and Duckworth (2010), looked into the question of whether jurors are up to the task. The research builds on previous studies showing a limited ability by jury-eligible participants to identify basic weaknesses in research, namely the lack of a control group. McAuliff and Duckworth wanted to see whether participants could identify more subtle problems such as experimenter bias or a “confound” (more than one factor potentially causing an observed difference). The short description of the results is that, as in prior studies, the potential jurors did assign less credibility to testimony based on a study lacking a control group, but failed to apply discounts to the research containing the more subtle flaws.

The article also looks at the role of publication status (whether research appears in a peer-reviewed publication or not). But instead of finding that publication serves as a tie-breaker in cases of questionable validity, this study found that unpublished status made a difference only in the case of a missing control group, leading that research to be perceived as less rather than more credible. This is a mixed result, yet it points in the direction of a few good reminders for those presenting scientific testimony to a jury or judge.

1.  Teach Jurors the Better Science

Studies such as this one can often be the starting point for the argument that jurors are ill-equipped to be fact finders in highly technical cases such as those that rely on complex scientific testimony. But it must be remembered that McAuliff and Duckworth looked at participants’ ability to discern research flaws on their own, without the benefit of cross-examination, opposing experts, or attorney argument. In the context of trial, the question is not whether jurors are able to apply an intuitive reaction to the science as a default, but whether they can follow explanation and argument in order to differentiate the worse research from the better. On that score, a team of attorneys and experts using the best tools for teaching — examples, analogies, and demonstrative visuals — should be able to bring a jury to a reasonable understanding of even complicated scientific cases. But it is never a matter of simply pointing out the scientific advantages or flaws. Instead, it is a matter of convincing jurors to think about the foundations of the research: what it means and why it matters.

2. When Necessary, Assess Potential Jurors For Analytic Style

When your case depends on a jury being able and motivated to spot scientific weaknesses, there is one step that should come before persuasion, and that is identifying and striking those jurors who are least likely to apply careful reasoning to the task. That isn’t simply a matter of looking at educational level. A background in science, research, or analytic disciplines is helpful, but it is also true that there can be lazy thinkers among the educated, and meticulous thinkers among the relatively unschooled. The most direct route is to look at the venire member’s cognitive style. We’ve written before about “need for cognition” (the tendency to engage in and enjoy effortful thought) as an important trait, and recommended several questions for inclusion in a juror questionnaire. McAuliff and Duckworth (2010) didn’t look specifically at cognitive style, but they do cite previous research (McAuliff & Kovera, 2008; Levett & Kovera, 2008) finding that the individuals who were highest in need for cognition were also the group most likely to discount scientific research suffering from a missing control group. This adds support to the idea that you want to identify and avoid “low effort thinkers” when your case requires fact finders who are willing to cast doubt on particular scientific methods.

3. Don’t Assume Your Judge Will Be Much Better at Judging Science

You might think that complex science would fare best in the hands of a legally trained audience, like a mediator or judge. Legal training, however, rarely includes training in the tools that are necessary to evaluate science. What this means is that a judge fulfilling a gatekeeper role in a Daubert hearing, for example, may not possess enough understanding to fully assess the science. Also cited in the McAuliff and Duckworth article is research (Gatowski et al., 2001) showing that when asked to discuss how they would apply the Daubert standard, only 5 percent of judges understood the notion of “falsifiability” while only 4 percent accurately understood the notion of an “error rate.” Given that both are essential standards derived from the Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) case, this suggests judges may need as much help as jurors. The article also points to research indicating that attorneys may suffer from the same blind spots (Kovera & McAuliff, 2009), suggesting that at all levels of the courtroom, it can be a challenge to understand and apply the knowledge that allows the court to fulfill its gatekeeping role in science.

4. Defend Your Daubert Challenge Against the Claim that “Jurors Will Figure It Out.”

Of course, the ultimate tie-breaker on motions to exclude and Daubert challenges is to say, “let it in, make your attacks, and the jurors will sort it out.” While the McAuliff and Duckworth study doesn’t exclude the possibility that good teaching by attorneys or opposing experts can lead jurors to reliably dismiss the bad science, the study does at least cast doubt on that assumption. For this reason, the study is well worth citing in briefs seeking to exclude research without a valid scientific basis prior to trial.

Of course, apart from the ability to understand science, it helps to understand the motivation as well. In that context, there is one more noteworthy study. According to a review of data collected between 1974 and 2010, those who self-identify as politically “conservative” began the period with the highest trust in science and ended that period with the lowest (Gauchat, 2012). While liberals’ and moderates’ trust in science remained stable, conservatives’ trust declined by fully 25 percent. That stark attitude change reflects the beliefs of some (certainly not all) conservatives, shaped by the issues of evolution and global warming. So that is probably what lies beneath Mitt Romney’s sea level comments at the recent national convention. Still, it is an important reminder that to some citizens and some jurors, science — even good science — isn’t established truth, but is just another argument that can be set aside based on personal experience or individual conviction.

______

McAuliff, B. D., & Duckworth, T. D. (2010). I spy with my little eye: Jurors’ detection of internal validity threats in expert evidence. Law and Human Behavior, 34(6), 489-500. PMID: 20162342

 

 

Photo Credit: gds, Flickr Creative Commons