By Dr. Ken Broda Bahm:
Emerging from the dust and debris of a long campaign, there is one man who can claim a clear victory and a solid mandate: Nate Silver. If you don’t know who Nate Silver is, you haven’t been paying attention to the polling leading up to the presidential election. But if you do, then you know that the FiveThirtyEight blogger was close to exact in his prediction of an election result that nearly every other media source considered too close to call. For fans of the quantitative analysis of public opinion, the man is a rock star. In 2008, he correctly predicted forty-nine of fifty states, and last Tuesday he upped his game by correctly predicting all fifty plus D.C. Nate Silver did that by systematically separating actual voting behavior from sampling error and the “house effect” differences between polling organizations. While other analysts looked at the polls, saw that the swing-state difference between the candidates was generally within the statistical margin of error, and called it a “toss-up,” Silver used a proprietary statistical model to run multiple simulations of the election and to quantify the probability of each outcome.
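To make that idea concrete, here is a minimal sketch, in Python, of what a poll-driven election simulation can look like. To be clear, Silver’s actual model is proprietary and far more elaborate; the state names, polling averages, standard errors, and “safe” electoral vote count below are all invented for illustration.

```python
import random

# Invented swing-state numbers for illustration only: each entry is
# (candidate's share of the two-party vote, polling standard error,
# electoral votes). This is not Silver's data or his model.
POLLS = {
    "State A": (0.52, 0.02, 18),
    "State B": (0.50, 0.02, 29),
    "State C": (0.51, 0.02, 13),
}
SAFE_EVS = 237   # electoral votes assumed already locked up (invented)
NEEDED = 270
TRIALS = 100_000

def simulate_once():
    """Draw one plausible election outcome from the polling distributions."""
    evs = SAFE_EVS
    for mean, se, votes in POLLS.values():
        # Treat each state's true vote share as a normal draw around its poll.
        if random.gauss(mean, se) > 0.5:
            evs += votes
    return evs

wins = sum(simulate_once() >= NEEDED for _ in range(TRIALS))
print(f"Estimated win probability: {wins / TRIALS:.1%}")
```

Run enough simulated elections and the share of wins becomes a stable estimate of the probability of victory, a far more informative summary than calling every within-the-margin state a toss-up.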
In other words, Silver was able to focus on the signal, not the noise. That concept, the “signal-to-noise ratio,” compares how much of a transmission, like a cell phone call, is “signal” (the voice you’re trying to hear) and how much is “noise” (that roaring sound you’re trying to hear over). Other forms of research carry the same dynamic: a “signal” of actual opinion or behavior, and the “noise” of idiosyncratic variation. In this post, I’m going to look at a setting for research that is much smaller in scale than Silver’s mammoth modeling project, but one that is just as susceptible to variability and distraction. That setting is the mock trial or focus group research project conducted prior to trial. Specifically, I’m going to share three ideas on keeping your focus on signal rather than noise.
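For readers who like to see the arithmetic, here is a toy illustration of why pooling readings helps pull signal out of noise. The “true opinion” and noise figures below are made up; the point is only that the average of many noisy readings lands closer to the underlying signal than any single reading reliably does.

```python
import random
import statistics

random.seed(1)
TRUE_OPINION = 0.52   # the underlying "signal" we are trying to measure (invented)
NOISE_SD = 0.03       # idiosyncratic variation around it (invented)

# Twenty noisy readings of the same underlying opinion.
readings = [random.gauss(TRUE_OPINION, NOISE_SD) for _ in range(20)]

print(f"one reading: {readings[0]:.3f}")
print(f"mean of 20:  {statistics.mean(readings):.3f}")
# Averaging shrinks the noise: the standard error falls with the square
# root of the number of readings, so the mean sits much closer to the
# true signal than any single reading is likely to.
```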
Nate Silver has recently written a book on the subject, The Signal and the Noise: Why So Many Predictions Fail, But Some Don’t. It is an accessible read, even for the non-statistically inclined. It is also broader in scope than you might expect, concentrating not on the details of statistical models, but on the ever-present human tendency to forecast events. Drawing from examples in weather, business, games, and of course, politics, Silver tracks many of the problems (cognitive bias, information overload, underdeveloped theory, and noisy data) that cause even confident and repeated projections (recent comments from Newt Gingrich, Dick Morris, and Karl Rove come to mind) to be off, sometimes dramatically so.
Applying some of these thoughts to the mock trial or legal focus group context, I see three important takeaways:
1. Know Your Purpose
When conducted honestly, a mock trial is almost never designed to forecast your result at trial. Sure, it can often help you understand the spectrum of possible outcomes, and that can help to guide mediation, but its main purpose is not to predict. Instead, research with mock jurors serves the heuristic function of providing a setting where the team can ask questions, test out approaches to the message, and hear from a representative sounding board. Similar distinctions about purpose apply to the election data, and groups like Pew and Gallup are careful to point out that they’re just measuring opinion at the moment and not, like Silver, forecasting the election. But in the case of the election data, as well as mock trial projects, a clear focus on purpose should, but doesn’t always, guide how the information is interpreted and used. In the aftermath of a mock trial, for example, attorneys and consultants alike can fall victim to giving the project more predictive power than it merits. When the mock trial is your only glimpse of how things might turn out, it can be tempting to see it as a window into the future, but it’s best to remember that the research is a tool, a way of working on your case, not a crystal ball.
2. Use Multiple Groups
One other source that turned out to be accurate in calling all the states in the presidential election is the Huffington Post “Pollster” model. The strength of that source is that it relied on multiple polls and applied weights and averages to minimize the influence of any one poll. Similarly, your focus group or mock trial projects should include multiple jury-sized groups: the more the better, but at least three. And if you are testing different scenarios (e.g., some jurors get to hear from the challenged expert and others don’t), then there should be multiple groups within each condition. The problem with relying on just one group, as is fairly common practice, especially on the plaintiffs’ side of the bar, is that you have no way to tell the difference between findings that emerge as an idiosyncratic effect of the composition of an individual jury and the more reliable results that cut across several groups. In other words, there is no way to separate the signal from the noise.
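As a bare-bones sketch of that averaging idea, consider the snippet below. The numbers are invented, and weighting by sample size is just one simple scheme; the actual “Pollster” model is considerably more sophisticated. The same arithmetic applies whether the readings come from polls or from separate mock jury groups.

```python
# Invented readings for illustration: each tuple is (estimated level of
# support, sample size). Weighting by sample size keeps any one small,
# idiosyncratic reading from dominating the combined estimate.
polls = [
    (0.49, 600),
    (0.52, 1000),
    (0.51, 800),
]

weighted = sum(est * n for est, n in polls) / sum(n for _, n in polls)
print(f"weighted average: {weighted:.3f}")
```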
3. Downplay the Outliers
During the final weeks of the presidential campaigns, the Gallup poll consistently differed from most other polls in putting Mitt Romney well ahead of Barack Obama. That difference made Gallup an outlier and, as it turns out, wrong. Of course, it is possible for the outlier to be right, but on the whole, there is safety in numbers when it comes to competing projections. Models like the Huffington Post’s downplayed the outliers, and that same practice should apply to interpreting focus group or mock trial results. But often, the opposite happens: our attention is drawn to the most vocal mock jurors with the most extreme views. What sticks in the head at the end of the day are the “zinger” statements, the ones that are most unexpected. But what makes these statements the most interesting is also what makes them the least representative. As a practice, it is better to focus on views shared by multiple mock jurors across multiple groups. Those views are less likely to be idiosyncratic and more likely to be representative of the way a future jury might see the case.
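A small, invented example shows why this matters in practice. Given a set of mock juror reaction scores that includes one extreme “zinger” voice, a median or a trimmed mean keeps that single outlier from dragging the summary around, in the same spirit as the poll models that downweighted Gallup.

```python
import statistics

# Invented 1-to-10 reaction scores from mock jurors; the 9.8 is the
# memorable "zinger" voice.
scores = [4.2, 4.5, 5.1, 4.8, 4.4, 9.8, 4.6, 4.9]

print(f"mean (pulled up by the outlier): {statistics.mean(scores):.2f}")
print(f"median (robust to it):           {statistics.median(scores):.2f}")

# A simple trimmed mean: drop the highest and lowest score, then average.
trimmed = sorted(scores)[1:-1]
print(f"trimmed mean:                    {statistics.mean(trimmed):.2f}")
```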
As far as forecasting what is next for Nate Silver, my prediction is that he’ll sell more than a few books, perhaps renegotiate his contract with the New York Times, and be watched even more closely in 2014 and 2016.
______
Other Posts on Pretrial Research Methods:
- Don’t Be Entranced By Statistical Claims From Mock Trial Research
- Know Your Constraints: A Conversation on Mock Trial Design
- Put Your Jury Selection on Steroids by Leveraging Pretrial Research: Lessons from the Barry Bonds Trial
______
Silver, Nate (2012). The Signal and the Noise: Why So Many Predictions Fail, But Some Don’t. Penguin Press.
Image Credit: Michaelstyne, Flickr Creative Commons