Your Trial Message

(formerly the Persuasive Litigator blog)

Know the Perils of Polls

By Dr. Ken Broda Bahm:

Litigators are interested in public opinion because cases are played out on that stage. Attitudes about lawyers, about corporations, and about case-specific issues like consumer product responsibility serve as an important backdrop and starting point for jurors’ opinions and reactions to a wide variety of civil cases. For that reason, trial teams will often investigate attitudes by measuring the initial and general opinions held by participants selected for mock trials or focus groups, and by conducting more sophisticated public opinion polls when better chances for statistically significant conclusions are sought. We are currently at a high-water mark of attention to the measurement of attitudes, since there is no time at which public opinion polling receives as much focus as it does during a political campaign. But anyone paying close attention to the daily drama of “up one, down two” attitude surveys could be forgiven for wondering, “What is up with the polls?” and, further, what that says about our ability to accurately and reliably measure public opinion.
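
As a reminder of what “statistically significant” buys you, here is a minimal Python sketch of the standard margin-of-error formula for a proportion drawn from a simple random sample. The 60 percent agreement figure and the sample sizes are hypothetical.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a sampled proportion
    (z = 1.96 corresponds to 95% confidence under a normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 400-person venue survey where 60% endorse an attitude:
print(f"+/- {margin_of_error(0.60, 400):.1%}")   # roughly +/- 4.8 points

# Quadrupling the sample size only halves the margin of error:
print(f"+/- {margin_of_error(0.60, 1600):.1%}")  # roughly +/- 2.4 points
```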

Especially as we move toward the end of the campaign cycle, the volatility of the polling in the wake of the presidential debates has challenged the conventional wisdom that the debates don’t change election results. Taking advantage of this brief moment in the sun for survey research, this post will use the example of polling on this year’s presidential race to consider the most important messages that attorneys should bear in mind when interpreting surveys to assess their case or venue.

Polls are obviously catnip for the media. Analyzing positions, discourse, and policy takes time and a healthy attention span. But here in the poll numbers, one finds a single, simple, and easy-to-communicate number that tells you who is winning and who is losing at any given moment. Based on that, it is no surprise that the myriad polls have become a centerpiece of media, campaign, and public attention. With the aid of current sampling methodologies, the polls tend to be pretty accurate as well, and as surely as falling leaves are a sign of fall, when one campaign or the other starts telling you that polls aren’t important, it’s a pretty clear sign that the campaign in question is losing.

At the same time, the closeness of the races this year has brought into sharp relief a number of issues that prevent polls from being used or understood in simple terms. In this post, I’m not going to try to unravel the details of polling at the electoral level…because they’ll change in the next hour (and Huffington Post has a great interactive map, reminding us that what matters is not national opinion but state-level opinion). Instead, what I’d like to do is call out two points where the challenges of tracking the national race hold important lessons for litigators who rely on public opinion in assessing or making their own case at trial.

Know Your Sample

As we’ve written before, the sample and the methods used to get it lie at the foundation of your ability to get results that can be generalized from those you surveyed to those you didn’t. In presidential polling, it has long been known that the attitudes of the general population, or even the voting-eligible population, don’t matter. Instead, the Holy Grail is the “likely voter.” In order to avoid skewed results, pollsters need to exclude those who have an opinion but aren’t likely to pull a lever or mail a ballot by election day. So they use various complex and often undisclosed methods of figuring out who those likely voters are. Indeed, according to some analyses, it is the differences in the “likely voter” methodologies that account for many of the differences between polls released by, say, Gallup, and those released by Pew or by the media networks.
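
To see how much the screen itself matters, consider a minimal Python sketch with entirely hypothetical respondents and cutoffs: two plausible “likely voter” screens applied to the same raw interviews can return noticeably different toplines.

```python
# Hypothetical respondent records; the field names and values are invented.
respondents = [
    {"choice": "A", "voted_last_election": True,  "enthusiasm": 9},
    {"choice": "B", "voted_last_election": True,  "enthusiasm": 4},
    {"choice": "A", "voted_last_election": False, "enthusiasm": 8},
    {"choice": "B", "voted_last_election": True,  "enthusiasm": 7},
    {"choice": "B", "voted_last_election": False, "enthusiasm": 3},
]

def topline(sample, candidate="A"):
    """Share of a screened sample supporting the given candidate."""
    return sum(r["choice"] == candidate for r in sample) / len(sample)

# Screen 1: past behavior -- only prior voters count as "likely."
past_voters = [r for r in respondents if r["voted_last_election"]]

# Screen 2: stated enthusiasm -- only respondents at 7+ on a 10-point scale.
enthusiasts = [r for r in respondents if r["enthusiasm"] >= 7]

print(f"Past-voter screen: A at {topline(past_voters):.0%}")  # A at 33%
print(f"Enthusiasm screen: A at {topline(enthusiasts):.0%}")  # A at 67%
```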

For litigators, the key task is to measure not just the general attitudes within your venue, but the general attitudes of likely jurors within your venue. At a minimum, that means jury-qualifying the respondents to your survey. A better method is to locate data, as we have for USDC Denver and a variety of other venues, telling you not only the demographic breakdown of those who are called for jury duty, but also the demographic breakdown of those who show up and serve. In other contexts, the need to know our sample has meant hardship-qualifying survey respondents to make sure that we are talking only to that relatively small subset of individuals who would be likely to serve, for example, on a three-month trial.
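
One way to apply that kind of data is simple post-stratification: reweight survey responses to the demographic mix of jurors who actually serve rather than the mix of people who happened to answer. Here is a minimal Python sketch of the idea; every proportion in it is invented for illustration.

```python
# Hypothetical shares: who answered the survey vs. who actually serves,
# plus each group's agreement with a case-relevant attitude statement.
survey_share   = {"18-34": 0.35, "35-54": 0.45, "55+": 0.20}  # respondents
served_share   = {"18-34": 0.20, "35-54": 0.40, "55+": 0.40}  # seated jurors
agree_by_group = {"18-34": 0.70, "35-54": 0.55, "55+": 0.40}  # % agreeing

# Unweighted topline treats respondents as if they mirrored the jury pool:
unweighted = sum(survey_share[g] * agree_by_group[g] for g in survey_share)

# Post-stratified topline reweights each group to its share of seated jurors:
weighted = sum(served_share[g] * agree_by_group[g] for g in served_share)

print(f"Unweighted agreement:      {unweighted:.0%}")  # roughly 57%
print(f"Weighted to seated jurors: {weighted:.0%}")    # roughly 52%
```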

Know What You’re Measuring

The field of statistics calls it the question of “validity”: Are you actually measuring what you think you’re measuring? In the political race, of course, what pollsters need to focus on is voting behavior. Looking at Romney’s substantial bounce after the first debate, as well as the more moderate bounce for Obama just now emerging after the second debate, it is fair to wonder, “Are there really that many voters who are picking a candidate or switching sides based on who appeared more presidential in the last debate?” And, indeed, some commentators have raised the possibility that what is being measured is not voting behavior so much as voter enthusiasm. As statistician Nate Silver writes in his New York Times “FiveThirtyEight” blog, for every ten people polling firms try to contact by phone, even the best firms complete a survey with only one. The assumption that the nine incompletes are like the one completed survey contains some measure of hope. As Silver writes, “The willingness to respond to surveys may depend in part on the enthusiasm that voters have about the election on any given day.” So post-debate bounces for the candidate seen as turning in the better debate performance may reflect a greater willingness of that candidate’s supporters to take part in a survey as much as any actual change in attitude.
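
Silver’s point lends itself to a quick simulation. In the minimal Python sketch below, with invented response rates, underlying support is fixed at 50/50, yet a small post-debate bump in one side’s willingness to pick up the phone produces an apparent multi-point swing with no change in attitudes at all.

```python
import random

random.seed(1)  # make the toy simulation repeatable

def simulated_topline(n_calls, response_rate_a, response_rate_b):
    """Poll a 50/50 electorate in which the two camps complete surveys at
    different rates; return candidate A's share among completed interviews."""
    completes_a = completes_b = 0
    for _ in range(n_calls):
        if random.random() < 0.5:            # the call reaches an A supporter
            completes_a += random.random() < response_rate_a
        else:                                # ...or a B supporter
            completes_b += random.random() < response_rate_b
    return completes_a / (completes_a + completes_b)

# Equal willingness to respond: the poll tracks the true 50/50 split.
print(f"{simulated_topline(100_000, 0.10, 0.10):.1%}")  # close to 50%

# After A's strong debate, A's supporters answer at 12% instead of 10%:
print(f"{simulated_topline(100_000, 0.12, 0.10):.1%}")  # around 54-55%
```

The expected share in the second scenario is 0.12 / (0.12 + 0.10), about 54.5 percent: a visible “bounce” generated entirely by who answers the phone, not by who changed their mind.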

Litigators need to draw similar distinctions when looking at the attitudes measured in mock trials and in surveys. The most important answer to the “what are we measuring” question is to remember that when asking the baseline attitudinal questions that matter to jurors’ views of a trial, we’re measuring attitudes that are abstract, preliminary, and passive. The attitudes are abstract in the sense that they are removed from the context of a specific case. If you find in a survey, for example, that a certain percentage feel that “people need to take more responsibility for their own actions,” that figure may shift dramatically when that issue is framed in a particular personal injury story. In addition, the attitudes are preliminary in the sense that, at best, they reflect a tendency to lean in one direction or the other upon first hearing the story of a case. After reviewing all of the evidence, jurors may be in an entirely different place. Finally, the attitudes we’re measuring are essentially passive, reflecting a tendency to agree or disagree, but not necessarily a tendency to actively advocate or defend an attitude. Because verdicts are the product of deliberation and not just a vote of the panel, we care most about the attitudes that jurors don’t just “have,” but “uphold,” and you can only see that in a mock trial.

Of course, it is still very useful to understand what attitudes are likely to be coming in the courthouse door at the beginning of the case, and filing into the jury room at the end of the case. For that reason, measuring baseline attitudes during a focus group or conducting larger-scale attitudinal surveys within your venue is a wise move that helps you plan for opening statement and voir dire. But the caution from the current election’s poll storm is this: Don’t assume that a survey measures something essential in the population you’re targeting. What you’re getting is a useful snapshot, not an enduring portrait.

______

Image Credit: Twirlop, Flickr Creative Commons