By Dr. Ken Broda Bahm:
For those of us in the social sciences, data is our currency. And of course, we would prefer that our window into attitudes and trends gives us an unbiased view. The reality, though, is that the “how you collected it” matters as much as the “what you collected.” A simple survey, for example, can be administered by an interviewer (as in a telephone survey), or it can be self-administered by the respondent (as in an online survey). That latter “self-service” mode isn’t new, of course, since a paper survey is also self-administered. But as online surveys swiftly displace the traditional telephone survey, the question of the influence exerted by the way the data is collected, the “mode effect,” is becoming more important. And to the extent that litigators rely on that kind of data (for venue motions, community attitude surveys, focus groups, and mock trials), it is a question that matters to litigators as well.
A new Pew Research Center investigation focuses on that difference between telephone and online data collection. Looking at 3,003 survey respondents who answered the same questions either by telephone (interviewer-administered) or online (self-administered), Pew concluded that mode differences are “fairly common, but typically not large.” The mean difference between the answers obtained via the two methods averaged 5.5 percentage points across a broad set of 60 questions. That difference is nothing to sneeze at, and it is worth noting that on some questions the mode difference ran as high as 18 percentage points. That by itself is a big deal. But the more important finding is that the differences weren’t random; they followed a pattern. While we are tempted to ask which mode, telephone or online, yields the true answer and which is skewed, there is probably no good answer to that question. Instead, researchers and those who rely on that research need to keep those differences in mind. In this post, I will report on the Pew research on the mode effect and draw out some implications for litigators, including some advice on the differences between oral voir dire responses (interviewer-administered) and supplemental juror questionnaire responses (self-administered).
The Pew ‘Mode Effects’ Study: Self-Service Equals Lower Social-Desirability Effects
Participants in Pew’s study were randomly assigned to take the same survey either by telephone or online. By holding constant the questions, the demographics of the respondents, and other survey conditions, Pew was able to statistically attribute the remaining differences in how the two groups answered to the collection mode. Here is what they found:
Compared to those answering questions by telephone (interviewer-administered), those answering the questions online (self-administered) were…
- Less likely to report satisfaction with family life and social life.
- Less likely to say gays and lesbians, Hispanics, and blacks face a lot of discrimination.
- Less likely to rate their community or their health as excellent.
- Less likely to say they often talk to their neighbors.
- Less likely to say they frequently go to church.
- More likely to report “very unfavorable” views on specific political figures.
- More likely to report personally needing food or medical care.
Looking at that list, a clear pattern emerges: All of these differences can be explained by social desirability bias, the tendency in some contexts to lean toward the answer that seems more socially appropriate or expected. When you are talking to an interviewer, even one who is just reading off the questions and the response options, that interviewer is still a social reminder, a representative of the public. Of course, the online or paper survey taker also knows that someone will read the responses, but because that someone is not on the other end of the phone line, the reader is less salient.
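To make the mechanics concrete, here is a minimal sketch in Python of how a mode-effect comparison of this kind can be computed: randomly assign respondents to a mode, tabulate each question’s answers by mode, and average the absolute percentage-point gaps. Everything in it is invented for illustration; the question labels, the 50 percent base rate, and the 6-point social-desirability bump for phone answers are assumptions, not Pew’s data or method.

```python
import random

random.seed(1)

# Invented question labels: stand-ins for items like the ones in the list above.
QUESTIONS = ["satisfied_family_life", "community_excellent", "talks_to_neighbors"]

def simulate_answer(mode):
    """Return True for a 'yes' answer. The 6-point bump for phone answers is
    an assumed social-desirability effect, not an estimate from Pew."""
    base_rate = 0.50
    bump = 0.06 if mode == "phone" else 0.0
    return random.random() < base_rate + bump

# Randomly assign 3,000 simulated respondents to a collection mode.
modes = [random.choice(["phone", "web"]) for _ in range(3000)]

def pct_yes(mode):
    answers = [simulate_answer(mode) for m in modes if m == mode]
    return 100.0 * sum(answers) / len(answers)

# Per-question gap in percentage points, then the mean absolute gap:
# the same kind of summary as Pew's 5.5-point average across 60 questions.
gaps = [abs(pct_yes("phone") - pct_yes("web")) for _ in QUESTIONS]
print("Mean mode difference: %.1f percentage points" % (sum(gaps) / len(gaps)))
```

With those assumed numbers, the printed average lands in the neighborhood of Pew’s 5.5-point figure, but that is an artifact of the chosen bump, not a replication.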
What That Means in Court
Because litigators rely on community attitude and change-of-venue surveys, the Pew results are important. To the extent that the questions probe for answers that might be less socially desirable (e.g., asking someone to admit to negative attitudes toward a party or group, or to acknowledge having prejudged a case without hearing all the evidence), litigators may see more honest responses from a self-administered survey.
In addition, it helps to remember that oral voir dire is a form of interviewer-administered data collection, while supplemental juror questionnaires are self-administered. The Pew research provides another reason to ask for a questionnaire in addition to oral voir dire. That mode-effect difference means that the supplemental juror questionnaire is likely to provide data that is less constrained by the panelists’ implicit expectations about the socially preferred answer.
Another Difference: Recruited Versus Volunteered
It is predictable that the Pew research will be used by online data collectors, particularly the sentence reporting that when using online rather than telephone data collection, the mode-effect difference is “typically not large.” But that glosses over a critical question: Where do these online data collectors get their respondents? Pew drew on members of its own vetted panel, the “American Trends Panel”: individuals who had taken prior surveys on the web and who, for this study, were randomly assigned to respond by web or by phone. In other words, Pew knew its respondents to begin with. Repeat survey takers, or “frequent flyers,” pose one problem, but the greater problem is likely the unknowns who click an online link, or the professionals who fill out as many surveys as possible in exchange for cash and prizes. Those people will differ substantially from the general population. So one important caveat is that the Pew data does not provide a general defense of bad sampling techniques. Those who are recruited for a survey (for example, through random digit dialing) will differ from those who volunteer, and a stratified, demographically representative sample will differ from one assembled by the luck of the draw.
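To see why recruitment matters, here is a second minimal sketch, again in Python, contrasting a luck-of-the-draw, opt-in sample with a stratified draw. The population mix (40 percent “frequent flyers”) and the volunteer rates are invented for illustration; this is not a description of any actual panel or vendor.

```python
import random

random.seed(2)

# Invented population: 60% infrequent survey takers, 40% incentive-chasing
# "frequent flyers." The labels double as the stratum names.
population = ["infrequent"] * 600 + ["frequent"] * 400

# Opt-in, luck-of-the-draw sampling: assume frequent takers are far more
# likely to volunteer, so the sample drifts away from the population mix.
volunteers = [p for p in population
              if random.random() < (0.9 if p == "frequent" else 0.1)]

def stratified_sample(pop, size):
    """Draw from each stratum in proportion to its share of the population,
    the kind of correction a well-designed recruit builds in."""
    sample = []
    for stratum in set(pop):
        members = [p for p in pop if p == stratum]
        k = round(size * len(members) / len(pop))
        sample += random.sample(members, k)
    return sample

recruited = stratified_sample(population, 200)

for name, sample in [("Volunteered", volunteers), ("Recruited", recruited)]:
    share = 100.0 * sample.count("frequent") / len(sample)
    print("%s: %.0f%% frequent takers (population: 40%%)" % (name, share))
```

In a typical run, the volunteer sample comes back more than 80 percent frequent takers while the stratified recruit matches the population’s 40 percent, and that gap, not the collection mode, is the larger threat to validity.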
The bottom line is that, at this stage, neither telephone surveys nor online surveys are the gold standard, and neither is to be avoided at all costs. Some might assume that telephone surveys are better, if only because they are more expensive. But the social desirability dynamic provides a good reason, in many settings, to prefer a self-administered measurement, whether that means online data collection or a paper-and-pencil test, as long as the respondents are still well-selected. The main consideration is this: How you collect influences what you collect.
______
Other Posts on Surveys:
- Conduct Discovery on Your Trial Audience
- Compare the City and Country Juror
- Account for the Media’s Effect (Even in Civil Cases)
______
Image Credit: 123rf.com, used under license.