Your Trial Message

Don’t Look for AI to Pick Your Jury…Yet

By Dr. Ken Broda-Bahm:

I don’t know if anything has ever in such a short time moved from being a pretty esoteric science topic to being a concern on everyone’s lips… but ChatGPT could probably tell me. With literally billions of uses per month, the artificial intelligence-driven chatbot has captured our attention and created a mix of excitement and dread. It has become a cliché to talk about how dramatic the shift will be as the technology moves toward displacing tasks like writing and customer service. It is also a cliché to talk about how the “jury is still out” on what that shift will be. But, speaking of juries, might AI now or soon be playing a role in selecting them? On its face, the technology does seem suited for tasks that require a summary and interpretation of a large volume of information, so maybe.

A recent article in the ABA Journal suggests that there may be a role for AI in quickly finding and processing information on the venire. The piece quotes my colleague Daniel Wolfe, who notes the potential to use algorithm-guided data-mining to move through large volumes of online information. Another colleague, my co-editor Richard Gabriel, however, raises the point that a person’s self-presentation through this online data may or may not match up with the individual’s psychology and group participation, suggesting that there will still be a role for the uniquely human factors of observation and insight. One big problem that isn’t mentioned in the article is the recent news that AI seems prone to simply making up information in legal contexts, as one now-sanctioned New York attorney learned after asking ChatGPT to write a brief that was then submitted without fact-checking. In my view, there are three very broad and very basic hurdles that need to be overcome before embracing a tool like artificial intelligence in a task like voir dire. Writing as a jury selection practitioner and not as an AI expert, I’m aware that these solutions could conceivably be found now, or soon, or perhaps never. But I do believe that the nature of the legal task imposes its own conditions.

Here are the hurdles that I think need to be overcome before we can count on routine and practical use of AI in a task like jury selection.

Verification

The tendency for AI to confidently create and share false information has been observed in a number of contexts, and to me the problem is absolutely fascinating. It is unusual because, as I understand it, even the newly minted experts of AI don’t seem to know why it — to use their words — “hallucinates” in this way. And it isn’t like they can just pop the hood and look at the code (as they would be able to do with pre-AI programming). Obviously, in most cases, including law, you need information that you know to be true, and if AI can invent laws and legal cases (as observed in a previous post) then it could make up a factual background for possible jurors. Without onerous fact-checking, which would seem to defeat the time savings in using AI in the first place, the tool could prove to be worse than useless — at least until there are guardrails that require truthfulness, or at least a known error rate.

Interpretation

The goal in evaluating a potential juror is not just to accumulate information, it is to develop a picture of the person and the ways they may or may not hold a bias when it comes to your case. In the search for information, not everything matters. Indeed, it is fair to say that most of a person’s public expressions are likely to have no practical bearing on how they might see a given case. Every case will have its own high-risk profile, and the task of all researchers — be they human or AI-based — is to rationally and creatively interpret what information might matter in any given litigation.

Bracketing

There is another way in which ‘not everything matters’ when it comes to jury selection: protected categories. Based on a sordid history of the racist use of peremptory challenges, particularly in criminal cases, there are now several categories that cannot legally be a reason for a strike. Based on the Batson case and its lineage, these categories include race, sex, sexual orientation, and in some venues, other traits as well. Now, those of us with a social science background generally welcome those prohibitions, because demographics don’t tend to be reliable predictors of the attitudes that matter the most in determining juror reactions. Historically, however, some commercially marketed products have factored demographics into their data-based analyses of potential jurors, in my view making their use highly susceptible to challenge. When using AI, good intentions would not be enough: the burden would be on the user to be able to explain that their selections are not driven by protected demographic factors.

When it comes to using AI for legal tasks including jury selection, we are very much still in a “wait and see” moment.

____________________
Other Posts on Artificial Intelligence:

____________________

Image credit: Shutterstock.com, used under license.