Today, we begin a new subject in our ongoing analytics study of the Court’s decision making – oral arguments.  Although the academic community has been producing analytics studies of appellate decision making for a century, the analytics study of oral arguments is a much more recent development.

The earliest study appears to be Sarah Levien Shullman’s 2004 article for the Journal of Appellate Practice and Process.  Shullman analyzed oral arguments in ten cases at the United States Supreme Court, noting each question asked by the Justices and assigning each question a score from one to five based on how helpful or hostile she considered it to be. Once seven of the ten cases had been decided, she divided her observations according to whether the Justice ultimately voted for or against the party. Based on her data, she made predictions as to the ultimate result in the three remaining cases. Shullman concluded that it was possible to predict the result in most cases by a simple measure – the party asked the most questions generally lost.

John Roberts addressed the issue of oral argument the year after Shullman’s study appeared. Then-Judge Roberts (at the time, two years into his tenure on the D.C. Circuit) noted the number of questions asked in the first and last cases of each of the seven argument sessions in the Supreme Court’s 1980 Term and the first and last cases in each of the seven argument sessions in the 2003 Term. Like Shullman, Roberts found that the losing side was almost always asked more questions. So apparently “the secret to successful advocacy is simply to get the Court to ask your opponent more questions,” Judge Roberts wrote.

Professor Lawrence S. Wrightsman, a leading scholar in the field of psychology and the law, took an empirical look at U.S. Supreme Court oral arguments in a 2008 book. Professor Wrightsman chose twenty-four cases from the Supreme Court’s 2004 Term, dividing the group according to whether they involved what he called ideological or non-ideological issues. He then analyzed the number and tone of the Justices’ questions to each side, classifying questions as either sympathetic or hostile. Professor Wrightsman concluded that simple question counts were not a highly accurate predictor of ultimate case results unless the analyst also took into account the tone and content of the questions.

Timothy Johnson and three other professors published their analysis in 2009. Johnson and his colleagues examined transcripts from every Supreme Court case decided between 1979 and 1995 – more than 2,000 hours of argument in all, and nearly 340,000 questions from the Justices. The researchers isolated data on the number of questions asked by each Justice in each argument, along with the average number of words used in each question. The study concluded that, after controlling for other factors that might explain case outcomes, the party asked more questions generally lost the case.

Professors Lee Epstein and William M. Landes and Judge Richard A. Posner published their study in 2010. Epstein, Landes and Posner used Professor Johnson’s database, tracking the number of questions and average words used by each Justice. Like Professor Johnson and his colleagues, they concluded that the more questions a Justice asks, all else being equal, the more likely that Justice is to vote against the party, and the greater the difference between the total questions asked of each side, the more likely a lopsided result.

In Table 1665 below, we show the year-by-year court-wide total number of questions for appellants versus appellees.  In civil cases from 2008 through the end of 2020, the Supreme Court asked 11,046 questions – 6,005 to appellants and 5,041 to appellees.  That might suggest that appellants always get more questions, but in fact appellees have gotten more in five of the past thirteen years.  Furthermore, these totals mix results across cases appellants won, cases they lost and mixed results.
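The counting exercise behind these totals can be sketched in a few lines of Python. The case records below are invented for illustration – the field names and numbers are our own, not drawn from the actual dataset – but the tally and the simple “more questions means you lose” heuristic from the studies reviewed above work the same way at any scale.

```python
# Tally oral-argument questions by side, then apply the simple heuristic
# from the early studies: the side asked more questions is predicted to lose.
# All case data here is hypothetical, for illustration only.
cases = [
    {"q_appellant": 42, "q_appellee": 31, "winner": "appellee"},
    {"q_appellant": 27, "q_appellee": 44, "winner": "appellant"},
    {"q_appellant": 38, "q_appellee": 35, "winner": "appellee"},
]

# Court-wide totals for each side.
total_appellant = sum(c["q_appellant"] for c in cases)
total_appellee = sum(c["q_appellee"] for c in cases)
print(f"Questions to appellants: {total_appellant}")
print(f"Questions to appellees:  {total_appellee}")

# Case-by-case prediction: the more-questioned side is the predicted loser.
for c in cases:
    predicted_loser = (
        "appellant" if c["q_appellant"] > c["q_appellee"] else "appellee"
    )
    actual_loser = "appellee" if c["winner"] == "appellant" else "appellant"
    print(f"predicted loser: {predicted_loser}, actual loser: {actual_loser}")
```

As the studies above note, this raw count ignores the tone and content of the questions, which is exactly the refinement Professor Wrightsman found necessary.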

Now we divide the civil case data into affirmances and reversals (for these purposes, we’re counting mixed results – affirmed in part and reversed or modified in part – as reversals).  In Table 1666, we report the year-by-year data for affirmances.  The result is in line with the research reviewed above: in every year since 2008, appellants who end up losing have averaged more questions in civil cases than appellees.

Now, let’s look at reversals.  Here, the data is a bit more equivocal – although there are many cases each year in which appellees who will ultimately lose get more questions, losing appellees have outdistanced appellants in only seven of the past thirteen years.  Curiously, four of those seven years were 2017, 2018, 2019 and 2020 (so far).
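The affirmance/reversal split described above is a simple group-and-average operation. Here is a minimal Python sketch of it, again with invented numbers (nothing below comes from the actual Table 1666 or reversal data), including the convention of counting mixed results as reversals:

```python
# Split hypothetical civil cases by outcome and compare average question
# counts for each side.  All numbers are invented for illustration.
cases = [
    {"outcome": "affirmed", "q_appellant": 40, "q_appellee": 30},
    {"outcome": "affirmed", "q_appellant": 36, "q_appellee": 28},
    {"outcome": "reversed", "q_appellant": 25, "q_appellee": 41},
    {"outcome": "mixed",    "q_appellant": 29, "q_appellee": 33},
]

def avg(values):
    return sum(values) / len(values)

# Per the convention in the text, mixed results count as reversals.
affirmances = [c for c in cases if c["outcome"] == "affirmed"]
reversals = [c for c in cases if c["outcome"] in ("reversed", "mixed")]

for label, group in (("affirmances", affirmances), ("reversals", reversals)):
    app_avg = avg([c["q_appellant"] for c in group])
    apl_avg = avg([c["q_appellee"] for c in group])
    print(f"{label}: appellants averaged {app_avg:.1f} questions, "
          f"appellees {apl_avg:.1f}")
```

In this toy data, the eventual loser (the appellant in affirmances, the appellee in reversals) averages more questions in both groups – the pattern the real data shows consistently only for affirmances.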

Join us back here next time as we turn our attention to the data for criminal cases.

Image courtesy of Flickr by brokinhrt2 (no changes).