Our latest repost:

We begin our analysis by addressing the foundation of the entire body of data analytic scholarship on appellate judging: competing theories of judicial decision making.

The oldest theory by far is generally known in the literature as “formalism.”  This is the theory we all learned in law school, according to which every decision turns on four factors, each completely extrinsic to the background and ideology of the individual judge: (1) the case record on appeal; (2) the applicable law; (3) controlling precedent; and (4) judicial deliberations (at least in the appellate world).  As Judge Richard Posner of the Seventh Circuit has pointed out, Blackstone was describing the formalist theory when he described judges as “the depositories of the laws; the living oracles, who must decide in all cases of doubt, and who are bound by an oath to decide according to the law of the land.”  In Federalist Paper No. 78, Alexander Hamilton was expounding the same theory when he wrote that judges have “no direction either of the strength or of the wealth of the society; and can take no active resolution whatever.  [The judicial branch] may truly be said to have neither force nor will, but merely judgment.”

Much more recently, Chief Justice Roberts endorsed the formalist theory when, at his confirmation hearing, he compared a Supreme Court Justice to a baseball umpire – merely calling balls and strikes, never pitching or hitting.  For decades, politicians have promoted the formalist ideal by insisting that judges should merely interpret or discover the law rather than make it (such comments seem to be made most often in the context of complaints that one judge or another has fallen short of that ideal).

The adequacy of formalism as an explanation for how judicial decisions are made has been questioned for generations.  As I noted two posts ago, Charles Grove Haines showed in 1922 that magistrate judges in New York City appeared to be disposing of factually indistinguishable public intoxication cases in widely varying ways.  Many observers have pointed out that if formalism (which posits that there is one correct answer to every case, entirely extrinsic to the judges) best explained how appellate courts actually operate, then dissent should be exceedingly rare, if not unheard of.  In fact, dissent is quite rare at intermediate appellate courts, if you consider both unpublished and published decisions.  But at appellate courts of last resort, and in all appellate courts when you consider only the published decisions that shape the law, dissent typically runs anywhere from 20 to 45%.  Other observers have suggested that strict formalism, which assumes that individual judges’ judicial or political ideologies and personal backgrounds are entirely irrelevant, cannot explain the importance of diversity in the judiciary.

Still others have pointed out that even the politicians who like to endorse the ideal of formalism have never actually believed that it explains judicial decision making.  As Professors Lee Epstein and Jeffrey A. Segal point out in Advice and Consent, their book on the politics of judicial appointments, 92.5% of the 3,082 appointments to the lower federal courts made between 1869 and 2004 have gone to members of the President’s own party.  Surely that number would be far lower if the philosophy of an individual judge had no impact on judicial decision making.

Most of all, critics of formalism have argued that it is in fact possible to predict appellate decision making reasonably well over time based upon factors unrelated to the facts or governing law of any specific case.  For example, in a 2004 study, Theodore W. Ruger and his co-authors attempted to predict the result in every case at the U.S. Supreme Court during the 2002 term using a six-factor model: (1) the circuit of origin; (2) the issue involved; (3) the type of petitioner; (4) the type of respondent; (5) whether the lower court decision was liberal or conservative; and (6) whether the petitioner challenged the constitutionality of a law or practice.  They compared the model’s predictions to independent predictions by legal specialists.  The statistical (and decidedly non-formalist) model predicted 75% of the Court’s results correctly; the legal experts were correct 59.1% of the time.
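To make the flavor of that comparison concrete, here is a minimal sketch, in Python, of scoring a factor-based prediction rule against expert predictions and actual outcomes.  The case records, field names, and the toy decision rule are hypothetical placeholders, not the Ruger study’s actual model or data.

```python
# Minimal sketch of comparing a factor-based prediction rule against expert
# predictions. All records, field names, and the toy rule below are
# hypothetical placeholders, not the Ruger study's model or data.

cases = [
    {"lower_court": "liberal", "const_challenge": True,
     "actual": "reverse", "expert": "affirm"},
    {"lower_court": "conservative", "const_challenge": False,
     "actual": "affirm", "expert": "affirm"},
    {"lower_court": "liberal", "const_challenge": False,
     "actual": "reverse", "expert": "reverse"},
]

def model_predict(case):
    # Toy stand-in for a classification rule built from case-level factors
    # (e.g., the ideological direction of the decision below).
    return "reverse" if case["lower_court"] == "liberal" else "affirm"

model_hits = sum(model_predict(c) == c["actual"] for c in cases)
expert_hits = sum(c["expert"] == c["actual"] for c in cases)

print(f"Model accuracy:  {model_hits / len(cases):.1%}")
print(f"Expert accuracy: {expert_hits / len(cases):.1%}")
```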

Image courtesy of Flickr by Ken Lund (no changes).

Our short series of contextual reposts continues:

Although the state Supreme Courts have not attracted anything near the level of study from academics engaged in empirical legal studies that the U.S. Supreme Court and the Federal Circuits have, a number of researchers have attempted to compare how influential the various state courts have been in the development of American law. One of the first efforts was Rodney L. Mott’s “Judicial Influence” (30 Am. Pol. Sci. Rev. 295 (1936)). Using several different proxies for influence, including law professors’ rankings, reprinting of a court’s cases in casebooks, citations by other state Supreme Courts and citations by the U.S. Supreme Court, Mott concluded that the most influential state Supreme Courts between 1900 and 1936 were New York, Massachusetts, California and Illinois.

In 1981, Lawrence Friedman, Robert Kagan, Bliss Cartwright and Stanton Wheeler published “State Supreme Courts: A Century of Style and Citation” (33 Stan. L. Rev. 773 (1981)). Friedman and his colleagues assembled a database consisting of nearly 6,000 cases from sixteen state Supreme Courts spanning the years 1870-1970. Among other things, the authors counted the number of times each case had been cited by out-of-state courts as a rough proxy for the author court’s influence. As far back as the 1870-1880 period, California ranked third among all state Supreme Courts in the sample for out-of-state citations, behind only New York and Massachusetts. By the 1940-1970 period – not coincidentally, a period when the California Supreme Court was developing a national reputation for innovation with a string of landmark decisions under the leadership of Chief Justices Gibson, Traynor and Wright – California had moved into first place in out-of-state citations. Fully 92% of all California Supreme Court decisions in the sample were cited at least three times by out-of-state courts, and 26% were cited more than eight times.

Two years later, Professor Gregory Caldeira published “On the Reputation of State Supreme Courts” (5 Pol. Behav. 83 (1983)). Using a database limited to cases published in 1975, Professor Caldeira focused on citations by other state Supreme Courts to each state’s decisions as a proxy for influence. He concluded that the top-performing courts were California, New York and New Jersey. Professor Scott Comparato took a somewhat similar approach in 2002 with “On the Reputation of State Supreme Courts Revisited,” using a random sample of thirty cases from each state Supreme Court. Professor Comparato concluded that the Supreme Courts of California and New York were cited by out-of-state courts significantly more often than the Supreme Courts of any other state.

In 2007, Jake Dear and Edward W. Jessen published “‘Followed Rates’ and Leading State Cases, 1940-2005” (41 U.C. Davis L. Rev. 683 (2007)). Dear and Jessen attempted to determine which state Supreme Court’s decisions were most often “followed” by out-of-state courts, as that term is used by Shepard’s. They concluded that the California Supreme Court is the most often followed jurisdiction in the country by a significant margin, with 33% more decisions between 1940 and 2005 that were followed at least once by an out-of-state court than the second-highest finisher, Washington. California’s lead lengthened when the authors limited the data to cases followed three or more times, or five or more times, by out-of-state courts: California led Washington 160 to 72 in decisions followed three or more times, and 45 to 17 in decisions followed five or more times.

Two years after Dear and Jessen’s paper was published, Professors Eric A. Posner, Stephen J. Choi and G. Mitu Gulati published their effort to bring all the various measures together, “Judicial Evaluations and Information Forcing: Ranking State High Courts and Their Judges” (58 Duke L.J. 1313 (2009)). The authors compared the state Supreme Courts by three standards – productivity, opinion quality and independence – using a database consisting of all the state Supreme Courts’ decisions between 1998 and 2000. Proposing out-of-state citations as a proxy for “opinion quality,” the authors determined that California was the most often cited court by a wide margin, with 33.76 “citations per judge-year,” as compared to 22.40 for Delaware, the second-place finisher.
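For readers curious how a metric like “citations per judge-year” is built, here is a minimal sketch.  The counts are made-up placeholders, not any of the figures reported in the studies above.

```python
# Minimal sketch of a "citations per judge-year" comparison. The counts are
# made-up placeholders, not the figures reported in any of the studies above.

courts = {
    # court: (out_of_state_citations, judges_on_court, years_in_sample)
    "State A": (700, 7, 3),
    "State B": (330, 5, 3),
}

def per_judge_year(stats):
    citations, judges, years = stats
    return citations / (judges * years)

for court, stats in sorted(courts.items(), key=lambda kv: per_judge_year(kv[1]),
                           reverse=True):
    print(f"{court}: {per_judge_year(stats):.2f} citations per judge-year")
```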

Image courtesy of Flickr by Ken Lund (no changes).

I’m always surprised when I encounter litigators who dismiss litigation analytics as a passing fad.  In fact, as the reprinted post below shows, it’s a century-long academic enterprise that has produced many hundreds of studies – tens of thousands of pages of analysis – demonstrating the value of data analytics in understanding how appellate decisions are actually made.  Here’s the second in our reprint series, both here and at the California blog:

The application of data analytic techniques to the study of judicial decision making arguably begins with political scientist Charles Grove Haines’ 1922 article in the Illinois Law Review, General Observations on the Effects of Personal, Political, and Economic Influences in the Decisions of Judges (17 Ill. L. Rev. 96 (1922)). Reviewing the records of New York City magistrate courts, Haines noted that although 17,075 people had been charged with public intoxication in 1916 – 92% of whom had been convicted – the individual judges’ dispositions varied enormously: one judge discharged just one of 566 cases, another discharged 18% of his cases, and still another fully 54%. Haines argued from this data that results in the magistrate courts were reflecting to some degree the “temperament . . . personality . . . education, environment, and personal traits of the magistrates.”

Two decades later, another political scientist, C. Herman Pritchett, published The Roosevelt Court: A Study in Judicial Politics and Values, 1937-1947. Pritchett became interested in the work of the Supreme Court when he noticed that the Justices’ dissent rate had sharply increased in the late 1930s. Pritchett argued that the increase in the dissent rate necessarily weighed against the formalist view that “the law” was an objective reality which appellate judges merely found and declared. In The Roosevelt Court, Pritchett published a series of charts showing how often various combinations of Justices had voted together in different types of cases (the precursor of some of the analysis we’ll publish later this year in California Supreme Court Review).

Another landmark in the data analytic literature, the U.S. Supreme Court Database, traces its beginnings to the work of Professor Harold J. Spaeth about three decades ago. Professor Spaeth undertook to create a database which classified every vote by a Supreme Court Justice in every argued case for the past five decades. In the years that followed, Spaeth updated and expanded his database, and additional professors joined the groundbreaking effort. Today, thanks to the work of Professors Harold Spaeth, Jeffrey Segal, Lee Epstein and Sarah Benesh, the database contains 247 data points for every decision the U.S. Supreme Court has ever rendered – dating back to August 3, 1791.  The Supreme Court Database is a foundational tool utilized by nearly all empirical studies of U.S. Supreme Court decision making.
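As an illustration of how researchers typically work with the database, here is a minimal sketch using pandas.  It assumes the justice-centered CSV release and variable names such as justiceName, term, and majority as I recall them from the codebook; treat the file name and column names as assumptions to verify against the current release.

```python
# Minimal sketch of tallying votes from the Supreme Court Database
# (scdb.wustl.edu). The file name and column names ("justiceName", "term",
# "majority") are assumptions to check against the current codebook.

import pandas as pd

scdb = pd.read_csv("SCDB_justiceCentered_Citation.csv",
                   encoding="latin-1", low_memory=False)

# Share of votes each Justice cast with the majority in the 2000 term
# (in the codebook as I recall it, majority == 2 means the Justice was in
# the majority).
term = scdb[scdb["term"] == 2000]
in_majority = (term["majority"] == 2).groupby(term["justiceName"]).mean()

print(in_majority.sort_values(ascending=False))
```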

Not long after the beginnings of the Supreme Court Database, Professors Spaeth and Segal also wrote one of the landmarks of data-driven empirical research into appellate decision making: The Supreme Court and the Attitudinal Model, in which they proposed a model arguing that a judge’s personal characteristics – ideology, background, gender, and so on – and so-called “panel effects” – the impact of having judges of divergent backgrounds deciding cases together as a single, institutional decision maker – explained a great deal about appellate decision making.

The data analytic approach began to attract widespread notice in the appellate bar in 2013, with the publication of Judge Richard A. Posner and Professors Lee Epstein and William M. Landes’ The Behavior of Federal Judges: A Theoretical & Empirical Study of Rational Choice. Drawing upon arguments developed in Judge Posner’s 2008 book How Judges Think, Posner, Epstein and Landes applied various regression techniques to a theory of judicial decision making with its roots in microeconomic theory, discussing a wide variety of issues from the academic literature.

Today, there is an enormous academic literature studying the work of the U.S. Supreme Court and the Circuits from a data analytic perspective on a variety of issues, including case selection, opinion assignment, dissent aversion, panel effects, and the impact of ideology, race and gender. That literature has led to two excellent anthologies just in the last few years: The Oxford Handbook of U.S. Judicial Behavior, edited by Lee Epstein and Stefanie A. Lindquist, and The Routledge Handbook of Judicial Behavior, edited by Robert M. Howard and Kirk A. Randazzo.  The state Supreme Courts have attracted somewhat less study than the federal appellate courts, but that has begun to change in recent years, and similar anthologies for the state courts seem like only a matter of time.

Image courtesy of Flickr by Jamison Wieser (no changes).

The Illinois Supreme Court Review recently marked its sixth anniversary.  In April, the California Supreme Court Review turns five.

So I thought it was time for a first: cross-posted reprints from the earliest days of the blogs.  My early attempts to provide context for the work and to answer the question I often heard in those days: “Interesting, but what difference does it make?”

So for the next 2-3 weeks, we’ll be reprinting those context posts – with minimal revisions – both here and at the sister California blog.  For readers who follow both blogs, be warned – the two posts reprinted each week will be largely identical (and don’t worry – it’ll be easy to tell when we resume our regularly scheduled programming . . .).  So here we go:

One of the primary reasons appellate lawyering is a specialty is that appellate lawyers must persuade a collective, institutional decision maker. An appellate panel isn’t like a jury. The members of a jury come together for the first time for a particular case, and part forever when it’s over. Members of an appellate panel have generally been on the Court for months if not years, and will be there for years after a particular case is over. Members of a jury don’t share anything akin to the “law of the Circuit” or the “law of this Court” as a collective enterprise built over a span of years. And although historically there has been considerable pressure on jurors to reach unanimity – less so in recent years on the civil side – they are almost always trying to reach a binary decision: yes/no, one side wins, one side loses. An appellate panel, on the other hand, is attempting to reach unanimity on a collective, reasoned, written opinion. Decision making by appellate panels rather than by individual judges has all kinds of potential effects on the outcome, and therefore on appellate lawyers’ task of persuasion – from making judges more reluctant to dissent from a decision they disagree with, to causing judges to vote in a more (or less) liberal or conservative direction than they otherwise would because of the panel’s deliberations.

Over the past few generations, political scientists, law professors, economists and statisticians have developed a host of tools for better understanding the dynamics of group decision making. These include game theory, organization theory, behavioral microeconomics, opinion mining and data analytics. Some researchers have used game theory to develop important insights about everything from the inner workings of the U.S. Supreme Court[1] to why the Federal Circuits follow Supreme Court precedent.[2] Others have used traditional labor theory in an attempt to develop a unified theory of judicial behavior.[3] With the rise of widely available, massive computerized databases of appellate case law, the fastest-growing and most widely varied area of research has applied sophisticated statistical and “big data” techniques to understanding the law.

Data analytics is revolutionizing litigation. Several companies now offer such services at the trial level. Lex Machina (acquired in 2015 by LexisNexis), Ravel Law (acquired two years later, also by LexisNexis) and Premonition each offer detailed analytics about trial judges, courts and case types based on databases of millions of pages of case information. ALM has also expanded its judicial profile services to increase their focus on judge analytics.

In 2015, I started this blog to bring rigorous, law-review style empirical research founded on data analytic techniques to the study of appellate decision making. A year later, I expanded the project to the California Supreme Court Review. Both blogs are based on massive databases consisting of 125-150 data points (depending on the year) drawn from every case, civil and criminal, decided by the Illinois and California Supreme Courts, respectively.

Why?  Simple.  Litigators, whether they’re usually in the appellate or the trial courts, frequently find themselves predicting the future.  This jurisdiction or this judge tends to be pro-plaintiff or pro-defendant.  Juries in this county tend to return excessive verdicts, or they don’t.  Trial or appellate litigation in this jurisdiction takes . . . this long.  What does it mean that the state Supreme Court just granted review?  Or what does it mean that the Supreme Court asked me way more questions at oral argument than it asked my opponent?

Every one of these questions has a data-driven answer.  Not just in Illinois and California, but in every jurisdiction in the country.  Sometimes the data confirms the traditional wisdom – and sometimes it proves that the traditional wisdom is dead wrong.

Want a more high-flown answer?  Try this one from Posner, Epstein and Landes’ The Behavior of Federal Judges:

The better that judges are understood, the more effective lawyers will be both in litigating cases and, as important, in predicting the outcome of cases, thus enabling litigation to be avoided or cases settled at an early stage.

So that’s what we do here.  For everyone who’s been with us for most or all of the six years since we started, thank you.  And for first-time visitors: we hope you’ll join us.

Image courtesy of Flickr by Jim Bowen (no changes).

————————————————————————-

[1] James R. Rogers, Roy B. Flemming, and Jon R. Bond, Institutional Games and the U.S. Supreme Court (2006).

[2] Jonathan P. Kastellec, “Panel Composition and Judicial Compliance on the U.S. Courts of Appeals,” The Journal of Law, Economics & Organization, 23(2): 421-41.

[3] Judge Richard A. Posner and Professors Lee Epstein and William M. Landes, The Behavior of Federal Judges: A Theoretical & Empirical Study of Rational Choice (2013).

Yesterday, we showed that Justice Garman has voted with the minority in 6.84% of her civil cases since joining the Court, slightly below Chief Justice Burke’s percentage.  Justice Theis’ percentage is almost identical: she has voted with the minority in 6.83% of her civil cases since joining the Court in 2010.  There are no strong time trends in her data.  She was above baseline in 2012 (7.69%) and 2014 (11.11%), but below in 2013 (6.06%).  She was below in 2016 (3.57%) and 2018 (4.55%), but above it in 2017 (11.54%) and 2019 (11.76%).  Justice Theis was well below her career percentage once again in 2020, voting with the minority in only 3.13% of her civil cases.

Join us back here next week as we examine the data for two more members of the Court.

Image courtesy of Flickr by artistmac (no changes).


Last time, we looked at how often Chief Justice Anne Burke voted with the minority in civil cases – a proxy for how closely in sync with the philosophy of the other Justices she has been throughout her career.  Today, we’re addressing the same number for Justice Garman.

The Chief Justice has been in the minority in 6.91% of civil cases since joining the Court in 2006.  Justice Garman’s overall percentage is nearly identical – 6.84%.  Looking at time trends, she was below that baseline from 2004 through 2011 except for 2006 (11.11%).  Between 2014 and 2017, she was above baseline in three of four years.  After a one-year dip, she has again been above baseline in 2019 (8.82%) and 2020 (9.38%).

Join us back here tomorrow as we address the minority percentage for Justice Theis.

Image courtesy of Flickr by William Murphy (no changes).


With this post, we’re addressing a new question in our ongoing review of the Justices’ voting records: how often each Justice is in the minority.  The question serves as an indication of how closely in sync with the majority of the Court an individual Justice is philosophically, and during a Justice’s term as Chief Justice, it offers some indication of how much influence the Justice has over her or his colleagues.

Since joining the Court in 2006, Chief Justice Anne Burke has voted in 463 civil cases.  She has been in the minority in only 32 of those cases – 6.91% of the total.  Before becoming Chief Justice, Chief Justice Burke had voted in the minority in 7.11% of civil cases.  Since taking the center seat, she has been in the minority in only 2 of 41 civil cases – 4.87% of the total.  This is reflected in the year-by-year data below.  She voted with the minority in 15.38% of civil cases during 2017, but only 4.55% in 2018, 0% in 2019 and 6.25% in 2020.
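For readers who want to see how the minority-rate figures in this series are computed, here is a minimal sketch.  The vote records are hypothetical placeholders, not drawn from the blog’s actual database.

```python
# Minimal sketch of the minority-rate calculation used in this series.
# The vote records below are hypothetical placeholders, not the blog's data.

votes = [
    # (year, justice_was_in_majority)
    (2017, False), (2017, True), (2018, True),
    (2019, True), (2020, True), (2020, False),
]

def minority_rate(records):
    """Share of civil cases in which the Justice voted with the minority."""
    return sum(not in_majority for _, in_majority in records) / len(records)

print(f"Career: {minority_rate(votes):.2%}")

# Year-by-year breakdown, mirroring the annual percentages reported above.
for year in sorted({y for y, _ in votes}):
    year_votes = [v for v in votes if v[0] == year]
    print(f"{year}: {minority_rate(year_votes):.2%}")
```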

Join us back here next week as we continue our review of the Justices’ minority percentages in civil cases.

Image courtesy of Flickr by Kate Brady (no changes).

This time, we’re beginning our review of the voting record of Justice Michael Burke, who took his seat on March 1, 2020, replacing the retired Justice Robert R. Thomas.  Previously, Justice Burke had served for twelve years as a Justice of the Second District Appellate Court.

During 2020, Justice Michael Burke voted in 19 civil cases.  He voted to affirm in 10 of those cases – 52.63%.  He voted to reverse in 7 cases, or 36.84%.  He cast one split vote to affirm in part and reverse in part and cast one vote to deny.

Join us back here next time when we discuss how often Chief Justice Anne Burke is in the minority in civil cases.

Image courtesy of Flickr by paul_p! (no changes).

Today, we’re examining the voting record of one of the newer members of the Supreme Court, Associate Justice P. Scott Neville, Jr.  Justice Neville took his seat on June 15, 2018, succeeding Justice Charles Freeman.  Prior to joining the Court, Justice Neville sat on the First District Appellate Court from 2004 to 2018. During his tenure, he served as Presiding Justice of the Second, Third and Fourth Divisions.

Since joining the Court and through the end of 2020, Justice Neville has voted in 68 civil cases.  His votes to affirm and reverse are nearly evenly split: he has voted to affirm in 27 cases (39.71%) and to reverse in 29 cases (42.65%).  He has cast split votes in 9 cases, and his remaining three votes are one each to deny, to vacate and “other.”

Join us back here next week as we continue our review of the individual Justices’ voting records.

Image courtesy of Flickr by Gary Todd (no changes).


Today, we’re beginning our examination of the voting record of Chief Justice Anne M. Burke.  Chief Justice Burke took her seat on July 6, 2006.  Through the end of 2020, she had voted in 463 civil cases.

It’s reasonable to suppose that the distribution of a Justice’s votes between affirmance and reversal might tell us something about what a vote to hear a particular case from that Justice might mean.  Does she see the Court’s function as reining in one or more Appellate Courts?  Does a vote from that Justice to allow a petition for leave to appeal suggest that she is likely to vote to reverse?

Justice Garman’s votes were almost perfectly split: 40.13% to affirm, 40% to reverse.  Justice Theis has been a bit more inclined to reverse: 37.58% to affirm, 42.55% to reverse.

The Chief Justice has been significantly more inclined to reverse than Justice Theis.  She has cast 163 votes to affirm in civil cases – 35.21% of her total – and 204 votes to reverse, or 44.06%.  There are no particular time trends in her voting patterns.  She has cast 70 split votes in civil cases – affirm in part and reverse/modify/vacate in part.  She has cast 11 votes to vacate, 9 to deny, 5 “other” and 1 to grant.
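The affirmance and reversal percentages reported in these posts are simple tallies of each Justice’s vote types.  Here is a minimal sketch of that calculation, with hypothetical placeholder votes rather than the blog’s data.

```python
# Minimal sketch of tallying a Justice's vote types in civil cases.
# The list of votes is a hypothetical placeholder, not the blog's data.

from collections import Counter

votes = ["affirm", "reverse", "affirm", "split", "reverse",
         "affirm", "vacate", "deny", "reverse", "other"]

counts = Counter(votes)
total = sum(counts.values())

for vote_type, n in counts.most_common():
    print(f"{vote_type}: {n} votes ({n / total:.2%})")
```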

Join us back here tomorrow as we review the voting record of one of the newer Justices, P. Scott Neville.

Image courtesy of Flickr by Dan (no changes).