This time, we’re looking at the second part of our analysis for the years 2000 through 2009 – excluding unanimous affirmances, how many votes to affirm the Appellate Court did each District and Division average?

In 2000, decisions from the Second District fared best, averaging 2.5 votes to affirm.  The Fourth District and the Second and Fourth Divisions of the First averaged 2 votes.  In 2001, Division Three of the First District averaged 4.5 votes to affirm.  In 2002, Division Five of the First District averaged 3.5 votes to affirm, while Divisions Two, Three, Four and Six of the First and the Second District averaged zero.  In 2003, Division Five of the First District averaged five votes to affirm, while Divisions Two, Four and Six of the First averaged zero.  In 2004, Division Five of the First District averaged 2.67 votes to affirm, while Divisions Three and Six averaged zero.

In 2005, the Fourth District averaged six votes to affirm.  Division One of the First and the Third District were next at three votes, and Divisions Two, Three, Four, Five and Six of the First all averaged zero votes to affirm.  In 2006, Division Six of the First District averaged six votes to affirm, and Division One of the First and the Third District averaged four votes to affirm.  In 2007, cases from the Fourth District averaged 2.6 votes to affirm and Division Four of the First District averaged 2.5, while Divisions Three and Five of the First and the Second, Third and Fifth Districts averaged zero votes.  In 2008, Division One of the First averaged six votes while Division Five averaged five.  In 2009, only three parts of the Appellate Court – Division Five of the First District (2.5 votes), the Fourth District (2.25) and the Fifth District (0.14) – averaged more than zero votes to affirm.

Join us back here next time as we address the years 2010 through 2020.

Image courtesy of Flickr by Michel Curi (no changes).

Today we’re continuing our two-part analysis of each District of the Appellate Court’s performance at the Supreme Court since 1990.  Question 1, which we address below, is how likely is a civil decision from each part of the Appellate Court to be affirmed unanimously?  Question 2, which we’ll take up in the next post, is among the remaining civil cases – all cases not affirmed unanimously – what was the average number of votes to affirm the Appellate Court?  This week we’re looking at the years 2000 through 2009.

In 2000, all the civil cases decided by the Court from the Third District were unanimously affirmed.  Two-thirds of the decisions from the Fourth District and from Division Four of the First District were unanimously affirmed as well.  Meanwhile, none of the decisions from Divisions One, Three or Six of the First or the Second District were affirmed unanimously.  In 2001, all decisions from Division Six of the First District were affirmed unanimously.  Three-quarters of the decisions from the Third District were.  None of the decisions from Division Two of the First District were.  In 2002, the best performer was Division Three of the First, with two-thirds of its decisions unanimously affirmed.  Divisions Two, Four and Five of the First had zero unanimous affirmances.  In 2003, the best performers were Divisions Two and Five of the First District and the Second District, each of which had half their decisions unanimously affirmed.  In 2004, all the decisions from Division Two of the First were unanimously affirmed – none of the decisions from Division One of the First were.

In 2005, three-quarters of civil decisions from Division One of the First District were unanimously affirmed.  Two-thirds of the decisions from the Fourth District were too, but only 12.5% of the decisions from the Fifth District were unanimously affirmed.  In 2006, half the decisions from Divisions Three and Six of the First District and the Third District were unanimously affirmed.  In 2007, all the decisions from Division One of the First District were unanimously affirmed, but none of the decisions from Division Five of the First District were.  In 2008, three-quarters of the decisions from Division One of the First District were affirmed unanimously, but none of the decisions from Divisions Two and Three were.  In 2009, half the decisions from Division Six of the First District were affirmed unanimously, but none of the decisions from Divisions Two, Three, Four or Five were.

In the next post, we’ll address the data for the same years on average votes to affirm.

Image courtesy of Flickr by COD Newsroom (no changes).

Last time, we reviewed the percentage of the time the Supreme Court affirmed civil decisions from each District unanimously.  This time we’re looking at average votes to affirm in non-unanimous decisions.

In 1990, Division Four of the First District averaged three votes to affirm.  In 1991, Division Five averaged three votes.  In 1992, Division Four averaged 2.75 votes to affirm.  In 1993, the Fourth District averaged 2.5 votes to affirm, and Division Three of the First averaged 2.33.  In 1994, Division Five of the First averaged 4.5 votes.  The Fourth District averaged 2.857 votes to affirm.  Division One of the First District averaged 2.8 votes.

In 1995, Division Two of the First District averaged three votes to affirm.  In 1996, Division Four of the First District averaged 2.5 votes to affirm.  The following year, Division Four of the First District averaged five votes to affirm.  In 1998, the Second District averaged 2.89 votes.  In 1999, Division One of the First District averaged four votes to affirm, and the Second and Third Districts averaged three.

Join us back here later this week as we continue our review.

Image courtesy of Flickr by Ron Frazier (no changes).

This time, we’re beginning a multi-week inquiry: (1) what percentage of the time does the Supreme Court affirm each District and Division of the Appellate Court unanimously; and (2) among non-unanimous decisions, what’s the average votes to affirm for each court?
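The two measures described above reduce to a simple pair of computations.  Here is a minimal sketch in Python – the court labels and vote counts below are illustrative placeholders, not our actual dataset – assuming each case record carries the number of votes (out of a seven-member court) to affirm:

```python
# Each case: (district_or_division, votes_to_affirm) on a 7-member court.
# The vote counts below are illustrative placeholders, not the blog's data.
cases = [
    ("1st Dist., Div. 1", 7), ("1st Dist., Div. 1", 4),
    ("2nd Dist.", 7), ("2nd Dist.", 2), ("2nd Dist.", 0),
]

def unanimous_affirmance_rate(cases, court):
    """Question 1: share of a court's cases affirmed 7-0."""
    votes = [v for c, v in cases if c == court]
    return sum(1 for v in votes if v == 7) / len(votes)

def avg_votes_nonunanimous(cases, court):
    """Question 2: average votes to affirm, excluding unanimous (7-0) affirmances."""
    votes = [v for c, v in cases if c == court and v < 7]
    return sum(votes) / len(votes) if votes else 0.0

print(unanimous_affirmance_rate(cases, "2nd Dist."))  # 1 unanimous affirmance of 3 cases
print(avg_votes_nonunanimous(cases, "2nd Dist."))     # (2 + 0) / 2 non-unanimous cases
```

Note that the second measure deliberately drops 7-0 affirmances, so a court with many unanimous wins can still show a low average on the remaining cases.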

In Table 1729, we report the percentage of unanimous affirmances for each District and Division between 1990 and 1994.  In 1990, two-thirds of the decisions from Division One of the First District were affirmed unanimously, as were two-thirds of the decisions from the Second District.  In 1991, all of the civil decisions from Division Six of the First District were affirmed unanimously, and half of the decisions from Division Three of the First District were.  In 1992, half of the decisions from the First and Second Divisions of the First District were affirmed unanimously.  In 1993, all of the decisions from Division Two of the First were, while half of the decisions from the Third District were.  In 1994, sixty percent of the decisions from Division Four of the First District were affirmed unanimously.

Eighty percent of the civil decisions from the Third District in 1995 were affirmed unanimously.  Sixty percent of the civil decisions from the Second District in 1996 were affirmed.  In 1997, all of the decisions from Division One of the First District were affirmed unanimously.  In 1998, half of the civil decisions from the Fourth District were affirmed unanimously.  In 1999, two-thirds of the decisions from Division Six of the First District were affirmed unanimously.

Join us back here next time as we review the data for average-votes-to-affirm.

Image courtesy of Flickr by Ron Frazier (no changes).

This time, we’re looking at the percentage of civil cases in which each of the Justices who served during the years 2010-2020 voted with the majority.  Although the Court’s unanimity rate has been fairly steady with the exception of a brief two-year dip during this period, this will help illuminate which Justices drove any disagreement on the Court.

In the Table below, we report year-by-year majority voting rates for Chief Justice Anne Burke and Justices Garman, Freeman, Neville, Kilbride and Carter.  The lowest majority voting rates in 2010 belonged to Chief Justice Burke and Justice Freeman, but both were quite high – each voted with the majority 87.5% of the time.  The numbers were similar in 2011 – Justices Freeman and Kilbride were at 89.19% and 85.71%, respectively.

In 2012, the unanimity rate took a sudden dip.  The chart suggests that Justice Kilbride was the main driver of the dip, given that his majority voting rate dropped from 85.71% in 2011 to 74.36% in 2012.  The following year, Justice Kilbride was at 75.76%, but Chief Justice Burke had dropped too – to 81.25%.  By 2014, the Chief Justice and Justices Freeman and Kilbride were all back into the upper eighties, and the overall unanimity rate corrected as a result.  In 2015, both the Chief Justice and Justice Garman were in the nineties, Justice Freeman was in the high eighties and Justice Kilbride was at 84.09%.  By 2017, the Chief Justice and Justices Freeman and Kilbride were all in the nineties and Justice Garman voted with the majority 100% of the time.  In 2018, Justice Kilbride was in the high eighties, Chief Justice Burke and Justice Garman were both at 95.45% and Justice Freeman was at 100%.  Over the past two years, each of these Justices has voted with the majority in at least nine of every ten civil cases.

We report five more Justices – Justices Thomas, Michael Burke, Karmeier, Fitzgerald and Theis – in the final Table.  Each of these Justices was nearly always in the majority throughout the period.  In 2010, three of the four then serving were at a majority voting rate of 100%.  In 2011, Justices Thomas, Karmeier and Theis were all above 97%.  The following year, Justices Thomas and Karmeier were a bit over 89%, and Justice Theis was at 92.31%.  In 2013, Justice Karmeier dropped down to 88 and change, while both Justice Thomas and Justice Theis were over 90.  (These numbers for 2012 and 2013 confirm our earlier data about what helped drive down the unanimity rate.)  In 2014, Justices Thomas and Karmeier were over 90% and Justice Theis was slightly under.  From 2015 through 2017, all three Justices were over 90% majority voting.  In 2018, Justices Thomas and Theis were over 95% and Justice Karmeier fell to 86.36%.  In 2019, Justices Thomas and Karmeier were both in the nineties and it was Justice Theis who dropped slightly to 88.24%.  Last year, Justice Karmeier was in the majority 96.67% of the time, Justice Theis 96.88% and Justice Michael Burke 94.74%.  Prior to his retirement, Justice Thomas voted with the majority in 100% of civil cases.

Join us back here next time when we’ll begin our review of the criminal case data.

Image courtesy of Flickr by Mike Steele (no changes).

In this post and the next, we’re concluding our review of the Court’s unanimity rate in civil cases set against changes in the Court’s party alignment.

Despite changes in four of the Court’s seven seats during these eleven years, the party alignment of the Court remained the same throughout – four Democrats, three Republicans.  As we see in the Table, although the number bounced around just a bit, it remained fairly steady throughout.  In 2010, 72.73% of civil cases were unanimous decisions.  In 2011, that figure was 76.32%.  We then saw a two-year drop, to 55% in 2012 and 58.82% in 2013, before it returned to its trend level and stayed there: 77.78% in 2014, 79.55% in 2015, 75% in 2016, 80.77% in 2017, 63.64% in 2018, 70.59% in 2019 and 78.13% in 2020.

In our next post, we’ll take a look at the Justices who served on the Court during these years and look at their rate of voting with the majority.  This should show which Justices were the drivers of the dip in unanimity in 2012 and 2013.

Image courtesy of Flickr by Olivier Bruchez (no changes).

This week, we’re tracking the Supreme Court’s unanimity rate in civil cases, matched against the evolving party alignment of the Justices.  Last time, we reviewed the data for the 1990s.  Today, we’re reviewing the data for the years 2001 through 2010.

With Democrat Thomas Kilbride having replaced James Heiple in the final days of 2000, the Court was at five Democrats and two Republicans from 2001 through the end of 2004.  For these four years, the overall unanimity rate was 71.14% – ten points higher than it was in the nineties.  The unanimity rate in 2001 was 74.51%.  It fell to 66% in 2002, rose to 69.57% in 2003 and rose back to 74.07% in 2004.

The party alignment fell back to four Democrats and three Republicans in the closing days of 2004 as Justice Lloyd Karmeier replaced Justice Philip Rarick.  For the five years that followed, the overall unanimity rate increased three points to 74.66%.  The unanimity rate in 2005 was 81.25%.  It fell precipitously in 2006 – all the way to 59.18%.  But the rate was back up for the following three years: 80.49% in 2007, 71.43% in 2008 and 82.93% in 2009.

Join us back here next week as we continue our review of the Court’s unanimity rates.

Image courtesy of Flickr by David Brossard (no changes).

This week we’re looking at an issue related to our discussion of panel effects: has the unanimity rate on the Court been impacted by shifts in the party alignment of the Justices?

For the entire decade of the 1990s, the Court consisted of four Democratic Justices and three Republicans.  Although none of the changes affected the party alignment, it was nevertheless a period of shifting membership on the Court.  Charles Freeman replaced Daniel Ward, James Heiple replaced Howard Ryan and Michael Bilandic replaced John Stamos in 1990.  Joseph Cunningham replaced Horace Calvo in 1991.  Mary Ann McMorrow replaced William Clark, John Nickels replaced Thomas Moran and Moses Harrison replaced Joseph Cunningham in 1992.  Louis Rathje replaced John Nickels in 1999, and Robert Thomas replaced Rathje the following year.  The Court finally saw a shift in party composition in the 2000 election, when Thomas Kilbride replaced James Heiple, but Justice Kilbride did not participate in his first civil case until 2001.  In that same election, Justice Thomas Fitzgerald was elected to replace Justice Michael Bilandic.

Notwithstanding the static party composition of the Court, the Court’s unanimity rate steadily trended down throughout the decade.  The rate for 1990 was 78.65%.  By 1995, it had fallen to 62.5%.  The unanimity rate then dropped below 50% for the years 1997 through 1999 before ticking back up slightly in 2000, to 57.89%.  For the entire period, the Court’s unanimity rate was 61.55%.

Join us back here next time as we review the data for the next decade.

Image courtesy of Flickr by Brian Crawford (no changes).

With this post, we begin our short review of some of the academic literature regarding panel effects.

Of course, the first question one encounters when taking a case before an appellate court is how one’s panel will be chosen.  A majority of appellate courts state, either in their operating procedures or their rules, that appellate panels for argued cases are assigned randomly.  This assumption has been built into most of the analytics work done on panel effects for at least a generation.

But is it really true?  I have spoken with lawyers in several federal circuits who are dubious about just how random assignments are.

Let’s begin by making several things thoroughly clear: no one is suggesting that panels are intentionally chosen to manipulate results in particular cases (although disappointed litigants have made that suggestion once in a while).  Further, reasonable people could dispute just how beneficial random assignments are.  There are any number of good reasons to deviate from strictly random assignments in the federal circuits: maximizing the availability of senior status judges; respecting the vacation plans or speaking or writing commitments of judges; ensuring that a particular judge doesn’t draw several panel assignments in a short period, or go lengthy periods without an assignment; ensuring that particular judges sit with a variety of their colleagues, rather than sitting repeatedly with one or two other judges; accounting for recusals; or sending a case which was previously ruled upon by a particular panel back to the same judges following remand.  The list goes on.

But even if completely random assignments aren’t necessarily a reasonable goal, the question remains: how are appellate panels chosen?

In 2015, Professors Marin K. Levy and Adam S. Chilton published Challenging the Randomness of Panel Assignment in the Federal Courts of Appeals, 101 Cornell L. Rev. 1 (2015).  The authors gathered panel information for all twelve regional circuits between September 2008 and August 2013.  Collectively, the dataset covered the activities of 775 judges and over 10,000 panels.  The professors then wrote a program to simulate the choice of over one billion entirely random panels.  They then decided to compare their dataset of randomly simulated panels to the “real world” data by counting the incidence of an objective characteristic in both datasets – how many panels had appointees of Republican Presidents on them.  Accordingly, they developed detailed data on how common panels of zero, one, two and three Republican nominees were, and then calculated whether the actual docket results fell reasonably close to that random distribution.  The professors reported that their statistical tests showed evidence that panel assignments deviated from a strictly random result in four circuits: the D.C. Circuit, the Second Circuit, the Eighth Circuit and the Ninth Circuit.  They then tested their results for robustness and calculated that the probability of all their results being solely due to chance was less than 3%.  The data from the remaining eight Circuits fell reasonably close to the fully random distribution – although given the reasons discussed above why complete randomness may not be realistic or beneficial, there is reason to wonder just how robust that result is.

Two years later, Professor Levy published a follow-up article, Panel Assignment in the Federal Courts of Appeals, 103 Cornell L. Rev. 65 (2017).  There, she discussed her interviews about panel assignment practices with thirty-five judges and senior administrators.  She reported that no two courts approached panel assignment in the same way and argued that it was far from clear that the benefits of random assignments outweighed the drawbacks.
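The core of the Levy and Chilton method – simulate random panels, tabulate how many Republican appointees each draws, and test whether the observed tallies fit that distribution – can be sketched in a few lines of Python.  Everything here is made up for illustration: the judge pool, the observed panel tallies and the simulation size are hypothetical, and this is a simplified stand-in for the professors’ actual statistical tests, not their code:

```python
import random
from collections import Counter

# Hypothetical circuit: a pool of judges labeled by appointing party.
# These labels are illustrative, not the study's actual data.
judges = ["R"] * 7 + ["D"] * 5

def simulate(n_panels, rng):
    """Draw n_panels random 3-judge panels; tally Republican appointees per panel."""
    counts = Counter()
    for _ in range(n_panels):
        panel = rng.sample(judges, 3)       # sampling without replacement
        counts[panel.count("R")] += 1
    return counts

rng = random.Random(0)
expected = simulate(200_000, rng)            # large simulation ~ the random baseline
observed = {0: 40, 1: 230, 2: 420, 3: 210}   # made-up "real world" panel tallies

# Chi-squared statistic: sum of (observed - expected)^2 / expected,
# with the simulated counts rescaled to the observed sample size.
n_obs = sum(observed.values())
n_sim = sum(expected.values())
chi2 = sum(
    (observed[k] - expected[k] * n_obs / n_sim) ** 2 / (expected[k] * n_obs / n_sim)
    for k in range(4)
)
print(round(chi2, 2))  # large values suggest assignment deviates from random
```

In a real analysis the statistic would be compared against the chi-squared distribution (or the simulated panels themselves) to get a p-value; the point of the sketch is only the shape of the comparison the professors describe.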

Next time we’ll continue our discussion of the literature on panel effects over at the California Supreme Court Review.

Image courtesy of Pixabay by Piro4D (no changes).

This week, we’re taking a short break from our usual number-heavy analysis for another glance at some of the vast academic literature on the analytic-driven analysis of appellate decision making.  This is a four-part post – two here and two over at the California Supreme Court Review – sampling some of the literature on “panel effects.”

Of course, virtually all appellate decision making takes place in panels of judges – sometimes three, occasionally five, frequently seven or, at the U.S. Supreme Court (or a Circuit sitting en banc), nine or more.  It’s easy to fall into the trap of assessing an appellate panel by treating it like a political focus group.  For example, say I’m representing a defendant and learn that my panel consists of two Republican nominees and one Democratic nominee; I might think that’s a reasonably favorable panel.  But the implicit assumption built into that statement is that none of the judges’ votes will be impacted by any of the other judges.

Although most political votes are cast by voters who don’t know each other or care about others’ opinions, that’s not always true.  Studies have shown that if you put a Republican, for example, in a room full of Republicans, his or her views will drift further right than they would otherwise have been.  Put the Republican in a room surrounded by Democrats, and the opposite effect is observed – the Republican drifts more moderate.  Repeat the experiment with a Democratic voter, and the effect flips – more conservative in a room full of Republicans, more liberal with other Democrats.

Appellate panels, of course, are quite different.  Often (with the exception of the Ninth Circuit), the judges know each other well, and may have been working together for years.  They presumably have a shared commitment to something they think of as the “law of the Circuit/state/court.”  Many believe that unanimity has an intrinsic value in reinforcing the moral authority of their court.  Their vote isn’t secret – they’ll have to look a colleague in the eye, at least figuratively speaking, and tell him or her why he or she is wrong.  And if they don’t succeed in convincing their colleague, they’ll have to write an opinion explaining how thoroughly wrong their colleague is – knowing that it’s going to be preserved in thick hardbound books in the library and on the computer screens of any lawyer in the country who wants to see it, for pretty much all eternity.

One can imagine how that might temper a judge’s enthusiasm for constant dissents.

So that’s where the literature of “panel effects” comes in.  How much difference does the composition of your panel really make?  And if you’re trying to predict the vote of Judge A, do you only want to know his or her philosophy – or are the leanings of the other judges on the panel a powerful predictor of Judge A’s vote?  We’ll begin our review next time.

Image courtesy of Flickr by A Syn (no changes).