With this post, we begin our short review of some of the academic literature regarding panel effects.

Of course, the first question one encounters when taking a case before an appellate court is how one’s panel will be chosen.  A majority of appellate courts state, either in their operating procedures or their rules, that appellate panels for argued cases are assigned randomly.  This assumption has been built into most of the analytics work done on panel effects for at least a generation.

But is it really true?  I have spoken with lawyers in several federal circuits who are dubious about just how random assignments are.

Let’s begin by making a few things thoroughly clear: no one is suggesting that panels are intentionally chosen to manipulate results in particular cases (although disappointed litigants have made that claim once in a while).  Further, reasonable people could dispute just how beneficial random assignments are.  There are any number of good reasons to deviate from strictly random assignments in the federal circuits: maximizing the availability of senior-status judges; respecting the vacation plans or speaking or writing commitments of judges; ensuring that a particular judge doesn’t draw several panel assignments in a short period, or go lengthy periods without an assignment; ensuring that particular judges sit with a variety of their colleagues, rather than sitting repeatedly with one or two other judges; accounting for recusals; or sending a case which was previously ruled upon by a particular panel back to the same judges following remand.  The list goes on.

But even if completely random assignments aren’t necessarily a reasonable goal, the question remains: how are appellate panels chosen?

In 2015, Professors Marin K. Levy and Adam S. Chilton published Challenging the Randomness of Panel Assignment in the Federal Courts of Appeals, 101 Cornell L. Rev. 1 (2015).  The authors gathered panel information for all twelve regional circuits between September 2008 and August 2013.  Collectively, the dataset covered the activities of 775 judges and over 10,000 panels.  The professors then wrote a program to simulate the choice of over one billion entirely random panels.  They compared their dataset of randomly simulated panels to the “real world” data by counting the incidence of an objective characteristic in both datasets – how many panels included appointees of Republican Presidents.  Accordingly, they developed detailed data on how common panels of zero, one, two and three Republican nominees were, and then calculated whether the actual docket results fell reasonably close to that random distribution.

The professors reported that their statistical tests showed evidence that panel assignments deviated from a strictly random result in four circuits: the D.C. Circuit, the Second Circuit, the Eighth Circuit and the Ninth Circuit.  They then tested their results for robustness and calculated that the probability of all their results being solely due to chance was less than 3%.  The data from the remaining eight Circuits fell reasonably close to the fully random distribution – although given the reasons discussed above why complete randomness may not be realistic or beneficial, there is reason to wonder just how robust that result is.

Two years later, Professor Levy published a follow-up article, Panel Assignment in the Federal Courts of Appeals, 103 Cornell L. Rev. 65 (2017).  There, she discussed her interviews about panel assignment practices with thirty-five judges and senior administrators.  She reported that no two courts approached panel assignment in the same way and argued that it was far from clear that the benefits of random assignments outweighed the drawbacks.
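For readers curious about the mechanics, the basic approach described above can be sketched in a few lines of code.  This is a minimal illustration, not the professors’ actual program or data: the court size, party split, and “observed” docket counts below are all hypothetical, and the comparison uses a simple Pearson chi-square statistic rather than whatever battery of tests the authors ran.

```python
import random
from collections import Counter

# Hypothetical circuit: 12 active judges, 7 appointed by Republican
# Presidents.  These numbers are illustrative only.
judges = ["R"] * 7 + ["D"] * 5

def simulate_panels(n_panels, rng):
    """Draw n_panels random 3-judge panels and tally how many
    Republican appointees each panel contains (0 through 3)."""
    counts = Counter()
    for _ in range(n_panels):
        panel = rng.sample(judges, 3)  # draw without replacement
        counts[panel.count("R")] += 1
    return counts

rng = random.Random(0)
n = 100_000
simulated = simulate_panels(n, rng)

# Hypothetical "observed" counts from a real docket, by number of
# Republican appointees on the panel.
observed = {0: 40, 1: 320, 2: 480, 3: 160}
total_obs = sum(observed.values())

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected counts come from the simulated random distribution.
chi_sq = 0.0
for k in range(4):
    expected = total_obs * simulated[k] / n
    chi_sq += (observed[k] - expected) ** 2 / expected

print(f"chi-square statistic: {chi_sq:.2f}")
```

A large chi-square statistic relative to its reference distribution would suggest the observed docket deviates from what random assignment would produce – which is, in rough outline, the kind of comparison the article describes.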

Next time we’ll continue our discussion of the literature on panel effects over at the California Supreme Court Review.

Image courtesy of Pixabay by Piro4D (no changes).