Examining the Efficiency of the U.S. Courts of Appeals: Pathologies and Prescriptions. Robert K. Christensen and John Szmer. May 2011.
IEL PAPER IN COMPARATIVE ANALYSIS OF
INSTITUTIONS, ECONOMICS AND LAW NO. 4
Examining the Efficiency of the U.S.
Courts of Appeals:
Pathologies and Prescriptions
Robert K. Christensen and John Szmer
This paper can be downloaded without charge at the IEL Programme – Institutions
Economics and Law Working Paper Series
http://www.iel.carloalberto.org/Research-and-Pubblication/Working-papers.aspx or http://polis.unipmn.it/index.php?cosa=ricerca,iel
Examining the Efficiency of the U.S. Courts of Appeals:
Pathologies and Prescriptions

Robert K. Christensen1, University of Georgia, email@example.com
John Szmer, University of North Carolina at Charlotte, firstname.lastname@example.org

Abstract: Until recently (e.g., Lindquist 2007), few studies have examined the factors that might affect aspects of judicial efficiency, including the time it takes a court to decide a case. In our analysis of a sample of U.S. Courts of Appeals decisions from 1971-1996, we examine a variety of potential causes of inefficiency, or pathologies, before suggesting a series of prescriptions.
1 Both authors contributed equally to this manuscript. The authors would like to thank Reese Manceaux for his assistance in merging a variety of seemingly incompatible databases, as well as Nicole Arnold for her assistance in collecting data.
Judicial Pathologies and Prescriptions

Over the last several decades, many legal professionals (e.g., 109th Congress, 2006; ABA, 1978; ABF, 1968) and scholars (e.g., Cohen, 2002; Hettinger, Lindquist, & Martinek, 2006; Posner, 1996; Rubin, 2007) have examined the question of judicial performance and efficiency. Most of the studies note the ‘pathologies’2 of the judicial process that lead to inefficiency and diminished quality of decisions. A subset of these studies either suggests or critiques alternative prescriptions that might enhance efficiency (e.g., Baker 2008; Binder & Maltzman, 2009; Cecil, 1985; Lindquist, 2007). Nevertheless, very few studies have tried to more comprehensively and empirically identify the pathologies, or root causes of inefficiency, while simultaneously testing the efficacy of the various prescriptions. We take up a part of that effort here at the U.S. federal appellate level, with a focus on efficiency in terms of disposition time—our dependent variable.
To more concisely and directly engage the theoretical and empirical nature of judicial pathologies and prescriptions, we discuss our data measures in the same sections in which we formulate hypotheses. We begin with a general discussion of our data set. We then underscore some of the more important works on judicial efficiency and describe how we have operationalized this as the dependent variable in our study. We then review the pathologies of judicial decision-making in terms of efficiency and suggest several hypotheses and prescriptions.
These facets of these pathologies constitute our independent variables of interest. We then detail several important control variables. Next we present our analysis with a discussion of findings.
We conclude with directions for future research and implications.
2 The earliest reference we found to this term’s use is Gough’s article (1955-1956), discussing, coincidentally, ‘swift justice’—our primary dependent variable in this study.
To explore factors impacting judicial efficiency at the case level, we examine a sample of reported U.S. Courts of Appeals cases (from the twelve geographically divided circuits) decided during calendar years 1971-1996.3 The thirteen courts of appeals are the national intermediate appellate courts. Twelve of the courts cover distinct geographic areas, or circuits (the thirteenth, the Federal Circuit Court, is a national appellate court that hears cases involving a limited number of substantive issues). The judges are appointed by the president with the advice and consent of the Senate. They typically hear mandatory appeals of questions of law arising from cases decided by the lower federal trial courts (the U.S. District Courts).
The case data were taken from the U.S. Courts of Appeals Database (Songer 1996), which contains a sample of thirty cases per circuit, per year during that period, and the Federal Judicial Center’s (FJC) Federal Court Cases: Integrated Data Base, Appellate Terminations, 1970-2000.4 By focusing on reported cases we are, in effect, controlling for the impact of publication. We also controlled for panel size by eliminating en banc cases. Moreover, we chose to utilize the Songer database because it contained the identities of the judges, and could be linked to the Multi-User Database on the Attributes of United States Appeals Court Judges, 1801-2004 (Gryski, Barrow, and Zuk 2004) and the A Multi-User Database on the Attributes of …

3 We did not include cases beyond 1996 because the FJC data did not contain the information necessary to allow us to construct the efficiency dependent variable. Moreover, we could not consistently find the necessary information using WestLaw or Lexis. More than one third of the cases did not contain any information regarding the date the last brief was filed (the starting point for our dependent variable), and the missing data was systematic (in particular, it occurred more often in certain circuits and it occurred more frequently in later years).
4 Of the 8,588 cases, we analyzed 7,616. Most of the excluded cases were dropped because of missing data on one or more variables. We also excluded five outliers with unusually high values of the dependent variable—the time it took the panel to decide the case. Specifically, case processing times in excess of five years were considered outliers possibly resulting from keystroke error. The results of the hypothesis tests did not change when we included the bankruptcy cases and the outliers (or used different cut points to determine the outliers).
Most judicial decision-making studies focus on the nature of the decision, rather than the time spent making it. Most discussions of judicial efficiency either point out the need for court reform, resulting from perceived inefficiency, or analyze the logical implications of various types of reform—without empirically testing these implications (e.g., Posner, 1983; Richman & Reynolds, 1988). A significant line of empirical research has analyzed some of the implications of one particular type of reform: the increased use of unpublished decisions by the U.S. Courts of Appeals. These typically either focus on explaining the court’s decision to publish (Merritt & Brudney, 2001) or they compare the characteristics of unpublished and published cases (Songer, 1990). However, these studies almost always ignore the impact that this practice would have on judicial efficiency. Only a handful of studies test whether the reforms actually enhance efficiency (e.g., Beenstock & Haitovsky, 2004; Binford, Greene, Schmidlkofer, Wilsey, & Taylor, 2007).
Lindquist (2007) and Cauthen and Latzer (2008) have recently provided some of the more comprehensive studies of judicial efficiency. Lindquist’s (2007) study of the U.S. Courts of Appeals examined the impact of several aggregate, circuit-level characteristics (e.g., number of judges, as well as use of oral arguments, publication, and judges sitting by designation).
Lindquist’s (2007) study, while path-breaking, is limited in its analysis to aggregate level efficiency. Cauthen and Latzer’s (2008) study examines case-level capital appeals. They find relationships between opinion length and processing times, treatment of the lower court, dissensus, and ideological diversity. Their study, however, is limited to state court decisions in a fairly narrow, albeit important area of the law.
For example, in their study of capital appeals, Cauthen and Latzer (2008) cite three reasons why delay matters. Long processing times (1) compromise public confidence in the justice system, (2) dilute a sentence’s deterrent effect, and (3) can be grounds for further litigation. In general, however, the logic follows the notion of constitutional due process and the adage that swift justice is fair justice.
Disposition Time, a measure of judicial efficiency, or more precisely, judicial inefficiency, is measured as the number of days it takes the panel to decide a case after the parties have submitted all of the written briefs to the court.5 Higher values indicate increasing delay, or deliberation time.6 The date the last brief was submitted was obtained from the FJC.
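As a concrete illustration, the dependent variable reduces to simple date arithmetic. The sketch below uses hypothetical dates, not actual case records; the paper’s variable is constructed from FJC data:

```python
from datetime import date

def disposition_time(last_brief_filed: date, decision_date: date) -> int:
    """Days from the filing of the last brief to the panel's decision.

    Higher values indicate greater delay. Illustrative sketch only; the
    study builds this measure from FJC records, not from this function.
    """
    return (decision_date - last_brief_filed).days

# Hypothetical case: last brief filed March 1, 1990; decided August 28, 1990.
print(disposition_time(date(1990, 3, 1), date(1990, 8, 28)))  # 180
```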
If pathologies are the root causes of judicial inefficiency, what factors precipitate these pathologies? Work by Cohen (2002) suggests that increasing workloads lead to the bureaucratization of courts. Bureaucratization, in turn, impacts factors like the use of support staff and judge collegiality, which are thought to influence process and outcomes (Eastman, 2006; Lindquist, 2007). Increased workloads may stem from the combination of reluctance to appoint more judges (Lindquist, 2007) and an increasing cultural reliance on the adversarial system to resolve disputes (Kagan, 2001).
5 Alternatively, we could have used the number of days from the oral argument, as opposed to the submission of the brief. In some ways, this is a more valid measure, since we are interested in several panel-level independent variables, and the panel really has no impact on the disposition speed until after oral arguments. However, approximately fourteen percent of the cases are decided without oral arguments. Obviously, if we had used the oral argument date, we would have had to exclude those cases.
The selection bias problem would be magnified by the systematic variance of the use of oral arguments across circuits and over time. Given that the disposition time calculated using the date from oral argument is highly correlated with the measure using the date from brief submission (approximately 0.70), we think the latter is a valid surrogate that enables us to include the non-orally argued cases.
6 While others have made this same assumption (e.g., Baker 2008; Lindquist 2007), we do recognize that this is an oversimplification. From a broader cost-benefit perspective of efficiency, the fastest opinions are potentially less efficient, in that speed could be a function of a poor decision, or a poorly articulated justification for the decision.
To frame our empirical research hypotheses, we discuss four main areas of judicial pathologies. We categorize these pathologies into factors broadly related to diversity, burnout, expertise, and institutional mechanisms. We discuss each in turn.
Diversity. Broadly rooted in the notion that collegiality leads to faster decision making, we explore whether panel diversity impacts case disposition times. We include three measures of diversity: ideological, tenure, and law school quality. Prior theory suggests that diversity leads to conflict (reduced collegiality), which results in less efficient panels (Pelled, Eisenhardt, and Xin 1999). Ideological diversity increases dissensus (Boyea 2007; Hettinger et al. 2006), which then leads to longer disposition times (Cauthen and Latzer 2008; Lindquist 2007, 692).
We hypothesize accordingly:
H1. Panel ideological diversity leads to longer disposition times.
We estimated Ideological Diversity as the absolute value of the difference between the ideology scores of the most liberal and most conservative panelists. To estimate each panelist’s ideology, we utilized the widely employed Giles-Hettinger-Peppers (2001) scores, which use the Poole and Rosenthal common-space NOMINATE scores of the appointing president and the judge’s home-state senators from the president’s party (which can be found at www.voteview.org). Specifically, if no home-state senators are from the president’s party, or the judge sits on the D.C. court, the president’s value is used; if one senator is from the president’s party, that senator’s score is used; if two senators are from the president’s party, the average of the two senators’ scores is used.
This is one of the standard methods of operationalizing U.S. appeals court judge ideology (for examples, see Clark 2009; Hettinger, Lindquist, and Martinek 2006; Kaheny, Haire, and Benesh 2008).
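The assignment rules above can be sketched as a short function. The NOMINATE values below are hypothetical, chosen only to illustrate the three cases (no same-party home-state senators, one, and two):

```python
def ghp_score(president: float, same_party_senators: list[float],
              dc_circuit: bool = False) -> float:
    """Giles-Hettinger-Peppers score for one judge (illustrative sketch).

    `president` is the appointing president's common-space NOMINATE score;
    `same_party_senators` holds the scores of the judge's home-state
    senators from the president's party (zero, one, or two of them).
    D.C. Circuit judges have no home-state senators, so the president's
    score is used.
    """
    if dc_circuit or not same_party_senators:
        return president
    return sum(same_party_senators) / len(same_party_senators)

def ideological_diversity(panel_scores: list[float]) -> float:
    """Absolute distance between the most liberal and most conservative panelists."""
    return abs(max(panel_scores) - min(panel_scores))

# Hypothetical three-judge panel:
panel = [
    ghp_score(0.45, []),                 # no same-party home-state senators
    ghp_score(0.45, [0.30]),             # one same-party senator
    ghp_score(-0.40, [-0.35, -0.25]),    # two same-party senators
]
print(round(ideological_diversity(panel), 2))  # 0.75
```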
We expect that panel tenure diversity (i.e., the mixture of seasoned and less experienced judges) will lead to diminished feelings of collegiality. We therefore hypothesize that:
H2. Panel tenure diversity will lead to diminished efficiency.
Following the approach utilized by Pelled, Eisenhardt, and Xin (1999) for continuous measures, we estimated Panel Tenure Diversity using the coefficient of variation (the standard deviation divided by the mean) to capture the variation of appointment dates between judges on a panel. Larger coefficients indicate greater diversity.
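The coefficient-of-variation measure can be sketched as follows. The tenure values are hypothetical, and we assume the sample standard deviation (the source does not specify sample versus population):

```python
import statistics

def tenure_diversity(tenures_in_years: list[float]) -> float:
    """Coefficient of variation (sample std. dev. / mean) of panel tenure.

    Illustrative sketch following Pelled, Eisenhardt, and Xin (1999);
    larger values indicate greater tenure diversity on the panel.
    """
    return statistics.stdev(tenures_in_years) / statistics.mean(tenures_in_years)

# Hypothetical panels: nearly homogeneous tenure versus a wide mixture.
homogeneous = tenure_diversity([10, 11, 12])   # small coefficient
mixed = tenure_diversity([1, 10, 25])          # larger coefficient
print(round(homogeneous, 3), round(mixed, 3))
```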
Panel Law School Quality Diversity continues the theme that differences lead to conflict and thus inefficiency. Drawing upon early work on prestigious education and stratification (Collins, 1971), and work that suggests that even among Ivy League schools there is
stratification (Kingston & Lewis, 1990), we hypothesize that:
H3. Panel law school quality diversity will lead to inefficiency.
Panel Law School Quality Diversity is a binary variable that equals '1' when there is some mixture of judges who attended elite7 (Slotnick, 1983) versus non-elite law schools. It is coded '0' if all of the panelists attended the same type of school. We also included a Panel Law School Quality variable that is the number of judges on the panel that attended an elite law school.
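Both law-school variables follow directly from a list of per-judge elite/non-elite indicators. A minimal sketch, with hypothetical panels:

```python
def law_school_measures(attended_elite: list[bool]) -> tuple[int, int]:
    """Return (diversity, quality) codings for a panel (illustrative sketch).

    `diversity` equals 1 when the panel mixes graduates of elite and
    non-elite law schools, and 0 when all panelists attended the same
    type of school; `quality` is the count of panelists who attended an
    elite law school.
    """
    diversity = int(any(attended_elite) and not all(attended_elite))
    quality = sum(attended_elite)
    return diversity, quality

# Hypothetical panels:
print(law_school_measures([True, True, False]))    # (1, 2): mixed panel
print(law_school_measures([False, False, False]))  # (0, 0): no elite graduates
```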