Aaron L. Nielson (University of Texas at Austin – School of Law) has posted Aggregating Original Meaning, forthcoming in the Columbia Law Review, vol. 127, on SSRN. Here is the abstract:
Public-meaning originalism—the theory that the meaning of legal texts, including the U.S. Constitution, is fixed when authoritatively adopted and binding until authoritatively changed—is ascendant. Although scholars debate this theory, a majority of U.S. Supreme Court Justices describe themselves as originalists and advocates increasingly advance arguments sounding in public-meaning originalism. And even jurists who are not originalists often recognize history’s relevance. The Court, however, has overlooked a conceptual puzzle—and so has everyone else: What should originalists do when deciding a case with multiple steps, each requiring its own originalist analysis? For example, courts routinely decide what a constitutional provision means and whether it has been incorporated against the States. Courts also determine whether a right has been violated and what the remedy should be. And courts often decide whether the Constitution creates a power and how to apply it to “edge” cases. Because finding original-public meaning is difficult, however, judges cannot be certain they have resolved each step correctly. Nonetheless, after resolving one step, they proceed to the next step as if resolution of the first was certain. Such inattention to possible error runs headlong into the mathematical concept of aggregation, which requires using the product rule or conditional probability when assessing the likelihood that multiple claims are true. This Article introduces aggregation into the originalist literature and demonstrates how the premises of public-meaning originalism support accounting for aggregate uncertainty. 
Yet although aggregation may help judges reach decisions more consistent with original meaning, its disruptiveness also heightens the significance of many other concepts and debates, such as what it means to “prove” the law, the strength of stare decisis, the scope of construction, the validity of liquidation and tradition, the legitimacy of compensating adjustments to offset non-originalist precedent, and the value of corpus linguistics. Aggregation, however, should be more than just disruptive. Rather, it may add at least some additional stability to the law by shoring up familiar features of constitutional doctrines that originalists thus far have struggled to defend, including the presumption of constitutionality, clear-statement rules, and hybrid rights.
Fascinating and recommended.
This article uses the term “aggregation” to discuss, in the context of originalism, a familiar problem from evidence theory: the conjunction problem. The conjunction problem is the observation that the joint probability of two independent claims, each of which satisfies the preponderance standard, may fall below that standard when the claims are combined. The problem has received extensive attention in evidence scholarship, and Nielson’s proposal to extend that analysis to multistep originalist reasoning is a new move.
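The arithmetic behind the conjunction problem is simple enough to sketch. Here is a minimal illustration of the product rule for independent steps, using hypothetical probabilities chosen for exposition (they do not come from the article): even when each step of a multistep analysis is more likely right than wrong, the chain as a whole can be more likely wrong than right.

```python
from math import prod

# Hypothetical probabilities that each independent step of a
# multistep analysis was resolved correctly (illustrative values only),
# e.g. meaning, incorporation, and remedy.
step_probs = [0.7, 0.7, 0.7]

# Product rule: the probability that every independent step is
# correct is the product of the individual probabilities.
joint = prod(step_probs)

print(all(p > 0.5 for p in step_probs))  # each step clears preponderance
print(round(joint, 3))                   # 0.343: the chain does not
```

Each step individually satisfies a more-likely-than-not standard, yet the joint probability that all three are correct is about 0.343, well below 0.5.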
What the paper does not yet do is engage with inferentialism as a theoretical response to the conjunction problem. Inferentialism, associated with Ronald J. Allen and Michael S. Pardo, argues that legal reasoning is best modeled as inference to the best explanation rather than as Bayesian probability assignment. The paper cites Pardo & Allen’s work on juridical proof and the best explanation, but treats it as one pragmatic workaround among several rather than confronting it as a fundamental challenge to the Bayesian premises on which the paper’s argument rests. On the inferentialist account, the conjunction problem is an artifact of Bayesian formalization, not a defect in legal reasoning as such; if anything, it exposes a deep problem with Bayesian approaches to juridical proof.
There is also a generalization problem: if aggregation afflicts originalism wherever conclusions depend on multiple independent steps, a parallel argument applies to every other constitutional theory and indeed to all complex legal reasoning, which makes the issue less distinctive to originalism than the article implies. And if the argument generalizes that far, the better conclusion may be that the conjunction problem is no problem at all as applied to multistep legal reasoning. I look forward to a revised version of the paper.
If you are interested in the conjunction problem, see Michael S. Pardo, The Paradoxes of Legal Proof: A Critical Guide, 99 B.U. L. Rev. 233 (2019), and Ronald J. Allen, Factual Ambiguity and a Theory of Evidence, 88 Nw. U. L. Rev. 604 (1994). On inferentialism specifically, see Michael S. Pardo & Ronald J. Allen, Juridical Proof and the Best Explanation, 27 Law & Phil. 223 (2008). On inference to the best explanation, see the Legal Theory Lexicon entry on Inference to the Best Explanation (Abduction).
Lawrence Solum
