Zachary Catanzaro (Widener University – Delaware Law School) has posted The Dead Law Theory: The Perils of Simulated Interpretation on SSRN. Here is the abstract:
Judges now consult ChatGPT about what statutes mean. The scholarly response treats this as a reliability problem. Reliability is beside the point. LLMs generate text by predicting probable token sequences, manipulating symbols without accessing what those symbols mean. But syntax cannot generate semantics. Computational legal interpretation does not fail because the technology is immature. It fails because it is a category error. A theory that fixes meaning in historical usage and treats interpretation as empirical recovery cannot resist algorithms that measure historical usage patterns. The progression from dictionaries to corpus databases to generative models follows originalism’s empirical commitments to their logical end. AI-generated content saturates the corpora on which future models train, and the resulting degradation eliminates marginal claims first: those upon which life and liberty depend. Computational methods did not contaminate originalist interpretation. Originalism was already a jurisprudence that simulated meaning while discarding the semantic content that interpretation requires. The machines simply made the method hyperreal.
A trenchant critique of originalism. Many ideas worth considering. Recommended.
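The "probable token sequences" claim at the heart of the abstract can be made concrete. What follows is a deliberately minimal sketch, a toy bigram model rather than anything resembling a production LLM (which uses learned neural networks over subword tokens), but it shows generation as pure symbol statistics, with no access to what the symbols mean:

```python
# Toy illustration of next-token prediction: a bigram model that
# emits the most frequent successor of the current token. Generation
# here is pure frequency counting over symbols; nothing in the model
# represents meaning.
from collections import Counter, defaultdict


def train_bigrams(corpus: list[str]) -> dict[str, Counter]:
    """Count, for each token, how often each token follows it."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts


def generate(counts: dict[str, Counter], start: str, n: int) -> list[str]:
    """Greedily extend `start` by n tokens, picking the most probable
    successor at each step; stop early if a token has no successors."""
    out = [start]
    for _ in range(n):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return out


tokens = "the court held that the statute means what the words meant".split()
model = train_bigrams(tokens)
print(generate(model, "court", 3))  # → ['court', 'held', 'that', 'the']
```

The toy model happily produces fluent-looking legal prose from usage frequencies alone, which is precisely the gap between syntax and semantics the paper presses on.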
