Ruoxi Li, Sirui Han, & Yi-Ke Guo (Hong Kong University of Science & Technology) have posted Simulated Justice: How AI Alignment Replaces Conflict with Coherence on SSRN. Here is the abstract:
What if the danger of legal AI is not that it misjudges, but that it sounds perfectly judicious? As legal value alignment becomes a central goal in the design of large language models (LLMs), these systems increasingly produce outputs that appear lawful, balanced, and procedurally sound. But behind their rhetorical fluency lies a powerful illusion: the simulation of legal judgment without its institutional or political substance. This paper critiques alignment not as a failure, but as a success too well executed, one that replaces legal disagreement with semantic symmetry, and recasts justice as an output format. Drawing on jurisprudential theory and discourse analysis, we show how AI answers flatten normative conflict into technocratic equilibrium, offering “reasonable” positions that obscure power, silence dissent, and domesticate law’s agonistic core. Through examples such as hate speech, surrogacy, and criminal responsibility, we expose how alignment displaces the burden of choice with the comfort of coherence. In response, the paper proposes an anti-alignment jurisprudence, one that values contestation over calibration, and treats law not as a language model’s task, but as a political act that must remain interruptible. In doing so, the paper cautions against the growing role of generative AI as a source of normative authority, not because it gets law wrong, but because it sounds too much like it gets it right.
Very interesting and recommended.
