Gabriel Weil (Touro University – Jacob D. Fuchsberg Law Center) has posted Abnormally Dangerous Algorithms: The Case for Strict Liability at the AI Frontier on SSRN. Here is the abstract:
As advanced AI systems gain autonomy, they will increasingly cause real-world harms to nonusers: people who neither deployed the system nor chose to interact with it. Existing tort categories do not cleanly allocate these losses, because many serious AI harms arise from residual, hard-to-eliminate risks rather than from readily provable negligence. This Article argues that severe third-party harms from alignment failure are a core case for strict liability. Victims do not consent to the risk; the risk may remain highly significant even with reasonable care; and strict liability better calibrates activity levels and loss-spreading while preserving incentives for safety investment. Doctrinally, the Article develops two complementary paths for courts to reach that result without recognizing AI legal personhood or presupposing new legislation: (1) treating the training and deployment of sufficiently capable systems as an abnormally dangerous activity when catastrophic residual risks persist despite reasonable care; and (2) extending vicarious responsibility doctrines to situations where AI agents act with sufficient autonomy. The Article concludes by exploring implications for AI governance and the allocation of AI-related risks in society.
Highly recommended!
Download it while it’s hot!
Lawrence Solum
