Cullen O’Keefe (Institute for Law & AI; Centre for the Governance of AI) & Kevin Frazier (The University of Texas School of Law) have posted Automated Compliance and the Regulation of AI on SSRN. Here is the abstract:
Regulation imposes compliance costs on regulated parties. Thus, policy discourse often posits a trade-off between risk reduction and innovation. Without denying this trade-off outright, this paper complicates it by observing that, under plausible forecasts of AI progress, future AI systems will be able to perform many compliance tasks cheaply and autonomously. We call this automated compliance. While automated compliance has important implications in many regulatory domains, it is especially important in the ongoing debate about the optimal timing and content of regulations targeting AI itself. Policymakers sometimes face a trade-off in AI policy between regulating too soon or too strictly (and thereby stifling innovation and national competitiveness) and regulating too late or too leniently (and thereby risking preventable harms). Under plausible assumptions, automated compliance loosens this trade-off: AI progress itself hedges the costs of AI regulation. Automated compliance implies that, for example, policymakers could reduce the risk of premature regulation by enacting regulations that become effective only when AI is capable of largely automating compliance with them. This regulatory approach would also mitigate concerns that regulations may unduly benefit larger firms, which can bear compliance costs more easily than startups can. While regulations can remain costly even after many compliance tasks have become automated, we hope that the concept of automated compliance can enable a more multidimensional and dynamic discourse around the optimal content and timing of AI risk regulation.
