Kenneth A. Bamberger and Deirdre K. Mulligan (University of California, Berkeley) have posted Recentering Public Values In AI Governance: Examples From The Biden Administration, published in the Berkeley Technology Law Journal, Volume 40, No. 4, pp. 1135-1183 (2026), on SSRN. Here is the abstract:
This Article situates key Biden-Harris Administration AI initiatives within a “governance-by-design” framework—an approach we previously developed that centers public values, sectoral expertise, and participatory policymaking in decisions to regulate through technology. Governance-by-design argues for reorienting AI governance around three core principles: (1) privileging human and public rights by empowering domain-specific agencies while building a shared set of tools and approaches for risk assessment; (2) expanding agencies’ technical expertise through public hiring and multisector collaboration; and (3) preserving the publicness of policymaking through designs that foreground embedded values and embed stakeholder engagement and impact evaluation throughout AI system development and deployment. The Article uses three examples to illustrate how key Biden-Harris Administration AI actions reflect these governance-by-design principles: the Administration’s layered regulatory strategy that empowers sectoral agencies to safeguard human rights and public safety in AI use; the expansion of AI and rights-based expertise within government and the establishment of collaborative structures for risk management and evaluation; and the institutionalization of practices that surface and interrogate the normative assumptions embedded in AI systems, while scaffolding public participation throughout their lifecycle. We argue that together these initiatives offer an alternative to prevailing AI governance debates—particularly the dichotomy between risk-based and rights-based approaches, and the call for a centralized AI regulator. Instead, governance-by-design provides a field-centric model that leverages existing institutional capacities, protects democratic norms, and re-centers the public in the often-private domain of AI development. It offers a durable, epistemically responsible framework for regulating AI systems in a way that supports both human rights and legitimate democratic governance.
Highly recommended.
Lawrence Solum
