Reuben Sass (United States Naval Academy), Claudio Novelli (Yale University – Digital Ethics Center), & Enrico Zio (Polytechnic University of Milan; Aramis S.r.l.; PSL Research University – INSERM U932 – Immunity and Cancer) have posted Quantifying Values: The Problem of AI Risk on SSRN. Here is the abstract:
Recent AI regulations require deployers of high-risk systems to assess impacts on values like fundamental rights and other legally protected interests. However, existing practice, most notably Fundamental Rights Impact Assessments (FRIAs) under the EU AI Act, remains mostly limited to qualitative guidelines. As a result, risk evaluation can be inconsistent and highly variable across assessments. To address this issue, we propose a reference model for value-impact assessment that specifies the core components that any more detailed formalization should incorporate. The model builds on a classical risk paradigm, integrating distinct measures for hazard, exposure, vulnerability, and response or mitigation. We use the EU AI Act’s FRIA as a reference case for applying the model to AI risk assessments. The model provides three especially salient recommendations. First, it rejects purely ordinal risk matrices in favor of geometric or ratio-scaled severity ladders, enabling the coherent calculation of expected risk. Second, in analyzing risk mitigation measures, it requires separate estimates of efficacy and reliability, with explicit success criteria and scope conditions that allow for dynamic updating. Third, the model operationalizes regulatory precaution, or risk appetite, through the integration of tail-sensitive risk metrics (e.g., Conditional Value-at-Risk). This lets regulators give appropriate weight to low-probability, high-impact (or catastrophic) harms to fundamental rights without distorting ordinary expected-risk calculations. The result is a structured foundation for more consistent value-impact assessments.
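For readers unfamiliar with the tail-sensitive metric the abstract mentions, Conditional Value-at-Risk (CVaR) at level alpha is simply the mean loss in the worst (1 - alpha) fraction of outcomes. The sketch below is illustrative only, with hypothetical severity scores rather than anything from the paper; it shows how a rare catastrophic outcome that barely moves the expected risk dominates the CVaR.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)   # Value-at-Risk cutoff at level alpha
    return losses[losses >= var].mean()  # average of the tail beyond the cutoff

# Hypothetical ratio-scaled severity scores from repeated assessments;
# one catastrophic outcome (40) sits in the tail.
sampled_losses = np.array([1, 1, 2, 2, 3, 3, 4, 5, 8, 40], dtype=float)
print(sampled_losses.mean())       # expected risk: 6.9
print(cvar(sampled_losses, 0.9))   # tail-sensitive risk: 40.0
```

The expected risk (6.9) is modest, but the 90% CVaR (40.0) isolates the catastrophic case, which is the kind of precautionary weighting the authors argue regulators need.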
Highly recommended.
