Agustin on Relational Integrity in AI

Matthew Agustin (Responsible Innovation Lab) has posted "Relational Integrity in AI: Preserving Human Agency, Accountability, and Meaning Under Pressure" on SSRN. Here is the abstract:

As AI systems increasingly operate in language-mediated and relational contexts, many of the risks they introduce do not arise from discrete failures, misuse, or malicious intent. Instead, harm often emerges through gradual shifts in how systems are interpreted, relied upon, and positioned within human judgment and institutional practice. These shifts are frequently well-intentioned, incentive-aligned, and normalized through ordinary use, allowing role clarity, accountability, and relational boundaries to erode even when systems appear effective. The Relational Integrity in AI Framework (RIAF) offers a diagnostic approach to these dynamics. Rather than treating relational harm as a matter of compliance, output control, or user error, RIAF conceptualizes it as a structural phenomenon shaped by cumulative interaction, interpretive signals, and institutional embedding. The framework introduces relational drift as a precursor to stabilized structural failure, identifies a canonical set of relational integrity failure modes, and argues that sustained protection depends on preserving specific human capacities rather than enforcing static rules or prescriptions. RIAF is not an implementation guide or ethical checklist. It is a conditioning framework designed to support reflection, pressure-testing of relational integrity, and responsible judgment in contexts where AI systems increasingly mediate meaning, authority, care, and decision-making.