Startari on AI Governance and Human Authority

Agustín V. Startari (Universidad de la República; Universidad de la Empresa (UDE); Universidad de Palermo) has posted AI, Tell Me Your Protocol: The Intersection of Technology and Humanity in the Era of Big Data on SSRN. Here is the abstract:

This work critically examines the intersection between artificial intelligence and human authority in the context of algorithmic decision-making and data-driven governance. It proposes the concept of “synthetic authority” to describe how AI systems, particularly large language models, construct legitimacy through impersonal discursive structures.

The book explores how linguistic forms embedded in algorithms affect perception, trust, and compliance, challenging traditional notions of subjectivity, authorship, and epistemic accountability. Drawing from political theory, digital ethics, and critical linguistics, the author investigates how the automation of language contributes to the erosion of agency and the normalization of machine-led legitimacy.

By combining theoretical insights with concrete examples, this study offers a cross-disciplinary contribution to the debates on AI governance, algorithmic transparency, and the future of human oversight in socio-technical systems.

And by the same author: Artificial Intelligence and Synthetic Authority: An Impersonal Grammar of Power. Here is the abstract:

This article offers a critical reading of the power exerted by artificial intelligence systems, from both linguistic and historical perspectives. Introducing the concept of synthetic authority, it examines how algorithmic technologies legitimize themselves through impersonal grammars, that is, discursive structures that erase subjectivity and naturalize obedience. A genealogical approach connects this authority with earlier historical devices (the Church, modern science, and bureaucratic rationality), showing that AI represents a technical culmination of impersonal power. Tools from discourse analysis and critical theory are applied to reveal how the linguistic form of AI-generated statements reinforces perceived legitimacy while effacing human agency. This study is part of a broader research project developed in the unpublished manuscript Grammars of Power.

Highly recommended.