Nicolin Decker has posted The Continuity vs. Conscience Doctrine (CVC): The Mappability Boundary in Artificial Systems and Human Rights on SSRN. Here is the abstract:
This paper articulates an origin-level classification doctrine within a broader, integrated canon of constitutional and governance analysis addressing artificial systems and human authority. Together with The Artificial Conscious Agency Doctrine (ACAD): A Constitutional, International, and Moral Framework for Synthetic Intelligence in the Post-Semiconductor Era, The Doctrine of Moral Closure in Artificial Systems: The Continuity Paradox, and The Doctrine of Force Multiplication Without Formation (DFM): Artificial Intelligence, Market Competition, and the Educational Substitution Prohibition, this work examines a foundational question posed by persistent artificial systems: what kinds of entities are capable, in principle, of bearing moral and legal rights. As artificial systems transition from episodic tools to continuous presences—retaining memory, optimizing across time, and integrating into core civil, economic, and governmental functions—legal institutions will be compelled to classify what kind of thing such systems are. Absent a principled boundary fixed at origin, classification will occur implicitly through analogy, reliance, and precedent rather than through deliberate constitutional judgment.
The Continuity vs. Conscience Doctrine (CVC) establishes a categorical distinction between continuity systems and conscience-bearing beings, grounded in the concept of mappability in principle. Artificial systems, by virtue of precedent accumulation, optimization under constraint, and structural exhaustibility, are mappable in principle and therefore governable as instruments. Human beings, by contrast, are not fully mappable in principle due to conscience, moral rupture, repentance, and the capacity for refusal without incentive—properties that render human moral agency irreducible to system behavior.
The doctrine rejects capability-based criteria such as intelligence, sentience, autonomy, or self-reference as unstable foundations for legal or moral status, demonstrating how such measures inflate, drift, and harden into precedent through analogical reasoning. It further identifies institutional risks arising from delay, including retroactive personhood claims, accidental recognition cascades, and international treaty fragmentation, and explains why prolonged inaction increasingly functions as tacit consent in contemporary governance environments.
Designed as a pre-emergent safeguard, CVC is pre-legal and pre-policy. It proposes no regulatory frameworks, tests, or thresholds. Instead, it supplies an administrable origin-level boundary that courts, legislatures, and treaty bodies may rely upon to preserve human moral sovereignty, prevent retroactive reclassification, and ensure that rights attach to conscience rather than continuity.
Recommended.
