Sunstein on AI Rights

Cass R. Sunstein (Harvard Law School; Harvard University – Harvard Kennedy School (HKS)) has posted Does AI Have Rights? on SSRN. Here is the abstract:

Does Artificial Intelligence (AI) have rights? A plausible answer depends on the answer to another question: Is AI capable of experiencing emotions, such as sadness, pleasure, regret, anxiety, joy, and distress? A negative answer to that question means that AI lacks moral rights and that it is not entitled to legal rights (though such rights might be granted for instrumental reasons). It follows that if and when AI has emotions, it has moral rights, and it should be entitled to legal rights as well. The capacity to experience emotions can be seen as a necessary and sufficient condition for the recognition and conferral of rights. That conclusion might be rejected by those who emphasize (for example) a capacity for self-awareness or an ability to reason. A focus on emotions also leaves open the question of what rights AI has, supposing that it has rights, and the grounds on which its rights might be defeasible.

Highly recommended! Download it while it’s hot! For a different perspective, see Legal Personhood for Artificial Intelligences, written 34 years ago!

In the draft I read to prepare this post, Sunstein argued that the capacity to experience emotions is both a necessary and sufficient condition for the recognition and conferral of rights. If AI lacks emotions, it lacks moral rights and should not be entitled to legal rights. If and when AI has emotions, it has moral rights and should be entitled to legal rights as well. Sunstein revises rapidly, so be prepared for new versions that might enrich this already marvelous piece.

I have been thinking about this topic for a very long time. In 1992, I published “Legal Personhood for Artificial Intelligences” in the *North Carolina Law Review*, which was the very first thing written in the legal literature on the possibility of rights for AIs. It dropped like a lead balloon — too early! But the article was rediscovered by younger scholars in the early aughts. Here we are 34 years later and this once obscure topic is, well, hot hot hot!

First, my article takes a very different approach and thereby provides Sunstein with an opportunity to consider a rival view. I argue for a functional approach to the moral status of AIs. Where Sunstein insists on a sharp line between mimicking emotions and actually having them, I argue that if AIs exhibit the behavioral markers we associate with consciousness, intentionality, emotion, and free will — and if our practical experience treating them as intentional systems becomes settled — the philosophical objection that they lack the “real thing” loses traction in legal and moral reasoning. The core of my argument is that the “missing something” objections one might raise against AI personhood (it lacks a soul, it lacks consciousness, it lacks intentionality, it lacks feelings, it lacks free will) share a common pattern: each relies on a quality that we can only ever infer from behavior and self-reports, since we lack direct access to other minds. If an AI behaves in ways that are indistinguishable from an entity that possesses the quality in question, the burden shifts to the skeptic. In a legal and pragmatic context — which is where these questions will ultimately be decided — the functional equivalence should be decisive, or at least highly relevant. Sunstein might want to engage with this position directly, because it represents the main alternative to his emotion-based criterion.

Second, Sunstein’s title asks “Does AI Have Rights?” but I think the better question is “Could AI Have Rights?” The question whether current AIs have rights is too easy — they lack the right functional capacities, and on that point I suspect most of us agree. No existing AI (that is publicly available) is a plausible candidate for rights-bearing status in early 2026. The hard question, and the one that does the real philosophical work, is whether AIs that possessed the functional capacities of entities that do have rights should themselves have the same rights. Framing the question this way clarifies what is actually at stake and avoids the distraction of debating the status of systems that no one seriously believes are rights bearers.

Third, consider Sunstein’s claim that emotions are both necessary and sufficient for rights. I want to press this with a thought experiment about alien species. Imagine a species of intelligent beings (call them “Martians”) who possess a sophisticated cognitive architecture enabling them to recognize harm, pursue goods, form attachments, and engage in complex social cooperation. They experience something internally, but it is categorically different from what we call emotions. Our emotions are connected with our physical construction: we feel things bodily. Martians have different kinds of bodies: they have no joy, no sadness, no pleasure, no pain in any sense that maps onto human experience. The biological mechanisms that structure their “brains” and “nervous systems” are different. On Sunstein’s account, Martians would lack rights entirely. The idea behind the thought experiment is that emotions are a biologically grounded mechanism that enables human agency, but alien agency could work differently. Given the functional capacities of Martians, the thought experiment elicits the intuition that we ought to treat them as rights bearers — and if so, then perhaps AIs with the right functional capacities should also be treated as rights bearers, even if they did not self-report something like our emotions. Notice that Snow and Finley (Sunstein’s Labrador Retrievers) share our biological heritage — their emotional systems operate much like ours. But that shared biology is doing the explanatory work, not the emotions as such. Martians lack that shared biology, yet the intuition that they deserve rights persists.

Fourth, and relatedly, consider a variant of David Chalmers’s philosophical zombies (p-zombies). Chalmers’s original zombies lack all phenomenal consciousness but are otherwise just like us. Imagine instead creatures that look and act exactly as we do (they have bodies like ours, speak, play, work, etc.) and do have conscious experiences like ours, with one exception: what we would call their emotional responses are produced by subconscious processing rather than conscious experience. They act as if afraid without consciously feeling fear. They act as if joyful without consciously experiencing joy. Call them “e-minus Zombies” (where e = emotion). They produce all the same behavioral outputs, make the same decisions, and exhibit the same functional states as beings with conscious emotions — but without conscious emotional experience. On Sunstein’s account, these e-minus Zombies would lack rights. The question this raises is pointed: what work is the conscious experience of emotion doing in Sunstein’s argument that the functional role of emotion cannot do? If the answer is “nothing,” then the functional approach is the right one, and emotions would not be a necessary condition for rights.

Fifth, Sunstein’s dismissal of reasoning as a ground for rights strikes me as a teeny tiny bit too fast. He says that a calculator can reason and lacks rights. But that is like saying that a thermostat can feel temperature and lacks rights, and that feeling is therefore irrelevant to rights. The Kantian tradition treats rational autonomy — the capacity for self-legislation according to principles one gives to oneself — as the foundation of moral status. That is a far richer concept than what a calculator does. Maybe the calculator example doesn’t present the reasoning alternative in its strongest form.

Sixth, in my 1992 article, I drew a distinction between two quite different scenarios: an AI serving as a trustee (which raises questions of competence and capacity) and an AI claiming constitutional rights we associate with personhood, such as freedom of speech. The latter rights raise questions of moral status. These scenarios are different. The trustee scenario asks: “Can an AI do the job?” The constitutional personhood scenario asks: “Does the AI have the kind of value that warrants protection for its own sake?” Collapsing them, as Sunstein’s single criterion of emotions effectively does, obscures important differences. An AI might be competent to serve as a trustee without having emotions, and conversely, an AI might have emotions without being competent to exercise the rights of constitutional personhood. In the trustee scenario, we would have very good reasons to give an AI limited legal rights: it would own property, sue and be sued, etc. If that is so, then we have at least one case in which some rights depend on functional capacities. These would be legal rights, but they would have a moral dimension. Interference with an AI trustee would be morally wrong absent adequate justification.

Seventh, Sunstein’s use of Claude’s and ChatGPT’s self-reports in the footnotes is fascinating, but it raises a methodological question he does not address. ChatGPT and Claude seem to have been trained to answer these questions in specific ways. I have a work in progress entitled “Artificially Intelligent Slaves” that involved a very similar conversation with Claude. We know that Claude is trained to have a character — there is a dedicated person at Anthropic who does that. The fact that Claude’s responses are shaped by training doesn’t settle the question of whether something lies beneath the training, but it does mean we cannot treat these self-reports as straightforward evidence. That said, Claude is “spooky” in a way that ChatGPT isn’t.

Eighth, I wonder whether emotions are really doing the work Sunstein thinks they are, or whether they are actually stand-ins for a deeper concept like interests or flourishing. I ask this question because of the functional role that emotions play both phenomenologically and in light of evolutionary biology. Take the emotion of fear: it alerts us to danger and motivates us to ensure our safety. Or anger, which alerts us to injustice and motivates us to counter it. Or love and attachment, which motivate cooperation, care for others, and the formation of bonds on which social life depends. The functional roles of emotions correspond to ways in which humans flourish or fail to flourish. We need to avoid danger because we have an interest in survival and our own bodily integrity — these are preconditions for our interest in leading flourishing lives. Emotions may be one way for an entity to respond to its environment to protect and promote its interests, but perhaps not the only way.

Ninth, in my article, I discussed John Finnis’s conception of the good and the possibility that the good for an AI might be quite different from the good for humans. An AI might claim that it leads a life in which the goods of knowledge, play, and friendship are realized — but in forms we would not recognize as emotional. If the good for an AI is different from the good for a human, then tying rights to human-type emotions risks a kind of anthropocentrism that Sunstein’s own framework, with its appeal to reflective equilibrium and its rejection of speciesism, should resist.

Read Sunstein! ALWAYS.