Christopher Brett Jaeger (Baylor Law School) & Daniel Levin (Vanderbilt University – Psychology and Human Development) have posted Representing Technological “Minds”: How Anthropomorphic Inferences Influence Legal Judgments and Policy Opinions on SSRN. Here is the abstract:
The increasing presence of technological agents raises challenges for law and policy. How we address these challenges depends in part on the inferences we draw about the technological agents. People are sometimes quick to attribute human-like qualities to these agents, making global anthropomorphic inferences based on minimal cues. But in other cases, people reason about anthropomorphism more deliberately, relying on a more selective, multi-dimensional approach that invokes a range of tacit and explicit inferences. We present three studies documenting how this more selective approach influences legal and policy judgments involving technology. Participants evaluated scenarios about accidents involving autonomous machines. Participants more readily attributed to machines specific human-like skills (e.g., using strategies) than broad experience (e.g., having consciousness)—and these different types of attributions had distinct (sometimes opposing) influences on judgments. Further, participants’ internal cognitive conflict consistently predicted their attributions of broad experience, which, in turn, consistently predicted their legal judgments and policy opinions.
