William C. Houze has posted "Agency, Tools, and Authorship in the Age of Large Language Models: A Demonstration and Analysis" on SSRN. Here is the abstract:
This paper examines the nature of human authorship in an era of sophisticated large language model systems. Through a designed experiment in AI-assisted composition, it demonstrates that LLMs function as tools extending human cognitive reach rather than as autonomous creators capable of independent authorship. The study presents twelve mini-essays generated through human-LLM collaboration across diverse domains — law, physics, literary criticism, cultural anthropology, historical linguistics, structural engineering, fine arts, urban planning, cellular biology, cosmology, and philosophy — to illustrate that authorship resides in the human mind that conceives, directs, constrains, evaluates, and integrates expressive output. Drawing on U.S. copyright doctrine, including landmark Supreme Court and appellate decisions establishing the human authorship requirement, as well as U.S. Copyright Office guidance on AI-generated works, the paper contends that legal authorship requires human creative control and original intellectual conception. Philosophically, it argues that authorship is grounded in intentionality, meaning-making, and responsibility — capacities that LLMs simulate but do not possess. Engaging counterarguments drawn from extended-cognition theory, emergent-intentionality claims, edge cases of thin human control, and the case for AI as co-author, the paper proposes the Houze/Maestro Model, a portable authorship protocol for AI-era scholarship that defines roles, disclosure elements, acceptable practices, an epistemic checklist for human authors, and implementation guidance for institutions. The paper's methodology itself enacts the model it proposes: initial essays were generated by Microsoft Copilot, the analytical framework was developed through iterative dialogue with Claude Opus 4.5, and successive drafts were submitted for LLM peer review by multiple systems, with all revisions accepted or rejected by the human author.
Handwritten pre-interaction notes are presented as Feistian artifacts — documentary evidence of human-originated conceptual architecture preceding any LLM engagement. The paper concludes that academia’s reluctance to embrace LLMs as legitimate scholarly tools reflects, in significant part, cultural lag rather than conceptual rigor, while acknowledging that measured institutional caution serves legitimate pedagogical and integrity interests.
