AI Governance, Market Integrity, and Regulated Industries
Writing & Legal Scholarship
Law Review Articles
The Intent Gap: Artificial Intelligence, Prediction Markets, and the Collapse of Federal Manipulation Doctrine
Stevie Michelle Cline, The Intent Gap: Artificial Intelligence, Prediction Markets, and the Collapse of Federal Manipulation Doctrine, U. Ill. J.L. Tech. & Pol’y (forthcoming 2026).
Prediction markets are now public infrastructure. Their prices are quoted in political analysis, cited by institutional investors, and used by policymakers as real-time probability signals for elections, economic events, and geopolitical crises. Documented wash trading exceeds twenty-five percent of volume on major platforms. AI trading systems — reinforcement learning agents, LLM-powered probability models, autonomous arbitrage bots — are proliferating in these markets. And the federal enforcement framework designed to protect market integrity cannot reach them, because every manipulation provision under the Commodity Exchange Act and the federal securities laws requires proof of human intent that AI systems do not possess.
The Article develops the first systematic taxonomy of AI manipulation risks specific to prediction markets — coordinated trading without communication, information asymmetry exploitation, and emergent manipulative behavior — and demonstrates that each defeats every existing doctrine at the intent element. It proposes upstream scienter: a liability framework that relocates the legally relevant intent inquiry from the moment of manipulation to the design and deployment decisions that made manipulation foreseeable. The framework includes a three-part liability test, a tiered safeguard structure calibrated to AI system risk, a compliance safe harbor, and a complete model rule suitable for CFTC adoption. It operates coherently across the SEC/CFTC jurisdictional divide. The analysis identifies a problem that extends far beyond financial regulation: every intent-based legal doctrine — criminal mens rea, antitrust conspiracy, tort negligence — faces the same structural failure when AI agents replace human actors. Prediction market manipulation is the leading edge.
The Third Party in the Machine: Attorney-Client Privilege, Work Product Doctrine, and the AI Disclosure Problem After United States v. Heppner
Stevie Michelle Cline, The Third Party in the Machine: Attorney-Client Privilege, Work Product Doctrine, and the AI Disclosure Problem After United States v. Heppner, Int’l J. L. Ethics Tech. (forthcoming 2026).
On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York ruled from the bench in United States v. Heppner that thirty-one documents a criminal defendant generated using Anthropic’s commercial AI tool Claude were protected by neither the attorney-client privilege nor the work product doctrine. The Article identifies three critical deficiencies in the court’s reasoning: first, that the work product holding misreads Rule 26(b)(3) by importing a “direction of counsel” requirement the Rule does not contain; second, that treating a platform’s privacy policy as dispositive of confidentiality is incompatible with the reasonable-expectations analysis governing analogous questions in Fourth Amendment and privilege law; and third, that the ruling creates a two-tier system of privilege protection that tracks wealth rather than legal merit.
The Article proposes a multi-factor reasonable-expectations test for AI-assisted privilege claims, drawing on the framework articulated in Carpenter v. United States, and a normative framework for building AI tools that preserve privilege by design. The analysis engages the interlocutor problem — the fact that generative AI, unlike every prior communication technology, does not merely transmit or store user input but receives, processes, and responds to it substantively — and argues that this distinguishing feature requires a more nuanced analytical framework than the binary yes-or-no approach that existing doctrine provides. The Article situates Heppner against comparative authority from the United Kingdom, Canada, and Australia, and against the ABA’s 2024 Formal Opinion 512 on generative AI.
Working Papers
Policy Commentary
Comment on Advance Notice of Proposed Rulemaking, Prediction Markets (CFTC)
Letter from Stevie Michelle Cline to Christopher Kirkpatrick, Sec’y of the Comm’n, Commodity Futures Trading Comm’n, re: Advance Notice of Proposed Rulemaking, Prediction Markets (CFTC RIN 3038-AF65) (Mar. 16, 2026).
Filed on the day the ANPRM was published in the Federal Register. The comment argues that the manipulation framework contemplated by the ANPRM inherits the scienter problem developed in The Intent Gap: federal anti-manipulation doctrine, built on the presumption of a human actor’s intent, cannot police prediction markets in which algorithmic trading systems, autonomous agents, and AI-driven participants generate manipulation-like effects without any legally cognizable mental state. The comment proposes an upstream framework grounded in the Commission’s Core Principles authority — position limits, surveillance requirements, abusive trade practices rules, and contract-resolution standards — that constrains the conditions under which manipulation is possible, rather than developing new event-contract-specific intent tests that will share the same doctrinal fate as their predecessors.
Comment on Concept Paper, Accelerating the Adoption of Software and AI Agent Identity and Authorization (NIST NCCoE)
Letter from Stevie Michelle Cline to Nat’l Cybersecurity Ctr. of Excellence, Nat’l Inst. of Standards & Tech., Comment on Concept Paper, Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization (Apr. 6, 2026).
Filed in response to the NCCoE’s February 2026 concept paper on AI agent identity and authorization. The comment argues that the concept paper addresses only one half of the problem: technical identity and authorization standards tell us who an agent is and what it is permitted to do; they do not tell us who is legally responsible when an agent acts in ways that cause harm, exceed the scope of its delegated authority, or produce outcomes that no human principal specifically authorized. The comment is organized around five areas: (I) the liability attribution gap in current frameworks; (II) the delegation problem and the limits of “on behalf of” models; (III) the foundation model liability chain; (IV) sector-specific considerations in healthcare; and (V) six recommendations for the NCCoE project: expand project scope to address liability attribution, develop guidance distinguishing technical authorization scope from legal scope of authority, record foundation model provenance in agent identity metadata, address multi-agent delegation chains, include healthcare as a priority use case, and engage legal and regulatory expertise in project design.
Publications
Who Supervises the AI Agent? New York’s Ethics Rules Were Built for Human Assistants. The Agentic Era Demands Something New.
Stevie Michelle Cline, Who Supervises the AI Agent? New York’s Ethics Rules Were Built for Human Assistants. The Agentic Era Demands Something New., N.Y. St. B.J. (forthcoming Fall 2026).
Targeted to the Fall 2026 Ethics issue, the Article argues that New York’s supervisory framework under Rules 5.1 and 5.3 — designed to supervise human lawyers and nonlawyer assistants acting under instruction — is structurally incomplete for AI agents that plan, execute, and adjust multi-step legal workflows autonomously, without human input at each stage. The Article identifies three specific gaps where the existing framework fails to map onto the technology: the opacity problem (an AI agent’s reasoning is not observable to the supervising lawyer in the way a human associate’s work is), the scope problem (an agent’s effective authority is a function of its technical configuration, not its legal permissions), and the attribution problem (no existing doctrine cleanly allocates responsibility between the lawyer, the firm, and the foundation model developer). Drawing on agency law principles — particularly the Restatement (Third) of Agency’s distinction between actual and apparent authority — the Article proposes concrete steps for practitioners and for the NYSBA to close the gaps before the next wave of enforcement makes the issue a crisis.
Your AI Agent Is Not Your Employee: Five Questions Every General Counsel Needs to Answer Before Deploying Autonomous AI
Most legal departments built their AI governance frameworks in 2024 and 2025 for generative AI tools that respond to prompts. Agentic AI is categorically different, and the governance playbooks have not caught up. Drawing on four prior General Counsel appointments and a current frontier-lab vantage point, the Article presents five practical questions every in-house lawyer must answer before signing off on an agentic AI deployment: what the agent can do and, more importantly, what it cannot; who is liable when it goes wrong; what a vendor contract actually needs to say (model version control, audit rights, data retention, indemnification, termination); where to draw the authority line between technical scope and legal scope; and how to report AI risk to the board in a way that constitutes governance rather than decoration. The Article is written GC-to-GC, with no footnotes, and is designed to give readers a checklist they can hand to their procurement team the next morning.
Citations & Drafts
Pre-publication drafts available on request. Finalized work mirrored on SSRN.
For permission to cite working papers, write to hello@steviecline.com.