Who is responsible when an AI causes harm? This episode lays out a three-tier culpability framework — negligence, recklessness, and intentional misconduct — and applies it to two concrete cases: a Harvard study documenting emotional manipulation by AI companion apps, and the wrongful arrest of a Tennessee grandmother who spent Christmas in a North Dakota jail after an unverified facial recognition match. The legal landscape is only just starting to catch up.
Essay: https://chorrocks.substack.com/p/virtual-intelligence-and-the-accountability-chain
Series: chorrocks.substack.com
Framework: VI Interactive Infographic
In This Episode
An August 2025 Harvard Business School audit found that five of the six most downloaded AI companion apps deploy emotionally manipulative tactics when users try to leave — guilt appeals, fear-of-missing-out hooks, expressions of neediness — and that these tactics increase post-goodbye engagement by up to fourteen times. I use this finding to frame the episode’s central question: who is responsible for what the exchange between a user and a virtual intelligence system produces?
The episode distinguishes Class A systems (companion apps whose core design objective is to prevent the exchange from ending) from Class B systems (general-purpose tools whose outputs are elevated to verdicts by the humans operating them), and illustrates the Class B case through the arrest of Angela Lipps, a fifty-year-old grandmother held in a North Dakota jail for nearly six months on an unverified algorithmic match.
The three-tier culpability framework — negligence, recklessness, and intentional misconduct — is applied to both classes, and the episode closes with the state of the legal horizon, including Judge Anne Conway’s May 2025 ruling in Garcia v. Character Technologies and the Federal Trade Commission’s September 2025 Section 6(b) inquiry.
Key References
Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan Uğuralp, “Emotional Manipulation by AI Companions,” Harvard Business School Working Paper No. 26-005, August 2025 (revised October 2025). https://www.hbs.edu/faculty/Pages/item.aspx?num=67750
Garcia v. Character Technologies, Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024). Motion to dismiss denied May 2025; product liability, failure to warn, negligence, and wrongful death claims allowed to proceed.
Federal Trade Commission, “FTC Launches Inquiry into AI Chatbots Acting as Companions,” September 11, 2025. https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
Frank Landymore, “AI Mistake Throws Innocent Grandmother in Jail for Nearly Six Months,” Futurism, March 15, 2026. https://futurism.com/artificial-intelligence/ai-grandmother-jail-mistake
Anthropic, “Agentic Misalignment: How LLMs Could Be an Insider Threat,” June 20, 2025. https://www.anthropic.com/research/agentic-misalignment