About “Virtual Intelligence”
Artificial intelligence is moving faster than the language we use to describe it. The terms we reach for — “intelligent,” “conscious,” “creative,” “afraid” — were built for minds, and we are applying them to something that is not a mind. That category error has consequences. It shapes how we assign blame when systems cause harm, how we regulate industries that deploy them, and how we understand our own responses to machines that have learned to speak to us as though they know us.
This publication is built around a single idea: that the systems we call AI occupy a meaningful middle ground between the task-bound tools of the past and the genuinely minded machines of science fiction. I call this middle ground “Virtual Intelligence.” A virtual intelligence produces outputs that are statistically indistinguishable from those of a genuinely intelligent agent — but it has no agency, no intentions, and no stake in what happens next. The appearance of understanding arises in the exchange between human and machine, not inside the machine itself. That distinction is not merely academic. It determines where accountability lies, and accountability is what this series is about.
Each essay takes a different angle on that core argument — psychological harm, legal liability, workplace consent, humanoid design, and the deliberate exploitation of systems that cannot push back. The goal is to give readers the conceptual tools to think clearly about technology that has been deliberately designed to resist clear thinking.
About the Author
Christopher Horrocks is a technologist at the University of Pennsylvania, where he has spent his career at the intersection of institutions, information systems, and the human consequences of both. He came to this subject through the institutions that deploy these systems, not the industry that builds them.
The Virtual Intelligence series grows out of a conviction that the public conversation about AI suffers less from a shortage of information than from a shortage of useful concepts. “Artificial intelligence” is too broad. “Weak AI” and “Strong AI” bracket a space that most current systems actually occupy. “Virtual Intelligence” is an attempt to name that space precisely — and to follow the naming to its practical implications.
He writes here as an independent voice, with no financial relationship to any AI company and no institutional position to protect.
The opinions expressed here are the author’s and do not reflect any official or unofficial institutional position of the University of Pennsylvania.
The author holds no financial interest in, and receives no compensation from, any AI or technology firm.