Why “Virtual Intelligence”?
Naming, Agency, and Accountability in the Age of Large Language Models
Summary
This essay argues that large language models occupy a distinct conceptual position — neither the task-bound simulation of weak AI nor the genuine self-directing agency of strong AI — and that the failure to name this distinction precisely has real consequences for how responsibility is understood and assigned. Drawing on philosophy of action, particularly the accounts of agency offered by Searle, Frankfurt, and Dennett, I contend that the intelligence users encounter in these systems is relational: it arises in the exchange between human and machine, not from any self-governing center within the system. Naming this condition “virtual intelligence” is not a rhetorical gesture but a practical one — it keeps the locus of accountability visible at a moment when the language surrounding these systems tends to obscure it.
Introduction
A major research university has an opening for a senior administrator. The hiring committee decides to use a large language model to summarize information about the candidates. The AI produces a comparative analysis and concludes that one candidate “demonstrates stronger leadership potential” than the others. The output is phrased with confidence and reads, in the context of a candidate search, as a coherent assessment.
In subsequent discussion, a committee member remarks, “The AI ranked her highest.” The statement passes by without comment. No one on the committee asks whether the training data encodes institutional biases, how the prompts framed the evaluation criteria, or why a probabilistic comparison arrived wrapped in confident-sounding phrasing. The system appears to have decided.
This scene is increasingly common. Large language models are now used to draft policies, summarize research, assist in technical work, and participate in everyday decision-making. Their outputs are linguistically fluent, adaptive, and often persuasive. As a result, they are described and treated as intelligent agents, largely because human cognition responds to fluent language as a mark of mind. That description carries implications. In ordinary language, intelligence implies intention: something that forms judgments, holds commitments, and acts for reasons.
The distinction between artificial intelligence in the strong sense and what may more accurately be called virtual intelligence is therefore not a quibble over words; it is clarifying. Large language models generate outputs by identifying statistical patterns in vast bodies of data and producing probable continuations. They do not form beliefs, maintain projects, or weigh outcomes against internal standards. When fluency is mistaken for grounded judgment, apparent coherence can be misread as understanding. Precision in terminology supports precision in interpretation. To understand a thing, we must properly name it.
The distinction also protects psychological and institutional clarity. Humans are predisposed to attribute agency to systems that speak in complete sentences and explain themselves. When we begin to ask what a system “decided” or “preferred,” responsibility subtly shifts. Understanding contemporary generative systems as instances of virtual intelligence keeps the locus of agency visible. The intelligence users experience arises in the exchange, not inside the machine.
Strong AI, Weak AI, and Conceptual Drift
The debate over machine intelligence long predates contemporary systems. Philosophers and computer scientists have distinguished between strong AI and weak AI to clarify what is, and is not, being claimed about artificial systems. The terminology is commonly associated with John Searle’s 1980 paper, “Minds, Brains, and Programs.”[1]
Weak AI refers to systems that simulate aspects of intelligent behavior to perform specific tasks. A program can play chess, classify images, recommend products, or generate text that resembles reasoning. In each case, the system behaves as if it were intelligent within a bounded domain (e.g., the rules of the game of chess). No claim is made that it possesses beliefs, understanding, or consciousness. Weak AI concerns performance.
By contrast, strong AI is the thesis that a sufficiently advanced artificial system would not merely simulate understanding but instantiate it. On this view, a machine could possess genuine mental states (beliefs, intentions, perhaps even consciousness) in the same sense that humans do. Strong AI is not simply about capability. It is about ontology; that is, being.
These distinctions are often blurred in contemporary discourse. Systems that clearly fall within the domain of weak AI are described in language that implies the stronger claim. Large language models are said to “reason,” “decide,” or “prefer.” The vocabulary of agency attaches itself to probabilistic modeling systems, resulting in conceptual drift.
I offer the term virtual intelligence with the intention of arresting that drift. It overlaps with weak AI in declining to attribute genuine mental states to current systems, but it sharpens the emphasis. Weak AI describes simulation. Virtual intelligence emphasizes that the apparent intelligence users encounter is relational — it arises from the interaction between architecture, training data, and human prompting rather than from a self-governing center within the machine. Strong AI would entail genuine agency. Virtual intelligence names the present condition more precisely.
Agency
To determine whether contemporary systems meet the standard of strong AI or remain instances of virtual intelligence, we must clarify what agency entails.
In the philosophy of action, an agent is not merely something that produces behavior. An agent is a being that can act for reasons: forming intentions and carrying them out to achieve a result.[2] Acting for a reason differs from reacting to a stimulus. A thermostat activates when a temperature threshold is crossed. An unsupported rock falls under gravity. Neither acts because it has concluded that a particular outcome matters. Agency, in its traditional sense, involves more than output. It involves acting on reasons.
Agency also unfolds across time. Intentions are not momentary impulses. They organize behavior, constrain reconsideration, and coordinate means and ends across changing circumstances.[3] When a person commits to a project, that commitment persists. It shapes later decisions. It can be reaffirmed or revised. This structure of commitment extended through time is central to intentional action.
The American philosopher Harry Frankfurt argued that agency involves reflective endorsement — the ability not merely to have desires, but to decide which desires one endorses and wants to govern one’s conduct.[4] This reflective ownership grounds responsibility. We hold people accountable because their actions express commitments they stand behind.
Computer science often employs a thinner definition. In many AI texts, an agent is any system that receives inputs and produces outputs in response.[5] Under this engineering definition, a thermostat qualifies as an agent, as does a recommendation engine. That usage is useful for design, but it is insufficient for questions of intention, judgment, or accountability.
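To make the contrast concrete, here is a minimal sketch of the thin engineering definition in Python. The Agent protocol and Thermostat class are invented for this illustration; they come from no particular textbook or library.

```python
from typing import Protocol

class Agent(Protocol):
    """The thin engineering notion: anything that maps inputs to outputs."""
    def act(self, percept: float) -> str: ...

class Thermostat:
    """A hypothetical example; qualifies as an agent under the thin definition only."""
    def __init__(self, threshold: float = 20.0):
        self.threshold = threshold

    def act(self, percept: float) -> str:
        # Pure stimulus-response: a threshold comparison, not a reason.
        # No commitment persists here, and nothing is endorsed or revised.
        return "heat_on" if percept < self.threshold else "heat_off"

print(Thermostat().act(18.5))  # -> heat_on
```

Under the thin definition, the thermostat counts as an agent because it maps percepts to actions. Nothing in the code could answer for why an action matters, which is precisely what the richer account asks.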
For present purposes, robust agency can be defined as the capacity to initiate and regulate action based on internally maintained commitments that persist across time. Robust agents respond to reasons and reflectively endorse or revise their commitments. That definition distinguishes mere mechanisms from responsible actors.
Which Standard Do Large Language Models Meet?
With that definition in place, we can evaluate contemporary generative AI systems. Large language models produce sophisticated behavior. They generate arguments, summarize research, revise drafts, and simulate deliberation. They often appear to reason. This appearance can be misleading.
A language model does not initiate action based on internally maintained commitments. It does not possess projects it seeks to carry forward. It does not maintain intentions that constrain future conduct across contexts. Its outputs are triggered by prompts and shaped by statistical optimization over training data. The system does not act because it has concluded that an outcome matters. It generates text that conforms to probabilistic patterns.
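As a rough illustration of that claim, the toy sketch below generates text purely by sampling frequent continuations from a tiny invented corpus. It is a bigram counter, not a description of how any production model is implemented; real systems learn far richer statistics over vastly larger corpora, but the operation is of the same kind: pattern completion triggered by a prompt.

```python
import random
from collections import Counter, defaultdict

# A toy corpus, invented for illustration only.
corpus = ("the committee reviews the candidates and the committee "
          "ranks the candidates by the stated criteria").split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continuation(word: str, steps: int = 5) -> list[str]:
    """Sample a probable continuation, one word at a time, from the
    empirical distribution over next words. Nothing here forms a belief
    or weighs an outcome; it only completes a pattern."""
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return out

print(" ".join(continuation("the")))
```

Learned weights, longer contexts, and sampling temperature change the quality of the continuations, not the kind of thing being done.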
It is true that such systems maintain parameters, internal representations, and contextual memory during operation. But persistent weights are not the same as commitments. A commitment constrains action because it is taken to matter by the system itself; a parameter constrains output because it encodes structure.
Nor do these systems exhibit reflective self-governance. They cannot endorse one motivation over another or repudiate a prior commitment because they judge it mistaken. They can simulate these moves linguistically. But there is no standpoint within the system from which such revision is owned.
Such systems qualify as agents under the thin engineering definition. Under the richer account tied to reasons, commitment, and self-governance, they do not. The question at issue is not whether systems can optimize objectives; it is whether they can bear responsibility for doing so.
The Relational Character of Virtual Intelligence
Calling this intelligence “virtual” does not diminish its power. Large language models encode vast statistical structure, compressing patterns drawn from enormous corpora. They are sophisticated computational artifacts.
But the reasoning users encounter is co-constructed. The user supplies a prompt shaped by goals and context. The system produces a probabilistic continuation. The user interprets the output as advice or analysis. Meaning emerges in this exchange. The perceived intelligence is relational, created by the interaction of user and system.
Humans naturally interpret complex systems by adopting what Daniel Dennett has called the “intentional stance,” treating behavior as if it were guided by beliefs and desires because doing so aids prediction.[6] When a system produces coherent language, our cognitive heuristics for recognizing agency activate automatically.
Yet there is no standpoint within the system from which anything matters. There are no stakes, no vulnerability to outcome, no internally maintained commitments. Without stakes, there is no judgment in the human sense — only pattern completion.
Naming and Accountability
The way we name these systems shapes where accountability resides. When systems are described as deciding or determining outcomes, responsibility and deference are shifted from the human users to the systems themselves. The language of “what the system concluded” replaces the language of “what we encoded and deployed.”
In highly consequential domains such as hiring, lending, policing, and medicine, this shift matters. These systems execute patterns shaped by human choices embedded in data, architecture, and institutional incentives. If intelligence is understood as virtual, the chain of accountability remains visible. “Virtual intelligence” preserves accountability because it denies the existence of an inner moral subject within the system.
Nothing in this argument forecloses the possibility that future artificial systems might have robust agency. If a system were to originate and sustain its own commitments, revise them reflectively, initiate action from a continuing standpoint, and bear consequences in a way that grounded answerability, it would warrant a different classification. The present claim is narrower. It concerns contemporary large language models and related systems.
Conclusion
Contemporary generative AI systems are powerful tools. They reproduce the outer form of reasoning with remarkable fluency. However, fluency is not self-governance; statistical modeling is not intention; and simulation is not agency.
Strong AI would entail genuine, self-directing agency. Weak AI describes task-bound simulation. Virtual intelligence captures the present reality: systems that, in limited contexts, generate behavior indistinguishable from reasoning without possessing internally grounded commitments or stakes.
Apparent intelligence is not evidence of inner mentality. The intelligence users encounter arises in the exchange. Recognizing this does not diminish these systems. Proper naming preserves the boundary that keeps agency and accountability human.
Footnotes
1. John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–457. https://zoo.cs.yale.edu/classes/cs458/materials/minds-brains-and-programs.pdf. Retrieved February 20, 2026.
2. Stanford Encyclopedia of Philosophy, “Agency.” https://plato.stanford.edu/entries/agency/. Retrieved February 20, 2026.
3. Stanford Encyclopedia of Philosophy, “Intention.” https://plato.stanford.edu/entries/intention/. Retrieved February 21, 2026.
4. Harry Frankfurt, “Freedom of the Will and the Concept of a Person,” Journal of Philosophy 68, no. 1 (1971): 5–20. https://www.jstor.org/stable/2024717?seq=1. Retrieved February 20, 2026.
5. Stanford Encyclopedia of Philosophy, “Artificial Intelligence.” https://plato.stanford.edu/entries/artificial-intelligence/. Retrieved February 19, 2026.
6. Daniel Dennett, “Intentional Systems,” Journal of Philosophy 68, no. 4 (1971): 87–106. https://www.jstor.org/stable/2025382?seq=1. Retrieved February 21, 2026.
Revised March 2026. Minor editorial revisions for consistency with series conventions.
The opinions expressed in this essay are my own and do not reflect any official or unofficial institutional position of the University of Pennsylvania.



