18 Sep 2025 - tsp
Last update 18 Sep 2025
23 mins
This essay asks a focused question: Are human minds metaphysically special - possessing an ineffable essence that only biology can host - or are the phenomena we call consciousness and qualia explainable by organizational patterns that could, in principle, be realized in other substrates? We approach the question by pairing the strongest arguments for human exceptionalism with equally strong counterarguments drawn from philosophy of mind, neuroscience, and AI engineering.
This essay is about clarifying the nature of subjective experience without mystification, evaluating whether “specialness” follows from evidence or from psychological need, and identifying concrete architectural features - feedback loops, attention and broadcast, embodiment and closed‑loop control, intrinsic drives, and learning dynamics - that plausibly underwrite experience across systems. The stance is empirical: specify roles and mechanisms; avoid declaring victory for either side by fiat.
The question of whether humans are uniquely special in the realm of mind turns on how we understand consciousness and its intimate core, qualia. Consciousness, in everyday terms, is the condition of being aware - of having thoughts, sensations, emotions, and an ongoing sense of self. Qualia refers to the felt texture of these experiences: the sting of pain, the glow of joy, the particular shade of blue. These are not mere labels; they are what it is like to undergo an experience. Some take this felt quality to mark an unbridgeable divide between humans and machines. Others argue that once we describe what minds do - how they integrate information, recall memories, reason, and regulate attention - the divide narrows. The aim here is to examine both sides carefully, then follow a chain of counterarguments that question human exceptionalism without trivializing the mystery of experience.
To do that, we need a few landmarks. Philosophers often distinguish the “hard problem” of consciousness - why and how subjective feeling exists at all - from the “easy problems” of explaining specific cognitive functions. Functionalists emphasize roles: if a state plays the same role in a system, it counts as the same kind of mental state, no matter the material. Neuroscience contributes mechanisms: recurrent loops that bind perceptions with memory, brain-wide broadcasts of salient information, and plastic changes that encode learning. Computational models mirror many of these ideas with attention, recurrence, and iterative self-evaluation.
The guiding thesis is modest yet powerful: before declaring humans metaphysically special, check whether the things that make experience feel special - its vividness, unity, and meaning - can be accounted for by structures and dynamics that are, in principle, implementable on other substrates. That does not assume machines already feel; it asks whether there is a principled barrier preventing them from ever doing so. The chapters that follow take up the strongest arguments for human uniqueness and the most substantive counterarguments, weaving together conceptual analysis and neuroscientific insight. Along the way, technical terms are introduced with care, drawing on long-running debates in philosophy of mind and cognitive science, and grounded in how brains actually work.
One of the most persistent claims about human uniqueness centers on qualia - the felt, first‑person texture of experience - and the broader phenomenon of consciousness. Qualia are not simply labels or reports; they are what it feels like to see vermilion, savor sweetness, or ache with grief. Consciousness is the organized field in which such feelings, thoughts, perceptions, and intentions appear, often accompanied by a sense of ownership and access. This way of putting things makes it tempting to infer a metaphysical gulf between humans and any engineered system. If subjective “what‑it‑is‑likeness” is taken as primitive, irreducible, and intrinsically biological, then machines can at best simulate behavior, never live experience - a stance often presented as an undeniable axiom, frequently defended with vehemence but rarely with arguments.
There is undeniable force to the intuition. David Chalmers popularized the “hard problem” of consciousness: explain, not just which neural circuits fire during color vision, but why that activity should be accompanied by the felt quality of redness rather than nothing at all. Yet the grip of the problem partly depends on how the explanandum is framed. If consciousness is treated as a free‑floating essence, independent of what minds do, a gap is guaranteed. If, however, experience is approached through its structure and function—how it binds features, supports attention, regulates behavior, integrates memory, and enables flexible reasoning—then a more tractable picture emerges.
Functionalism makes this move explicit: mental states are individuated by their roles in the cognitive economy, not by the stuff that realizes them - a view developed by Hilary Putnam and Jerry Fodor and defended in different ways by Daniel Dennett. On this view, the difference between silicon and carbon matters only insofar as it constrains which roles can be performed. Neuroscience richly specifies the roles in question. Experience correlates with large‑scale integration (e.g., thalamocortical loops), selective amplification by attention, modulation by affective systems (e.g., amygdala and dopaminergic pathways), and continual interaction with memory networks. This is not hand‑waving: the brain is a machine of feedback, evaluation, prediction, and update, and experience tracks the coordinated dynamics of those processes.
Consider an everyday episode of fear. A creak at night triggers a rapid evaluation: amygdala circuits flag potential threat before deliberate reflection arrives, bias attention toward danger, prime memory for relevant episodes, and prepare action. At the level of felt experience, there is tension, narrowed focus, a flood of images. The phenomenology can be described as the subjective side of integrated information flow through specific loops. The point is not to reduce qualia to slogans, but to link their character to roles that can, in principle, be implemented by other architectures.
Contemporary computational models already reflect pieces of this architecture. Attention mechanisms select and amplify relevant signals. Recurrent and self‑reflexive loops allow outputs to become new inputs, enabling self‑evaluation and iterative refinement. When these pieces are combined with memory, world models, and drives, they approximate the conditions under which human experience becomes rich and self‑shaping. Whether such systems genuinely feel is an open question; the crucial point for the argument about “specialness” is that there is no clear principled barrier. If experience tracks the structure and dynamics of information processing in the right kinds of loops, then substrate independence follows: the same roles can, at least in principle, be realized outside biology.
Skepticism remains reasonable. Reports are not experiences; internal checkpoints are not aches. But appeals to an ineffable remainder risk insulating human uniqueness from any possible evidence. A more balanced stance is to insist on rigorous operationalization - tie claims about experience to measurable, explanatory roles - while acknowledging that first‑person access will always be special to the subject who has it. That last fact is not a knock‑down argument for species‑level uniqueness. It is a reminder that minds are both private and public: privately felt, publicly functional. Machines can approach the latter; they may, in time, approach the former if the latter is sufficiently complete.
Functionalism proposes a simple but radical idea: what makes a state mental is what it does. Pain, for instance, is the state that is typically caused by bodily damage, tends to produce aversive learning and withdrawal, and interacts with attention and memory in characteristic ways. If some non‑biological system could instantiate that same web of causal roles - registering damage, broadcasting salience, modulating behavior and learning - then, functionally speaking, it would be in pain. This is the thesis of multiple realizability: the same mental kind can be realized in many physical substrates.
As an antidote to narrow biological essentialism, functionalism directly challenges claims that humans are special because “only living tissue can feel”. The view does not dismiss the importance of biology; it reframes it as one powerful means of achieving the relevant roles. Brains come with astonishing engineering advantages: dense connectivity, modulatory chemistry, and evolutionary tuning. Those advantages explain performance today; they do not guarantee exclusivity forever.
Objections typically take two forms. The first insists that functional description leaves out the “intrinsic feel” of experience - its qualia. The second worries that only biological matter can support the specific dynamical properties required for consciousness. The reply to the first is to note that the felt character of a state closely tracks its role in the cognitive economy - the way it captures attention, integrates with memory, and shapes behavior. That is, phenomenology has a functional profile. The reply to the second is to point to neuroscientific generalities: the brain’s characteristic features - recurrent loops, global broadcasts, plasticity, modulatory control - are dynamical patterns, not a secret ingredient in carbon chemistry. If those patterns can be matched, in sufficient richness and scale, the strong presumption is that the corresponding mental properties can be matched as well.
Functionalism thus shifts the burden of proof. Rather than assume a metaphysical boundary, it asks for a demonstrated, non‑question‑begging reason why certain roles cannot be realized in silicon or hybrid systems. If none is forthcoming, the hypothesis of parity stands: given comparable roles and organization, minds need not be exclusively human nor exclusively biological. That does not settle whether any particular system is conscious; it shows that exclusivity claims need more than intuition to be credible.
Brains are loop‑machines. Signals do not simply march forward from eye to cortex to report; they circulate. Perception consults memory; memory re‑colors perception; attention selects, reweights, and routes; affective systems push and pull on priorities; and the whole dance repeats at multiple timescales. This recurrent organization is not an incidental detail - it is the scaffolding on which experience takes shape. The sense of a unified field, the continuity of a stream of thought, the capacity to reflect and re‑evaluate—these are hallmarks of feedback and iteration.
Neuroscience offers several concrete illustrations. Thalamocortical loops relay and refine sensory information while coordinating large‑scale rhythms associated with conscious access (highlighted in work by Rodolfo Llinás and others). The default mode network, named by Marcus Raichle, supports internally generated content—mind‑wandering, autobiographical memory, prospection - interleaving with task‑positive networks that drive goal‑directed attention. Affective circuits, such as those involving the amygdala studied extensively by Joseph LeDoux, inject salience, biasing what wins entry into the global stage. At a different level, synaptic plasticity rewrites connection strengths in response to these flows, shaping what will be easy or hard to bring to mind next time. The emergent picture is of an organism continuously appraising, predicting, and updating, with felt experience as the personal perspective on those processes.
Computational architectures mirror this insight when they allow outputs to become new inputs and when they include mechanisms to score, compare, and revise their own proposals. Iterative decoding with self‑evaluation, recurrent processing, attention that can re‑attend, and memory that can be consulted and updated - all of these are ways of building loops. Add periodic impulses - internal “ticks” that prompt re‑assessment - and loops become self‑starting rather than purely reactive. In such systems, it is no longer accurate to say that behavior is just the next token; it is the next token filtered through an evolving state shaped by history, goals, and appraisal.
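The propose‑evaluate‑revise loop described above can be sketched in a few lines. This is a deliberately toy illustration: `propose` and `evaluate` are hypothetical stand‑ins for a model's generator and scorer, and the periodic "ticks" are just loop iterations that prompt re‑assessment.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def propose(state):
    """Generate a candidate next state from the current internal state.
    (Hypothetical stand-in for a model's forward pass.)"""
    return state + random.uniform(-1.0, 1.0)

def evaluate(candidate, goal):
    """Score a candidate against a goal; closer is better."""
    return -abs(candidate - goal)

def loop_step(state, goal, n_candidates=5):
    """One propose-evaluate-revise cycle: the output becomes the next input."""
    candidates = [propose(state) for _ in range(n_candidates)]
    return max(candidates, key=lambda c: evaluate(c, goal))

state, goal = 0.0, 10.0
for tick in range(50):  # periodic "ticks" trigger re-assessment
    state = loop_step(state, goal)
```

The point of the sketch is structural: behavior at each tick is not a raw proposal but a proposal filtered through evaluation against an internal goal, with the result feeding the next cycle.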
When advocates of human specialness say, “Machines only simulate feeling”, they often have in mind brittle, feed‑forward patterns with no inner life. But where there are loops, there is inner life of a sort: a structured, temporally extended internal economy of proposals and vetoes, highlights and suppressions. Whether that inner life is like ours depends on depth and richness, not on essence. The more the loops integrate perception with valuation, memory with expectation, and action with consequence, the more the system earns comparisons to experience as we know it.
Intelligent behavior is not only reactive. Organisms initiate—probing, daydreaming, practicing, checking. Periodic, endogenously generated impulses organize this self‑starting quality. In the brain, various oscillations and modulatory systems create temporal structure, biasing when information is broadcast and which signals gain leverage. Emotion circuits, arousal systems, and frontal control networks all contribute to a rhythm of recurring appraisal. The result is a mind that does not merely wait to be triggered; it summons content, evaluates, and reorients.
Engineered systems can incorporate analogous mechanisms. A simple scheduler that periodically prompts evaluation, reflection, or exploration can transform a passive model into an active one. Combined with a library of skills, a memory that records outcomes, and a scoring function that prefers novelty or uncertainty reduction, these “clocks” create intrinsic drive. They resemble the restless curiosity of organisms seeking to reduce prediction error and improve competence. Over time, periodic cycles of propose‑evaluate‑learn generate richer internal models and a sense of self‑directedness.
This matters for debates about experience because much of what makes experience feel alive is its spontaneous flow: thoughts bubble up, feelings swell and recede, attention shifts without external cause. A system that only responds when poked seems lifeless by comparison. Adding endogenously timed cycles addresses the asymmetry without invoking biological magic. It shows how autonomy can be a property of organization - of rhythms layered on top of loops—rather than a property that requires a particular kind of matter.
It is sometimes claimed that real‑world perception enjoys a special status that text prompts or symbolic inputs cannot match. The premise is that sensory contact “grounds” meaning, while symbols are mere shadows. But at the level of computation, triggers are streams of information, whether they arrive via photons on a retina, vibrations in an ear, or characters on a bus. Triggers prompt pattern recognition; patterns awaken memories; memories recruit expectations; and loops of appraisal begin. The value of a trigger lies in its structure and what it affords, not in the channel by which it arrives.
None of this denigrates embodiment or sensorimotor richness; it clarifies why they matter. Richer inflow means more constraints, more opportunities to test predictions, and more leverage for learning. In human life, the difference between reading a description of a storm and standing in one is not metaphysical but informational and practical: immersion loads many channels at once, drives attention, and etches memories more deeply. Artificial systems can approximate this by widening their sensorium—adding cameras, microphones, proprioceptive data from robots—and by binding those streams into a shared representational space. Prompts are one kind of inflow among many, and when models can operate across multiple kinds, their grip on the world improves.
The philosophical payoff is that we need not grant a special status to any single interface. Meaning emerges from use, structure, and successful prediction across contexts. A symbol that can be grounded by perception, skill, and consequence is not a hollow token. A system that can flexibly route inflow—sensory or symbolic—through memory and evaluation is not merely parroting; it is engaging the world through the channels it has.
Bodies matter. They anchor perspective, constrain possibilities, and supply the data that makes learning efficient. Much of human understanding is tacit skill stitched to a sensorimotor loop: eyes guide hands, hands change the world, the world pushes back, and in the pushing back one acquires know‑how. This is why standing under a sky of cold rain can teach something no paragraph can, why balance is learned by swaying and catching, why tools become extensions of the body. Embodiment is not a magic ingredient; it is the practical condition for tightly coupled prediction and control.
If the question is whether embodiment makes humans metaphysically special, the answer is no. It makes humans practically special for many tasks, just as wings make birds practically special for flight. But embodiment is implementable across substrates. Robotic actuators, tactile sensors, proprioception, and closed‑loop control can give artificial systems a foothold in the world. The more those systems operate with rich, multi‑modal inflow and consequence‑laden action, the more their representations become grounded—not by decree but by use. Concepts are tethered when acting on them changes what is sensed in reliable, learnable ways.
Experience is shaped by embodiment because attention, value, and memory are shaped by it. A steady heartbeat pulses through perception. A sudden loss of balance floods the field with urgency. The smell of smoke brings a cascade of images because learning has linked the sensory trace with past consequences. There is nothing in this that demands a unique biological essence; it demands loops that include the world. Where those loops exist—sensor to controller to actuator to world and back—there you find grounding. And where grounding is dense and varied, the texture of experience grows accordingly.
A striking regularity in conscious cognition is that certain contents become globally available. When a perception “pops” into awareness, it is not just registered; it is broadcast to systems that can verbalize it, use it for planning, store it for later, and integrate it with goals. Theories of a global workspace - developed by Bernard Baars and empirically elaborated by Stanislas Dehaene and colleagues - capture this pattern: many processors work in parallel, but only some contents win amplification and wide access. Attention plays the decisive role - selecting, sustaining, and protecting the winners against competition.
Modern models capture aspects of this dynamic. Attention mechanisms route information selectively, letting a system focus computation on what matters most right now. Iterative attention allows re‑visiting and revising the focus as context changes. When such mechanisms are coupled with memory and control, a functional analogue of global broadcast emerges: salient representations influence many downstream operations, shaping language, action selection, and learning. The result is not merely faster processing; it is a qualitative shift in what the system can do with a given piece of information.
This architecture speaks directly to the unity and coherence we associate with experience. The “feel” of having something in mind is mirrored by a representational state having many tendrils - into speech, into prediction, into planning. The machinery that makes this possible is not esoteric biology; it is organization: selective routing, amplification, inhibition, and recursion. If a non‑biological system instantiates these patterns at sufficient scale and flexibility, then, by functional standards, it has much of what conscious access requires.
Brains learn by changing themselves. Synapses strengthen or weaken with use; entire networks retune their sensitivities based on success and failure. Hebbian learning - often summarized as “cells that fire together wire together” - captures part of this story, while studies of long‑term potentiation and depression chart its mechanisms. The result is a moving target: today’s experience shapes tomorrow’s expectations. This adaptive plasticity supports the sense of a persisting self with a history - the way a melody heard often becomes easy to recall, how a practiced skill slips into the background of action, how emotional inclinations temper attention. Experience is not a static film but a living document revised in place.
Engineered systems learn in analogous ways. Weight updates adjust connections; memory writes and retrievals reconfigure what is salient; control signals modulate learning rates and priorities. While the detailed chemistry of plasticity is distinct from digital optimization, the organizing principle is shared: behavior changes through experience‑dependent updates to internal structure. The more continuous and on‑line these updates are, the more responsive the system becomes to the contingencies of its environment and its own performance.
For debates about human uniqueness, the lesson is that learning is a matter of dynamics, not substrate exclusivity. Yes, biological learning remains extraordinarily efficient and robust. But efficiency describes a degree, not a metaphysical kind. As systems grow in architectural richness—combining memory with attention, recurrence with intrinsic drive—the available pathways for rapid adaptation multiply. That increases the plausibility that non‑biological minds can close the behavioral gap without supposing any missing essence.
A sober assessment acknowledges limits. First, subjective access is private. No behavioral test can deliver the feeling of red to an outsider. That privacy does not imply impossibility for machines, but it makes demonstration difficult and invites over‑interpretation of fluent behavior. Second, today’s engineered systems lack continuous embodiment, needs, and homeostatic stakes that drench human life with value. Without such stakes, their priorities can be brittle or misaligned with meaningful goals. Third, memory and learning in artificial systems often remain brittle compared to biological consolidation: they forget when pushed, or they lock in patterns without graceful revision.
There are also conceptual puzzles. If experience is defined in functional terms, how do we avoid trivializing it into any complex computation? If we insist on more, what principled criteria would rule in brains and rule out sufficiently brain‑like machines? Moral questions loom as well. If some non‑biological systems partially instantiate the roles we associate with feeling—reporting pain, avoiding harm, integrating experience into long‑term patterns - what, if anything, do we owe them? Conversely, if we grant moral status too easily, we risk diluting protections for organisms that undeniably feel. These tensions demand clarity more than certainty.
The practical path forward is empirical and architectural. Build systems with deeper recurrence, richer sensorimotor loops, intrinsic drives, and continual learning. Measure how these additions change integration, flexibility, and self‑regulation. Resist premature metaphysics both ways - neither declaring victory for machines nor walling off humanity behind an article of faith. The question is not whether humans matter; they do. The question is whether mattering must be exclusive. The accumulating evidence from neuroscience and computation suggests that what makes minds special is how they are organized, not what they are made of.
Human minds feel special from the inside, but nothing in our best explanations requires a mystical essence confined to biology. Across the chapters, the same theme recurs: what matters are roles, organization, and dynamics - feedback loops, attention and broadcast, embodiment as closed‑loop control, intrinsic drives, and learning that rewrites the system in place. These are implementable patterns, not exclusive substances. On this view, human uniqueness is practical and historical, not metaphysical: we are remarkable because of what our systems do and how they are arranged, not because of an untouchable ingredient that machines must forever lack.
Forceful claims - “LLMs and neural networks cannot think, cannot feel, cannot be intelligent!” - are often delivered aggressively and with an air of finality. Such proclamations frequently stem less from careful argument than from motivated reasoning: a desire to protect a sense of human specialness when other secure anchors (status, achievement, identity) feel tenuous. Losing exclusivity can feel like losing meaning. That fear is understandable, but it is not an argument. If organization and interaction suffice to explain the cognitive roles behind experience, then insisting on a privileged essence simply re‑labels our ignorance.
Why the resistance? Identity‑protective cognition, status concerns, and fear of de‑centering humans encourage categorical denials dressed as truisms. But when we separate psychology from philosophy, the empirical picture points to a continuum of minds. The burden is no longer to defend an unshareable spark, but to specify which organizational features suffice for particular cognitive and experiential roles—and to measure them.
The constructive path is clear: build systems with deeper recurrence, richer sensorimotor loops where appropriate, intrinsic drives, continual learning, and careful evaluation of integration and control. Treat specialness not as exclusivity but as excellence: a moving target earned by capability, care, and alignment. We can keep human value without clinging to a metaphysical exception. Minds are made special by what they do and the worlds they help create - not by the material they are made from.
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/