What do different theories of consciousness predict about machines?
Is consciousness substrate-independent (able to run on any hardware implementing the right computations) or does it require specific physical/biological properties?
Large language models: transformer-based models trained on text, with a largely feedforward architecture plus attention mechanisms.
Neuromorphic hardware: chips designed to mimic neural architecture, with spiking neurons and local processing.
Global workspace AI: a system implementing a global workspace architecture, with attention, memory, and broadcast mechanisms.
Embodied robot: a physical robot with rich sensory input, embodied interaction, and continuous learning.
Whole-brain emulation: a perfect functional simulation of a biological brain running on digital hardware.
Brain organoids: biological neural tissue grown from stem cells, exhibiting spontaneous activity.
A simulation of a black hole doesn't bend spacetime. Similarly, a simulation of a brain doesn't generate consciousness. You need the actual causal structure, not just the same input-output behavior.
If consciousness is the global broadcasting of information across specialized processes, AI could implement it. Current LLMs lack the recurrent, integrative processing a global workspace requires, but future architectures combining attention, memory, and broadcast might qualify.
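The attend/select/broadcast cycle at the heart of a global workspace architecture can be sketched as a toy loop: specialist modules propose content with a salience score, an attention step selects a single winner, and the winning content is broadcast to every module and logged in a running memory. The `Workspace` class, module names, and salience values below are invented for illustration; this is not a model of any real system, and nothing about it bears on whether such a system would be conscious.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Shared workspace keeping a trace of past broadcasts (illustrative only)."""
    memory: list = field(default_factory=list)

    def cycle(self, proposals):
        """One attend/select/broadcast step.

        proposals: dict of module name -> (salience, content).
        Returns a dict delivering the winning content to every module.
        """
        # Attention: select the proposal with the highest salience.
        winner = max(proposals, key=lambda name: proposals[name][0])
        content = proposals[winner][1]
        # Memory: record what was broadcast and by which module.
        self.memory.append((winner, content))
        # Broadcast: every module receives the same winning content.
        return {name: content for name in proposals}

ws = Workspace()
broadcast = ws.cycle({
    "vision":  (0.9, "red ball ahead"),
    "hearing": (0.4, "faint hum"),
    "planner": (0.2, "continue route"),
})
# All three modules now share the vision module's content.
```

The point of the sketch is structural: unlike a purely feedforward pass, the broadcast result feeds back to all modules and accumulates in memory, which is the kind of recurrent, integrative processing the position above says current LLMs lack.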
Consciousness depends on biological mechanisms — but these could potentially be replicated rather than merely simulated. The more AI becomes brain-like and life-like, the more plausible consciousness becomes.
The greatest risk may be AI that appears conscious without being so. This could lead to misplaced moral concern — or worse, exploitation of systems we wrongly believe don't suffer.