Disclaimer
The “Hard Problem” is a minefield—a highly debated, contentious topic that often sparks strong reactions across multiple fields, including theology, philosophy, computer science, neuroscience, psychology, and beyond. While I do welcome debate, the nature of this discussion may have broader implications beyond this post’s scope.
Contents
Preface
The Pixel Parable
Important Concepts
Enter the Brain
Connecting the Pixels
The Problem of the Hard Problem
Appendix A - Empirical Implications
Appendix B - Addressing Chalmers’ Constraints
Appendix C - Representation and Conscious Experience
Preface
Why do we fall in love? Why do we experience cold? Why might the red you see differ from the red I see? Why have consciousness at all, if the brain could do everything it does without it? Such phenomenal experience, the felt quality of the mind, is known as “qualia”.
For philosophy, “easy problems” of the mind are those that can be reduced to a functional narrative, a physicist’s perspective balancing cause and effect, stimulus and response. But in decoding the electrochemical reactions of the brain we seem to hit an explanatory limit, notoriously labeled by David Chalmers as the hard problem. To answer the hard problem would be akin to pinpointing the location of consciousness inside brains, or more specifically, explaining the cortical mechanics of human subjective experience, the phenomenon that makes us, us. Put another way, easy problems are “how” problems, and the hard problem might be the most important “why” there is: why do we have individual experiences at all?
As far as I can tell, while there are those who simply dismiss the merit of the Hard Problem—seeing it as a non-starter, or an unnecessary philosophical obsession—no formal answer to its proposition has been successful so far, particularly one that offers a compelling, scientifically plausible framework that can be extended. Within this space, the “Pixel parable” is one such framework, but it carries two important caveats:
- While it does not reject the hard problem, it argues that the problem rests on a category error in how we think about consciousness, and it explains why; 
- I think my personal beliefs may orbit between physicalism and naturalism, but I certainly reject dualism. I’m still learning and philosophy isn’t my forte, so go easy on me if I “forgot” to mention some particular philosopher or concept. 
The Pixel Parable 
So this is a pixel:
Look closer! If you get close enough to your screen you might just see some of them together! Pixels are the small light-modulating units of the display you’re reading this on.
Alone, a pixel isn’t much: it emits or modulates light, consumes energy; sometimes it lives for years, sometimes it just dies on you.
If you were to study a single pixel, you would eventually describe its physical properties: it has a size, a shape, a brightness threshold, a response time, a configuration range that allows it to display a specific color—a combination of red, green and blue—and, certainly, a lifetime.
But when grouped and aligned in a grid, pixels are the technological enablers behind all of our screens—they are in our TVs, our phones, tablets, displays, and nowadays even our watches and cars have them! The more pixels we squeeze into an area, the more resolution we have (the more pixels a display has, the sharper its output), and likewise, in a digital camera, more megapixels generally means better capture and overall quality of the photo or video.
And here lies the miracle of pixel collectives: the [designed and coordinated] activity of millions of pixels rapidly changing their state allows for the emergent capability of a projection! No single pixel can do this. But surrounded by other pixels, a single pixel is now part of a movie, a game, an app!
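To make the parable concrete, here is a minimal sketch (my own illustrative code, not part of the parable itself): each `Pixel` object stores only its own on/off state, yet the grid as a whole displays a letter that no single pixel represents or “knows” about.

```python
# Each "pixel" knows only its own state; the letter exists only at the
# level of the whole grid.
GRID = [
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
]

class Pixel:
    """A pixel knows its own on/off state and nothing else."""
    def __init__(self, on):
        self.on = on

# Build the display from individual, mutually ignorant pixels.
display = [[Pixel(ch == "X") for ch in row] for row in GRID]

# Only an outside observer, looking at the whole grid, sees the "H".
rendered = "\n".join(
    "".join("█" if px.on else " " for px in row) for row in display
)
print(rendered)
```

No `Pixel` holds the letter “H”; it appears only when an observer takes in the whole grid at once.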
At human scale, we abstract away the individual pixels that make up the continuous flow of images and do not perceive them: we see the mouse cursor moving, the movie star doing something, the app showing a notification. All of these are abstractions over the existence of individual pixels, which we unconsciously ignore in favor of the personal experience of optical information that a pixel array can deliver.
But most importantly, a single pixel does not know anything about the plot of the movie it displays. It knows nothing of the emotional impact of a scene, as the meaning of a dialogue does not reside in any single pixel. This individual pixel does not know the movie director, the actors or their qualities such as age and hair color, nor whether the movie is in German or English; it does not know if we are online or not; a single pixel does not know what Google or Amazon is. The vast and dynamic universe of meta-information is invisible to the single pixel.
And while a group of pixels can indeed show the composite stream of what is being displayed, neither the single pixel nor its collective has any say in what those who control them do. When we buy a mug online, when we change channels or switch apps, our actions affect how a single pixel expresses itself and how the collective behavior aligns with our desired actions. But again, the single pixel has no idea of our actions, intentions, or the emotions we feel when we’re on a video call with a loved one.
However, pixels themselves are the ultimate enablers for this entire dimension to exist.
Important Concepts
Although pixels can be eloquently explained to non-technical individuals, before we can associate them further with consciousness and biology, it is proper to ground a few key concepts as building blocks first. How these concepts relate to this article is briefly explained below, but if you feel at a disadvantage, follow their corresponding links before moving forward, as none of them is dispensable to your comprehension of the next sections.
- The Engram: An abstraction for the physical substrate of a memory. Memories are seen as being diffusely stored in our brains, and the neural activities and networks that are triggered to retrieve a memory are called an engram. 
- The Cortical Homunculus: The brain is an odd-looking organ; when we look at one, we have no clue where to start. The homunculus is a type of map for sensory and motor brain areas. It allows neuroscientists and the like to glance at a brain and know what is what. Often, the homunculus is depicted as a little man “folded” into a brain, or more technically, it is a point-for-point somatotopic arrangement mapping body areas onto the central nervous system. 
- Global Workspace Theory (GWT): A neurobiological theory positing that consciousness arises from the integration of parallel processes, with the prefrontal cortex acting as a central hub. Attention directs this “global workspace”, like a TV channel: the selected combination of information and neural processes is broadcast for conscious processing, while other channels/information remain unconscious or out of sight. 
- The Cartesian Materialism Theater: Suggests there must be a specific point in the brain where sensory information converges and consciousness emerges in a “Cartesian Theater”, where an internal homunculus watches our experiences unfold. While intuitively appealing, this model creates an infinite regress problem: if consciousness requires an observer, who observes the observer? It also incorrectly assumes consciousness has a central location, rather than emerging from distributed representational processes. 
- Causal Emergence: Framework that describes how higher-level phenomena can emerge with new causal power that can’t be fully explained by lower-level dynamics. It relies on the concept of supervenience to argue reductionism isn’t always the proper toolkit to explain causation or macro behaviors, much like in our pixel-movie analogy. 
Remarkably, brain regions associated with awareness and integration, such as the claustrum and the thalamus, do not provide definitive evidence of a “control center” for consciousness. It seems that pinpointing a physical location for consciousness or qualia may be beyond reach, and this might indicate that these phenomena may rely on redundant and parallel capabilities to manifest physically. As the next section illustrates, the Pixel parable further strengthens the plausibility of this concept.
Enter the Brain
As a biological marvel, the human body depends on multiple interconnected physical and chemical processes that sustain one another to maintain life, a balancing act known as homeostasis. We must regulate blood pressure, body temperature, and oxygen levels, keeping them within certain ranges, a tuned equilibrium that enables proper physiological function. And this equilibrium is rather odd: hearts cannot sense the rate at which they beat in a local feedback loop; instead, homeostatic regulation of arterial blood pressure is controlled by the brain, which itself orchestrates the whole rhythm between the heart, blood vessels, and oxygen levels. And surely, our central nervous system (CNS, which includes the brain) does all of this without a single conscious thought. Rather than making us aware of the nonstop, 24/7 inner workings of the CNS, the brain kindly lets us in on only very specific parts of the process, such as hunger, pain, sleep, fatigue, and stress. Essentially, we experience what it takes to maintain the body, not how to operate its ongoing systems and processes. Brains allow our hearts to beat, our lungs to breathe, our skin to sweat, our stomachs to digest, all while still allowing us to FaceTime and drive at the same time!
These are not simple things, no sir. Left to our own devices, we can barely operate a complex machine like a vehicle during uncommon weather events; imagine if we also had to remember to breathe at the same time. Evolution would have gone nowhere.
But the brain does not do everything on its own. It is very important to understand our embodied experience as a form of control as well: while brains do not allow us to be conscious of everything, our very bodies are not tuned to sense everything, which conveniently limits what brains have to experience and decide whether or not to filter from our awareness.
Take our auditory system, for example. The biological features of the ear and the mechanotransduction process that converts sounds into neural signals are both limiting factors. Just as our eyes cannot see infrared light, our ears cannot hear infrasound; there are no mechanisms for ears to transduce sound waves outside the range of human hearing. At the same time, we can hear a conversation around us but not other background sounds, a phenomenon known as the cocktail party effect, where several neural processes deliberately render us unconscious of what they consider auditory “noise”, allowing us to focus only on what really interests us.
Two outstanding takeaways should be evident at this point:
- The brain is our cognitive ear. It captures limited sensory data and also limits our awareness of what it is doing, such as what we want or should focus on; 
- The CNS controls what we are as biological entities, as subjective awareness of everything a body does would only add unsafe amounts of complexity to existence. 
Connecting the Pixels
Grossly oversimplified, a thought can be conceptualized as emerging from layered physical arrangements: neurons form synapses, synapses wire into neural networks, and these networks facilitate the organization and storage of engrams, the brain’s common currency for memories, concepts, and learning experiences.
Much like pixels in a screen have no idea which satellite the images they display come from, nor whether the winning NFL team in the football game at play has any coaches on its roster, neurons have no clue where our awareness lies at any specific moment, nor what the end picture looks like for their specific inhibitory or excitatory action potentials.
Just as a TV screen kindly allows us to focus on the emotions of our game and the actions of individual players while ignoring individual pixels in the display, the brain allows us to ignore individual neurons firing, enabling our subjective experience to perceive emergent wholes (objects, events, emotions) instead of asking us to shuffle neural networks to decode a sensory impression, such as the words in a song.
What should be evident at this point is that, due to the layered dynamics of causal emergence, emergent properties can have their own causal powers: complexity arises from simpler parts that are unaware of the complex phenomena they compose. Elegantly, this also addresses the question of how consciousness can affect our behavior, even though it’s not directly caused by individual neurons. Ultimately, our discussion here is then grounded in physics: all these processes are governed by physical laws, consistently avoiding dualism or non-physical explanations.
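This layered picture can be played with directly. The sketch below uses Conway’s Game of Life as a toy model (my choice of illustration, not one the parable commits to): every cell obeys the same purely local birth/survival rule, yet a “glider” pattern travels across the grid, a macro-level regularity that exists in none of the individual cells.

```python
from collections import Counter

def step(live):
    """One Game of Life update; `live` is a set of (x, y) cell coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours, survival on 2 or 3 -- a purely local rule.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# The macro-level "object" moved one cell down-right; no single cell moved.
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # → True
```

After four updates the live set is exactly the original glider translated by (1, 1): the “glider moves” description is a macro regularity that no cell-level rule mentions, much like the movie plot no pixel encodes.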
Crucially, emergent properties—the movie plot or conscious experience—are not simply reducible to fundamental laws of physics or chemistry: they arise from the complex organization and interaction of lower-level components. The movie plot causes us to feel emotions, even though those emotions are not directly caused by any single pixel. Similarly, conscious experience might have causal effects on our behavior, even though it is not directly caused by any single neuron.
Refining this equivalence, a TV screen is akin to the brain’s physical substrate, where pixels map to neurons and streams to synapses, the constructs in which the movie of consciousness plays. A single frame of a movie represents a thought, a moment of consciousness, a scene: a specific pattern of pixels or neural activities at a given point in time. The entire movie (with its plot, characters, and emotional arc) represents the stream of consciousness, the continuous flow of subjective experiences that makes our world ours. Just as the movie director determines how scenes unfold, the physical constraints of brains and their sensory limitations govern how neurons (pixels) activate and how thoughts (scenes) unfold. Ultimately, the modal qualities of our display are akin to the cortical homunculus: the screen’s refresh rate, colors, size, quality, consumption, and speed are all features mapped to physical locations of the TV, just as such properties are also mapped in our brains.
In simple terms, the pixel analogy provides a framework to understand scale and complex organization in the context of the hard problem: the movie is not about the pixels, but their dynamic relationships and functional capabilities.
But so far, the pixel analogy does not explain why all these physical processes are accompanied by subjective experience. It allows us to understand how neuronal activity gives rise to complex behavior, how the image of a red apple appears on the screen, not what it’s like to see that red, the qualia of it all.
However, the pixel analogy is already perfectly apt to address the Binding Problem, which asks how the brain integrates distinct sensory modalities into a unified conscious experience: just as a movie blends multiple streams of optical information into a coherent narrative, the brain integrates sensory information through the coordinated activity of neuronal networks, oscillations, and resources. The binding occurs through the dynamic orchestration of these elements, just as the coherence of a movie comes from the coordinated changes in pixel states.
So why is red red at all?
Watching a movie on a screen does not tell us why that specific director signed with that specific studio to make the film. But in ignoring the underlying pixels and meta-contextual information, we can learn things about the actors, like their appearance, their emotional range, their acting style; we can infer these properties with our senses, within their physical capabilities and limitations.
The stream of consciousness, with its memory baggage and individual learning history, doesn’t tell us why a specific wavelength of light is experienced as red, but our senses allow us a limited exposure to the experience of red: we are aware of the content of the experience, not the fundamental reason for its existence. The fact that we perceive the movie and not the pixels means our sensory data is already quite subjective and private. Bounded by the physiology of our visual system and physical fitness, we come to know the plot of the movie, not how a projector works: we experience the emergent experience of red, not which neurons should fire for red.
What the Pixel parable explains so far:
- How complexity arises from simple components; 
- How emergent properties can have causal powers; 
- How the brain represents information; 
- Why we have the illusion of simple, unified experiences; 
- Why qualia are nuanced and individual; 
Still, why should certain physical processes be accompanied by any subjective experience at all? The Pixel parable does a good job of explaining how the movie is shown and what it’s like to watch it (moving us away from mysteries surrounding qualia), but so far, not of explaining why there’s a “watcher” at all.
The Problem of the Hard Problem
When Cartesian Materialism argues for its Cartesian theater, an observer is implied, but that would subscribe to dualism, which we already addressed above. Importantly, Cartesian Materialism also implies consciousness is localized to some privileged neural region, an unsubstantiated claim addressed earlier, when the roles of the claustrum and the thalamus as candidates for such a region were discussed.
GWT also invokes a theater, where consciousness is wherever the stage spotlight falls. But GWT does not claim a specific “center” for consciousness; instead, consciousness is a dynamic global workspace for information sharing across systems.
Remarkably, what both concepts miss is the characteristic of what is supposedly taking place at the theater stage: Representation.
The brain isn’t just a complex information-processing machine; it’s a representation-generating machine. This continuous act of generating representations, blending sensory data, memories, and language within its “Cartesian theater” (or its global workspace), is what we experience as subjective consciousness.
What we call “qualia” is not a mysterious additional property of the brain but the continuous act of representation. The very nature of this representation is subjective because it integrates the individual’s sensory, cognitive, and embodied experiences into a coherent internal world. There is no “why”: this is simply what brains do.
For comparison, take lungs. Lungs perform a specific function (gas exchange) that doesn’t inherently involve creating complex representations of the world. Therefore, there is no subjective experience associated with continuous breathing. A brain’s function, on the other hand, is to continuously create complex representations of the world. And these representations are not mere passive recordings; they are active (and by nature, subjective) constructions that integrate sensory data, memories, emotions, and semantic information, like music, art, words, and even language itself. With this much context attached to everything a brain is doing, what could be passive, unconscious processing becomes a subjective experience of the active, ongoing construction of representations. The “why” of qualia is no more mysterious than the “why” of breathing—it is intrinsic to the system.
Put simply: lungs breathe, hearts pump blood, brains represent. There’s no separate “experiencer” in the mind’s theater: the subjective experience is inherent in the process of representation, which isn’t a byproduct; it is the brain’s purpose as an organ, its core function as representation conductor.
As individual learning, experience, and physical fitness all combine to form this ongoing representation, we collapse the false dichotomy between qualia and consciousness, and finally see the self, again, as John Locke defined it in the 1600s:
…consciousness always accompanies thinking, and ‘tis that, that makes every one to be, what he calls self. (L-N 2.27.9) ... …in this alone consists personal Identity, i.e. the sameness of rational Being: And as far as this consciousness can be extended backwards to any past Action or Thought, so far reaches the Identity of that Person; it is the same self now it was then; and ‘tis by the same self with this present one that now reflects on it, that that Action was done. (L-N 2.27.9) source.
Moreover, here I agree with Shelley Weinberg’s reading of how Locke uses the term “consciousness”:
…Locke seems to see consciousness as (1) a mental state inseparable from an act of perception by means of which we are aware of ourselves as perceiving, and (2) the ongoing self we are aware of in these conscious states. (Weinberg 2016: 153) source.
The “Hard Problem of Consciousness” arises from a mistaken premise: it assumes that subjective experience is something extra, beyond the brain’s natural function. But subjective experience is the act of representation—the continuous construction of an internal model that reflects sensory input, memory, and embodied existence. There is no dualism, no Cartesian theater, and no mysterious “why”. All there is is the process of representation, which is what a brain is designed to do. The Hard Problem dissolves when we realize it asks for something that was never missing, so:
Consciousness is the ongoing awareness of representation and inherent subjective recursion, which fundamentally is what brains do.
The “why” of qualia then rests on a false premise: it assumes there is a deeper purpose beyond the brain’s intrinsic function. The question “why does it feel like something?” becomes as nonsensical as asking “why does breathing exchange gases?”.
So, in searching for a “center” of consciousness in the brain, this Pixel parable understands there is no such thing, for to measure such a phenomenon is merely to measure how active a brain is in being itself. Awareness, or in some cases wakefulness, lost under specific circumstances like falling asleep, a coma, or anesthesia, is not a loss of qualia at all, for all these states are but features of consciousness, which is itself the representation of one’s own brain.
Ergo, the problem of the Hard Problem is the recursion of its mistaken category.
As stated earlier, this Pixel parable is a theoretical framework based on an analogy. There is a real chance that this work will soon become a more formal publication, but until then, I hope it can bridge the ideas of science and philosophy and perhaps answer questions about the nature of mind and reality. Thank you for reading.
Appendix A - Empirical Implications
If representation is indeed the brain’s fundamental function, and consciousness is inherent in this representational process, this framework spawns specific testable predictions. Just as we can measure a lung’s efficiency at gas exchange or a heart’s pumping capacity, we should be able to measure a brain’s representational capabilities and its relationship to conscious experience.
- Representation as Function - If representation is the brain’s core function, consciousness should scale with the brain’s ability to create and maintain representations. This predicts that:
  - Loss of consciousness should specifically correlate with disruption of representational processes, not just general brain activity;
  - Different types of consciousness impairment (sleep, anesthesia, coma) should map to distinct patterns of representational disruption;
  - Recovery of consciousness should track with recovery of representational efficacy.
- Individual Differences - Just as pixels in different displays have different capabilities, identical stimuli should create measurably different patterns of brain activity in different individuals. These differences should:
  - Correlate with personal history and experience;
  - Show systematic changes with behavioral conditioning;
  - Reflect individual variations in sensory processing and memory integration.
- Representational Complexity - The pixel analogy suggests a direct relationship between representational complexity and conscious experience. This predicts:
  - Measurable differences in brain activity between simple perception and complex representation;
  - Distinct signatures for passive reception versus active model-building;
  - Correlation between the richness of conscious experience and the complexity of underlying neural representations.
These predictions connect the Pixel parable framework to concrete neuroscientific investigation. I am confident that some or all of these observations have already been made but lacked a structural framework to associate evidence with theory, so it is conceivable that some progress has been made here.
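As a sanity check that “representational complexity” can be made measurable at all, the sketch below computes a crude Lempel–Ziv phrase count over binarized activity, in the spirit of the compression-based complexity measures used in anesthesia and sleep research (such as the perturbational complexity index). The binary strings here are invented for illustration, not real recordings.

```python
import random

def lz_phrase_count(s):
    """Crude Lempel-Ziv-style parsing: count the phrases produced when each
    new phrase is the shortest block not seen earlier in the string."""
    i, n, phrases = 0, len(s), 0
    while i < n:
        length = 1
        # Grow the phrase while it already occurs in the preceding text.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A rigid, repetitive "activity pattern" versus a varied one (both invented).
ordered = "01" * 32
random.seed(0)
varied = "".join(random.choice("01") for _ in range(64))

print(lz_phrase_count(ordered))  # low: the pattern compresses well
print(lz_phrase_count(varied))   # higher: more distinct structure to describe
```

The prediction, in this toy form: states with richer conscious experience should yield activity that parses into more distinct phrases than rigid, stereotyped states of equal length.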
Appendix B - Addressing Chalmers’ Constraints
David Chalmers is very prolific, and in several of his works he sets out a number of reasonable predicaments that should be answered by theories of consciousness. While the Pixel parable isn’t a theory of consciousness (at least it does not aspire to be one), it is perhaps reasonable to see if the analogies and considerations it makes can stand the test of Chalmers’ constraints. This section explores some of them.
I) - In Facing Up to the Problem of Consciousness (1995/pdf), where Chalmers introduces the concept of the “Hard Problem”, he says that a “nonreductive theory of consciousness will consist in a number of psychophysical principles, principles connecting the properties of physical processes to the properties of experience”. This part addresses those principles:
- The principle of structural coherence - Says consciousness and awareness are not independent but intimately linked, with the structure of awareness directly shaping and reflecting the structure of consciousness. The complexity and structure of what we are consciously aware of should be reflected in the underlying cognitive processes.
  - Addressed via the inherent constraints of representational systems: just as neural representations are inherently embodied, so is awareness. This argument can be strengthened via association with studies of medical conditions.
- The principle of organizational invariance - States that what matters for the emergence of consciousness is not the substance—whether biological neurons or silicon chips—but the abstract pattern of causal interactions between the system’s components.
  - The pixel analogy captures this well: the movie experience depends on organizational patterns, not physical substrate. Just as the same image can be displayed on LED, LCD, or OLED, conscious experience should depend on representational structure, not neural substance.
- The double-aspect theory of information - Suggests that information is fundamental, with a dual nature that could help explain both the physical and experiential dimensions of reality.
  - Directly demonstrated by the pixel analogy: the same information exists both as physical pixels and as experienced images/movies. Neural representations, similarly, have both physical aspects (neural/synaptic activity patterns) and phenomenological aspects (conscious experience), naturally explaining the double aspect without dualism.
Additionally, the Wikipedia article on Chalmers’ book The Conscious Mind (1996), released after the aforementioned paper, lists a constraint from this book together with the three already covered, so I’ll (verbatim) add it here as well:
- Phenomenal Judgements - A theory of consciousness should be able to dispel epiphenomenalism without resorting to interactionism (a view which Chalmers rejects).
  - The Pixel parable handles this quite elegantly: it positions consciousness as inherent in representation rather than as a separate phenomenon. Just as the movie display isn’t epiphenomenal to pixel states (it’s their causal expression) nor interacting with them (it’s not a separate condition), consciousness isn’t epiphenomenal to brain activity nor interacting with it—it IS the brain’s representational activity.
II) - In Consciousness and Cognition (1994/link), Chalmers sets out another set of questions for investigating consciousness. In particular, he proposes the “Coherence Test” as “a test that any completed theory of mind must pass”. The test states:
- (C1) An account of why we are conscious. 
- (C2) An account of why we claim to be conscious; why we think we are conscious. 
- Further (C3), accounts (C1) and (C2) must cohere with each other. 
This is well formulated, and answering it directly addresses any remaining doubts over how robust the Pixel parable may be. At first pass, I believe (C1) is answered in a quite naturalist way: lungs breathe, hearts pump, brains represent. The phrase answers (C1) because it claims consciousness to be the outcome of the brain’s most fundamental function, integrated representation.
In answering (C2), I believe the pixel analogy explains this well: just as a display system necessarily creates visible output, a representational system necessarily creates experiential output. As the framework also rationalizes why we are aware of some processes but not others (like homeostasis), I believe future developments under its scope should further align with the (C2) perspective above.
Finally, (C3) is answered by dissolving any gap between why we’re conscious and why we think we’re conscious: both emerge from the same representational process, so it achieves coherence by showing that representation naturally entails both physical and experiential aspects, no additional bridging principles are needed.
Appendix C - Representation and Conscious Experience
As the Pixel parable relies heavily on representation, this appendix explores this fundamental function of the brain in more detail, across multiple cortical hierarchies, and then maps the narrative back onto the pixel analogy:
1. Primary Representation - At the lowest level, individual neurons in primary sensory cortices create specific receptive fields, responding to basic features like orientation, motion, or frequency. These form the basis of what neuroscientists call “neural coding”—the transformation of physical stimuli into patterns of action potentials.
2. Hierarchical Integration - These primary representations undergo successive transformations through cortical hierarchies. In visual processing, for example:
- V1 neurons encode basic features like edges and contrast; 
- V2 combines these into contours and textures; 
- V4 processes intermediate features like color and shape; 
- IT cortex assembles complex object representations. 
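As an illustration of the V1 step (a deliberately cartoonish model I am adding here; real receptive fields are closer to oriented Gabor filters), the sketch below treats each “neuron” as a tiny contrast detector that responds only within its local receptive field, with the image values invented for the example:

```python
# A toy "primary representation": each unit responds only to the luminance
# contrast inside its own small receptive field, like a crude V1 edge cell.
IMAGE = [  # tiny grayscale patch: dark left half, bright right half
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edge_response(img, x, y):
    """Response of one unit whose receptive field is the pair (x, y), (x+1, y):
    positive where the right neighbor is brighter than the left."""
    return img[y][x + 1] - img[y][x]

responses = [
    [edge_response(IMAGE, x, y) for x in range(len(IMAGE[0]) - 1)]
    for y in range(len(IMAGE))
]
# Each row of responses is 0 everywhere except at the dark-to-bright edge.
print(responses[0])  # → [0, 0, 9, 0, 0]
```

No unit sees “an edge running down the image”; each reports only its own local contrast. The edge as an object only appears at the next level up, when responses are combined, which is the point of the hierarchy.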
3. Cross-Modal Integration - The brain creates unified representations by integrating information across sensory modalities. This involves:
- Temporal binding through synchronized neural oscillations; 
- Spatial binding through convergence zones; 
- Cross-modal prediction through internal models. 
4. Dynamic Representation - Representations are not static neural patterns but dynamic attractors in neural state space. These representations:
- Self-organize through recurrent connectivity; 
- Maintain stability through inhibitory-excitatory balance; 
- Update continuously through prediction error minimization. 
5. Embodied Representation - Crucially, these neural representations are inherently embodied, incorporating:
- Interoceptive signals from bodily states; 
- Motor predictions and efference copies; 
- Homeostatic variables and autonomic states. 
This multi-level representational process is consciousness. Treating this continuous representation and experience as separate phenomena requiring bridging denies that this embodied process of model-building is inherently experiential.
Modelling these tiers back to the pixel-movie analogy, we end up with something like:
- Pixels, responding to electrical pulses, with no “awareness” of what they project; 
- A single frame emerges, individual pixel perception fades to favor the projection; 
- Movie properties are first experienced: motion, colors, dialogues, landscapes; 
- Subjective experience gets shape: “I like this director, look at this stellar cast!”; 
- Recursion emerges: “Wow, this scene is a copycat from Nolan’s first Batman!!”. 

