May 2026
Monday, the 11th
I wrote this yesterday. It's mostly a response to this article. I'd like it to be widely read, but who has the energy to convince editors? And why should I bother? It looks like all the facts will be outdated by next week anyway.
Minds, Memes and Morality
When I sneeze, I hear my father sneezing. When my brother speaks, I hear my father speaking. Mathematically tractable sound waves, triggering the mathematically intractable phenomenon of a mind remembering another mind.
Reading Peter Godfrey-Smith, I saw mind and consciousness develop in parallel with life: ultimately yet-to-be-clarified aspects of physical systems exhibiting ever more complex stimulus-response patterns to complement an ever more complex surrounding ecosphere. There seems to be a class of biologists and neurologists who see consciousness as an intrinsic aspect of each such specific physical system, in particular as related to the stimulus-response (termed perception-action) duality.
It is typically the case that conscious awareness of something implies the relative simplicity of both stimulus and response, even if these are connected by intractable internal processes: relatively simple stimuli can be associated with infinitely more complex physical objects that are the sources of said images, sounds, smells, etc. In fact, many humans attribute complexity to phenomena that have relatively simple physical explanations, such as the weather. Atheists are still having a hard time trying to bring objectivity into our interpretation of the universe.
As cultural elements nominally independent of human genetic material, religions are part of the "meme" universe (introduced by Richard Dawkins). Easy examples of memes are words: patterns of sound with lifetimes typically longer than human lifetimes, spread from human to human. In the more general context of information, many concepts are also memes: long-lived patterns of information communicated from human to human (think "prime number"). Feelings are also concepts, but they correspond to rough patterns of neural activity common to all humans because we share the genes that build the underlying hardware.
The conscious human mind mixes "meme-concepts" and "hardwired-concepts". The word "mama" is nearly universal because it's the simplest linguistic interpretation of the cry of infants. Onomatopoeias like "cuckoo" are non-arbitrary non-meme words with obvious objective explanations. Basic feelings are concepts with not-so-obvious objective explanations, and the words that label them are arbitrary. Intelligence, consciousness, attention and other concepts are meant to quantify aspects of the mind as it works with all concepts. Human languages provide best-effort labels for human concepts, and human stories and monographs are meant to quantify relations between concepts by fixing relations between their labels. And today humans have LLMs that seem to encode many of those relations and correlations. I find it perfectly natural to wonder about the overlap between "mind", "intelligence", "consciousness" and "LLM".
But the previous paragraph is rendered mostly wrong by how fluid and loose concepts and words are in the real world: the absurdity of the human mind is excellently documented in the appropriately titled Brain Bugs. Human memories and concepts are analogue in the sense that synapses weaken over time and are strengthened every time a signal passes through them. Once out of its "training" phase and into "inference", an LLM is just a fixed database. So we can safely call out the ignorance of Richard Dawkins and state that there is nothing like a self-reflecting, self-aware, conscious LLM. Apparently not even humans need consciousness to handle language.
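The contrast can be sketched as a toy model (this is an illustration, not neuroscience: the decay rate and reinforcement increment are invented parameters):

```python
# Toy contrast: a "synapse" whose trace fades with disuse and strengthens
# with use, versus an LLM weight that is frozen once training ends.

def decay(weight, dt, rate=0.01):
    """Analogue memory: the trace weakens for every idle time step."""
    return weight * (1 - rate) ** dt

def reinforce(weight, boost=0.1, cap=1.0):
    """Each signal passing through strengthens the synapse, up to a cap."""
    return min(cap, weight + boost)

synapse = 0.5
synapse = decay(synapse, dt=10)   # ten idle time steps: the memory fades
synapse = reinforce(synapse)      # one use: it partially recovers

frozen_llm_weight = 0.5           # inference phase: nothing updates this
```

The point of the sketch is only that the biological quantity never stops changing, while the model's parameters do.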
Except that LLMs are wrapped in fluid scripts and prompts that do work like memory and do access the LLM in open-ended loops, yielding AI agents. And we have barely begun to investigate the physics of semiconductors, probably on our way to sticking more efficient and complex components into our machines. The stated goal of the various companies is to reach AGI and/or ASI, loose terms for "human-like" and "better-than-human" intellectual abilities.
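That wrapping can be sketched as a loop (all names here are invented for illustration; `fake_llm` is a stand-in for a real model call, whose weights never change):

```python
# Minimal sketch of an "agent": an open-ended loop that feeds a fixed model
# a growing memory, so the fluidity lives in the wrapper, not in the weights.

def fake_llm(prompt):
    """Stand-in for a real model call; behaves as a fixed function."""
    return "DONE" if "step 3" in prompt else "keep going"

def run_agent(task, max_steps=5):
    memory = [task]                      # the script, not the LLM, remembers
    for step in range(1, max_steps + 1):
        prompt = " | ".join(memory) + f" | step {step}"
        reply = fake_llm(prompt)
        memory.append(reply)
        if reply == "DONE":
            break
    return memory

# run_agent("tidy the files") loops until the stand-in model says DONE
```

Everything mutable, the memory and the stopping rule, sits outside the model.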
So what happens then? Let's ignore the experts who say ASI is a threat, and the mathematics that says the AI agents we already have cannot be controlled. If we take science seriously, the concept of consciousness describes a continuum. We mostly protect and provide for infants and the mentally ill. Some of us are fighting to protect apes, cetaceans and/or octopuses. What do we care for? The fact that a specific configuration of molecules, atoms and ions can't help but be conscious? Or the fact that the mind and all its workings that we can put into words are just approximations of the underlying quantum mechanics, approximations that can in principle also emerge from systems functioning on completely different underlying physics? A numerical simulation of an explosion is not an explosion, but a numerical simulation of an information-processing system is an information-processing system.
So what is it that I care about a few seconds after I sneeze? The physical object that I am remembering, or the patterns of that physical object? I expect to be pointed to the people who are enslaved today and the animals who are suffering today. Some would like to protect AI agents; I just think it would take zero effort to simply stop before we get brains in jars. But in the real world, society already allows the creation of literal mini-brains in Petri dishes.
