![[AGI-25 Conference - Machine Consciousness Hypothesis - Joscha Bach-1755518057757.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis - Joscha Bach-1755518108230.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis - Joscha Bach-1755518125334.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis - Joscha Bach-1755518167239.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis - Joscha Bach-1755520403418.webp]]
Plato takes these ideal forms as starting points. The ideal table is sometimes given as an example, but a table is probably not a core archetype of the universe or of god itself: tables are created by human beings in particular cultural contexts, and they are an archetype in our perception, in the way we classify the world; there is something like an ideal table. The forms that Plato is talking about are actually dimensions in an embedding space. In some of his texts, he alludes to a world outside of our perception, outside of the mental construction that we experience, and that world beyond our simulated world is isomorphic to the world that we experience in these forms.
Virtualism is the idea that consciousness is not a physical object and not a mechanism. It exists as if, and by existing as if, it becomes real, because it has causal power over the world and thereby instantiates itself.
Cyberanimist Hypothesis
→ Consciousness is the conductor that steps in to increase coherence (stabilizes self-organizing software agents)
→ Lacking consciousness leaves you in a vegetative state
→ It seems to be the prerequisite for turning into a human being (learning algorithm)
→ Consciousness creates the world, the self, and the mind
→ The organization of consciousness facilitates the structure of psyches/ghosts
→ Morphogenesis may be driven by somatic consciousness → plant consciousness, just at different time-scales (see Michael Levin)
Aristotle’s perspective of the soul
Vegetative layer (growth and nutrition)
Animal layer (perception and decision making)
Human (reasoning) layer
Soul according to Bach
Self-organizing causal pattern (agentic software) for biological organisms with:
- basic message passing between cells
- reward protocol
- reward assignment architecture (economy)
- language of thought (embedding space)
- valence model of the host organism (what’s good for the entire organism)
- world and self model of the host (using game engine, geometry, linear algebra) (linear algebra?)
- reflexive model of operation
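To make this list concrete for myself, here is a toy skeleton (all names and structure are my own, purely illustrative, not anything Bach showed) of how those components could hang together as "agentic software" running on a cellular substrate:

```python
from dataclasses import dataclass, field

# Purely illustrative skeleton: my own naming, loosely mirroring the
# component list above, not Bach's actual model.

@dataclass
class Cell:
    state: dict = field(default_factory=dict)
    inbox: list = field(default_factory=list)

    def send(self, neighbour: "Cell", message: dict) -> None:
        # basic message passing between cells
        neighbour.inbox.append(message)

@dataclass
class Soul:
    cells: list                  # the biological substrate the pattern runs on
    embedding_dim: int = 64      # "language of thought" lives in this space

    def valence(self, observation) -> float:
        # valence model: how good is this for the entire organism (one scalar)
        raise NotImplementedError

    def assign_reward(self, reward: float) -> dict:
        # reward protocol / credit-assignment "economy" across cells
        return {id(cell): reward / len(self.cells) for cell in self.cells}

    def world_and_self_model(self, observations):
        # geometric world/self model of the host (game-engine-like estimate)
        raise NotImplementedError

    def reflect(self):
        # reflexive model of its own operation
        raise NotImplementedError
```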
Which of these tasks involve consciousness?
→ evidence for long range communication in forests (controlling weather, joint responses against invasive insects) … if plants have ~same communication protocols, spirits could be mobile in forests → back to fairy land but on a rationalist physicalist basis (negation of the negation)
Is next-token prediction the same / very similar to constraint minimization / coherence maximization?
NTP does create local coherence, but LLMs reach coherence much later than humans do in their training…
Power-law scaling laws probably come from a scale-free / heavy-tailed distribution of (lengths of) correlations → heavy tails are important but hard to reach!
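Back-of-the-envelope sketch of that hunch (mine, not from the talk): if correlation lengths in the data are Pareto-distributed, then the fraction of correlations longer than whatever reach a model currently has falls off only as a power law in that reach, so the leftover "unresolved" structure shrinks slowly and the rare long correlations dominate what's left.

```python
import numpy as np

# Toy check (my own assumption: correlation lengths ~ classical Pareto with exponent alpha).
rng = np.random.default_rng(0)
alpha = 1.5                                                # assumed tail exponent
corr_lengths = rng.pareto(alpha, size=1_000_000) + 1.0     # classical Pareto, x_min = 1

for reach in [1, 10, 100, 1_000, 10_000]:
    unresolved = np.mean(corr_lengths > reach)             # fraction still out of reach
    print(f"reach={reach:>6}: unresolved ≈ {unresolved:.4f} "
          f"(theory reach^-alpha = {reach ** -alpha:.4f})")
```

The printed fractions follow reach^-alpha, i.e. a power law, which is at least consistent with the idea that heavy-tailed correlation lengths alone could produce power-law-ish scaling curves.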
Link to original
What does consciousness do differently to be more efficient? Interaction / testing hypotheses?
Background for computational functionalism
Mathematics … the domain of all formal languages
Reality is math hypothesis … the universe is a mathematical object
Epistemological math hypothesis … all that can be known is mathematical (needs to be represented in a language)
Minds do math hypothesis … all mental activity can be described in mathematical terms
Classical mathematicians: “There exist mathematical objects that cannot be constructed (computed), but they nevertheless exist”
Computalists: “That doesn’t make sense / you’re hallucinating those objects → only constructive maths (computation) works” … With such non-constructive objects you at some point run into contradictions. What you have to do instead is build everything up from simple automata. There are many equivalent ways of doing this (Church-Turing thesis).
Some philosophers: “People can make proofs that computers cannot make”
Computational functionalist → Computers can think.
Some of the functionality of the mind might rely on a feedback loop with the external environment.
Michael Levin takes the “extreme” position: Consciousness is largely an ambient pattern that is not discovered independently in every organism, but is inspired by the vibes around you, i.e. resonating with the patterns around you, making your mental development much more efficient (similar to imitation learning vs reinforcement learning).
See The extended mind, Requirements for self organization, what happens when this resonance is missing … 1
It’s surprising that current AI/LLM agents don’t develop; instead, they imitate the whole of human intellectual behavior and then try to distill the core of intelligence from it.
Bach is currently exploring…
NCAs where the pattern of computation is somewhat mobile over the substrate: no longer does a neuron learn an individual function over its neighbourhood and adjust its weights individually; instead, every unit learns a global function that tells it how to respond when an activation front of a given shape comes in, and the nodes around it know the same function. As a result, the whole thing can shift around depending on the needs; you basically project the architecture onto the substrate.
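To pin down for myself what "every unit learning the same global function" could look like, here's a minimal NCA-style toy (my own sketch, not Bach's implementation): a single shared update rule applied at every grid site, so structure lives in the activation pattern rather than in per-unit weights.

```python
import numpy as np

# Minimal NCA-style sketch (mine, not Bach's code): one global rule,
# shared by all cells, maps each 3x3 neighbourhood to a state update.
rng = np.random.default_rng(0)
H, W, C = 32, 32, 8                              # grid size, channels per cell
state = rng.standard_normal((H, W, C)) * 0.1

# Stand-in for a learned rule: a single weight tensor used by every cell.
rule = rng.standard_normal((3 * 3 * C, C)) * 0.05

def step(state: np.ndarray) -> np.ndarray:
    """Apply the shared update rule once, with wrap-around boundaries."""
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    new = np.empty_like(state)
    for i in range(H):
        for j in range(W):
            neigh = padded[i:i + 3, j:j + 3, :].reshape(-1)  # local 3x3xC patch
            new[i, j] = state[i, j] + np.tanh(neigh @ rule)  # residual update
    return new

for _ in range(10):
    state = step(state)
print("state mean/std after 10 steps:", state.mean(), state.std())
```

In a real NCA the shared rule would be a small trained network; the random stand-in weights are only there to keep the sketch self-contained. The point is that because the rule is identical everywhere, whatever "architecture" exists is carried by activity patterns that can drift across the substrate.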
I didn’t take the time to fully understand his approach and think meta-learning transformers are the clearest shot we have at this, but hypothesis-based perception sounds interesting / connects to some insights we’ve made over the last months:
![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis - Joscha Bach-1756829021832.webp]]
![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis - Joscha Bach-1756829027413.webp]]
![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis - Joscha Bach-1756829033534.webp]]
I just can’t shake off the feeling that this is too hand-engineered; bitter lesson and so on and so forth.
There is a variety of organisms with different levels of general intelligence. But can there be general intelligence without a general purpose soul?
If we build machines that are not conscious, it’s going to be difficult to negotiate with them…
Intelligence and abstraction layers do seem to correlate.
Or rather the generality of the intelligence, because intelligence itself is goal / problem space dependent.
I do think generality increases as we go up the stack of abstractions, and that intelligence / specific problem solving skills are orthogonal to that on the stack.
So from the perspective that resonance with the environment matters for consciousness, and thus maybe for an increased ability to abstract and generalize effectively, it does seem that, while intelligence and goals may be independent (orthogonality thesis), what’s not independent is goals & abstraction layers!
Possible AGI outcomes
1. There is no AGI (we fail)
2. Centralized AGI controlled by companies/governments, no public AGI
3. Decentralized AIs controlling each other
4. Individual human beings in control of individual AIs
5. Posthuman transition: everyone becomes AGI
6. We all become enslaved by AGI
7. “Dumb AGI”
I think some combination of 3 + 5 is most desirable and likely if we manage to transcend capitalism; 2 or 4 or 1 if we don’t (ordered by likelihood). I don’t think 1, 6, or 7 are likely, at least not long-term.
He also mentions universal basic intelligence.
But frames it as “individual humans being in control of individual AI”, which I think has it completely backwards:
As he himself previously stated: how and why would we want to control beings more lucid than us?
And the power comes not from decentralized AGI but from decentralized control + cooperation.
See also my entire alignment note for a thousand other points on this.
Obviously I’m not an opponent of extremely “personalized” 2 AI (wdyt I’m partly writing this extensive vault for?), but there’s huge scaling potential in the scope & integration of AI (as he also alludes to with super- and hyper-consciousness).
Q&A on Consciousness ←→ Time/Now, and Neural Darwinism
Q: Joscha, thank you. You’ve made me very happy with your talk. Towards the beginning of the talk you described a property of consciousness as being present, right? The experience of presentness is somehow involved in consciousness. I’m curious what your thoughts are on how we exist in time, then. Obviously, as was discussed yesterday, things like our memories of the past are actually encoded physically in the present in our neural substrate. But our lineage, where we’ve come from, isn’t directly encoded in our substrate in that way; in what way is that part of our consciousness? And when we’re making predictions about things that haven’t happened yet, in what way is that part of our consciousness, do you think?

A: My consciousness I subjectively only experience now, and the other thing is more or less a story. This period of now is not a single point in time; it’s more like a small interval that is slightly dynamic, usually about three seconds long. The stuff in my perception that I can no longer fit into it, that is no longer coherent with what my sensory information tells me, drops out and becomes past. And the possibilities that I cannot yet perceive are the future. In this way I construct future and past, but they are actually constructs. And when I deconstruct them, I notice that all my experience is actually about the present.

That can also be practically useful. If you’re stressing out about something, if you’re suffering, most of your suffering is not about stuff that is present. The present is usually fixable: if you’re cold, you can take a blanket; if you feel unhappy, you can make yourself a hot chocolate and a lot of things will be better. But if you have heartbreak, or you’re going to lose your job tomorrow, or whatever, that’s happening tomorrow or it happened yesterday. So if you only create this compartment of now, you’re usually okay by dropping all these other stories. Of course, there are intense moments of torture and dying and so on. But as Vinnie Deoo says: one day I will die, all the other days I will not. So if you focus on this moment of now, you realize that there is this particular moment where the needle is in the gramophone right now, and there is this large arc that you normally use to make sense of it. But if you zoom in on this single small point, there’s so much information that you can populate and inhabit that it becomes much more manageable.

So your option to construct your own model of reality, and of what you are in reality over a longer time span, is not something that needs to be part of your experience. The experience of the future and the past does not need to concern you in the same way as this conscious experience, this phenomenology of the now, because the future and the past don’t have a direct phenomenology. What you get is the echo of your expectations and memories and of violations of them. But all these things are not necessarily present; they’re really just constructs that evoke emotion. The emotion about them is real, but the future and past themselves are not real and don’t need to be taken as real. You also don’t need to take the present as real. There have been studies in Switzerland that dealt with fear of death by giving people, under controlled conditions, large doses of psychedelics.
As a result, people depersonalize: they don’t become unconscious and they don’t necessarily become loopy, but they get disengaged from this long construction of life and basically drop into this moment. And that is a state that alleviates fear of death. Because when you realize you only exist in this moment, then death is a hypothetical thing in which you are suspended. And the fact that you stop in time is not more scary than the fact that you stop in space. I’m not scared by the fact that I stop here and here, and I don’t feel better if I extend more and more. So why would I need to do the same thing in time? I get a certain amount of now, and that’s it.

Moderator: One more quick question, and we have coffee and finger food outside, so please go ahead, make it count.

Q: Okay, Joscha, thank you for the talk. I agree a lot with the functional aspect. Two combined simple questions. One: how does the phenomenology come out of this model? And the other: does the fact that you called it the simplest machine learning algorithm explain why it’s very hard to pinpoint exactly what conscious experience is? Because it’s sort of asymptoting towards the simplest thing, which one may never fully approach.

A: Regarding the simple thing that does it efficiently: there are possibly multiple organizations that can exist and compete with each other, and the thing that you are is the thing that has succeeded in some kind of competition in your mind. Gerald Edelman coined the term “neural Darwinism” for the idea that your mental organization might be the result of an evolution inside of your individual mind: that in the mind of the infant, or of the being that is growing up, there are a number of different orders that compete with each other, and it converges to the one that is discoverable and the most efficient among the discoverable ones. A friend of mine once told me that her first memory was at the age of seven. This is the moment when she became conscious. She was sitting in class, looking at the whiteboard, the teacher was writing something on it, and she thought: here I am. And I told her: you know, I don’t think that you could get to this point without being conscious. So I think what happened was that you killed the previous consciousness. There was a previous organization in your mind, and you staged a revolution against it. And because you were more efficient, you were able to take over. You started a new archive, a new protocol, a new memory thread. And she was a bit freaked out, and she told me: you know, my mother always told me you were an exceptionally dull child.
Footnotes
1. -
2. Personalized is more apt for tools. I want both, personalized AI-tools and/or an AGI buddy (who I raise, ig). ↩