![[AGI-25 Conference - Machine Consciousness Hypothesis -  Joscha Bach-1755518057757.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis -  Joscha Bach-1755518108230.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis -  Joscha Bach-1755518125334.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis -  Joscha Bach-1755518167239.webp]]
![[AGI-25 Conference - Machine Consciousness Hypothesis -  Joscha Bach-1755520403418.webp]]

Plato was not actually an idealist in the sense that he believed that conscious or mental experience is primary.

Plato takes these ideal forms as starting points. The ideal table is sometimes given as an example, but that’s probably not a core archetype of the universe or of god itself; tables are created by human beings in particular cultural contexts, and they are an archetype in our perception. In the way in which we classify the world, there is something like an ideal table. The forms that Plato is talking about are actually dimensions in an embedding space. In some of his texts, he alludes to a world outside of our perception, outside of the mental construction that we experience; that world is outside of our simulated world and is isomorphic to the world that we experience through these forms.
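A minimal numpy sketch of the "forms as dimensions in an embedding space" reading — all feature dimensions, vectors, and names here are made up for illustration; the point is only that an "ideal table" can live as a direction/centroid in a learned space, and category membership becomes proximity to it:

```python
import numpy as np

# Toy 4-d "embedding space" (hypothetical dimensions):
# [flat-surface, has-legs, is-furniture, is-alive]
exemplars = {
    "kitchen_table": np.array([0.9, 0.8, 0.9, 0.0]),
    "desk":          np.array([0.8, 0.9, 0.8, 0.0]),
    "picnic_table":  np.array([0.9, 0.7, 0.7, 0.0]),
}

# The "ideal table" as the centroid of culturally learned exemplars:
ideal_table = np.mean(list(exemplars.values()), axis=0)

def cosine(a, b):
    # similarity along the archetype direction
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A new object is classified by its proximity to the archetype:
stool = np.array([0.7, 0.9, 0.8, 0.0])
dog   = np.array([0.1, 0.9, 0.0, 1.0])
assert cosine(stool, ideal_table) > cosine(dog, ideal_table)
```

The centroid is a stand-in for whatever a perceptual system abstracts from its cultural exemplars; nothing here depends on the specific numbers.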

At the lowest level, the world is physical and mechanical and is a mathematical object: the logos.
And on top of the physical world, you have software – forms that dynamically shape how the world is working, what Aristotle calls the soul.

Virtualism is the idea that consciousness is not a physical object. It’s not a mechanism. It exists as if, and by existing as if, it becomes real, because it has causal power over the world and thereby instantiates itself.

Cyberanimist Hypothesis

→ Consciousness is the conductor that steps in to increase coherence (stabilizes self-organizing software agents)
→ Lacking consciousness leaves you in a vegetative state
→ It seems to be the pre-requisite for turning into a human being (learning algorithm)
→ Consciousness creates the world, the self and the mind
→ The organization of consciousness facilitates the structure of psyches/ghosts
→ Morphogenesis may be driven by somatic consciousness → plant consciousness, just at different time-scales (see Michael Levin)

Aristotle’s perspective of the soul

Vegetative layer (growth and nutrition)
Animal layer (perception and decision making)
Human (reasoning) layer

Soul according to Bach

Self-organizing causal pattern (agentic software) for biological organisms with:

  • basic message passing between cells
  • reward protocol
  • reward assignment architecture (economy)
  • language of thought (embedding space)
  • valence model of the host organism (what’s good for the entire organism)
  • world and self model of the host (using game engine, geometry, linear algebra) (linear algebra?)
  • reflexive model of operation

Which of these tasks involve consciousness?

→ evidence for long-range communication in forests (controlling weather, joint responses against invasive insects) … if plants have ~the same communication protocols, spirits could be mobile in forests → back to fairy land, but on a rationalist-physicalist basis (negation of the negation)

Is next-token prediction the same / very similar to constraint minimization / coherence maximization?

NTP does create local coherence, but LLMs reach coherence much later than humans do in their training…
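One way to make the question concrete: the NTP objective literally scores how well each token satisfies the local constraints imposed by its context. A toy bigram sketch (hypothetical corpus, add-alpha smoothing standing in for a real LLM) where lower loss tracks local coherence:

```python
import math
from collections import Counter

# Tiny corpus; a bigram model stands in for next-token prediction.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])
V = len(set(corpus))

def p_next(prev, tok, alpha=0.1):
    # add-alpha smoothing so unseen continuations get nonzero probability
    return (bigrams[(prev, tok)] + alpha) / (unigrams[prev] + alpha * V)

def nll(seq):
    # NTP training objective: sum of -log p(next | prev).
    # Lower NLL = more local constraints satisfied = more "coherent".
    return sum(-math.log(p_next(a, b)) for a, b in zip(seq, seq[1:]))

coherent   = "the cat sat on the mat".split()
incoherent = "mat the on sat cat the".split()
assert nll(coherent) < nll(incoherent)
```

This only captures *local* constraint satisfaction, which is the crux: global coherence would require the long-range constraints that a bigram (and, early in training, an LLM) mostly misses.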

Power-law scaling laws probably come from a scale-free / heavy-tailed distribution of (lengths of) correlations → heavy tails are important but hard to reach!
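A quick way to see why heavy tails are "important but hard to reach" — sample hypothetical correlation lengths from a Pareto (power-law) distribution and compare how *rare* long-range correlations are with how much of the total correlation mass they carry (the exponent and cutoff here are arbitrary, just chosen to be heavy-tailed):

```python
import random

random.seed(0)

# Hypothetical model: correlation lengths follow a power law P(L > x) ~ x^-a.
# With a small exponent the distribution is heavy-tailed.
a = 1.3
samples = [random.paretovariate(a) for _ in range(100_000)]

tail = [s for s in samples if s > 100]   # long-range correlations
frac_count = len(tail) / len(samples)    # how often they occur
frac_mass = sum(tail) / sum(samples)     # how much they matter

# Long correlations are rare in count but carry an outsized share of the mass,
# so a learner needs enormous samples to ever see (and fit) them:
assert frac_count < 0.01
assert frac_mass > 10 * frac_count
```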


What does consciousness do differently to be more efficient? Interaction / testing hypotheses?

Mathematics … the domain of all formal languages

Reality is math hypothesis … the universe is a mathematical object

Epistemological math hypothesis … all that can be known is mathematical (needs to be represented in a language)

Minds do math hypothesis … all mental activity can be described in mathematical terms

Classical mathematicians: “There exist mathematical objects that cannot be constructed (computed), but they nevertheless exist”

Computationalists: “That doesn’t make sense / you’re hallucinating those objects → only constructive maths (computation) works” … At some point you run into contradictions. What you have to do instead is build everything up from a simple table of automata. There are many equivalent ways of doing this (Church–Turing thesis).

Some philosophers: “People can make proofs that computers cannot make”

Computational functionalist → Computers can think.

Some of the functionality of the mind might rely on a feedback loop with the external environment.

Michael Levin takes the “extreme” position: Consciousness is largely an ambient pattern that is not discovered independently in every organism, but is inspired by the vibes around you, i.e. resonating with the patterns around you, making your mental development much more efficient (similar to imitation learning vs reinforcement learning).
See The Extended Mind, requirements for self-organization, what happens when this resonance is missing … 1

It’s surprising that current AI/LLM agents are agents that don’t develop, but instead imitate the whole of human intellectual behavior, from which we then try to distill the core of intelligence.

Bach is currently exploring…

NCAs where the pattern of computation is somewhat mobile over the substrate: no longer does a neuron learn an individual function over its neighbourhood and adjust its weights individually; instead, every unit learns a global function that says “when an activation front with the following shape comes in, this is how I respond”, and the nodes around it know the same function. As a result, the whole thing can shift around depending on the needs: you basically project the architecture onto the substrate.
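A toy sketch of the core idea (my own minimal illustration, not his implementation): a 1-D substrate where every unit applies one shared rule over its neighborhood, so the activation pattern — and with it the locus of computation — is mobile over the substrate:

```python
import numpy as np

def step(state, w):
    # Every cell applies the SAME rule w over its 3-cell neighborhood
    # (a shared conv-like kernel); the "program" lives in the rule,
    # not in any unit-specific weights.
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    return w[0] * left + w[1] * state + w[2] * right

substrate = np.zeros(16)
substrate[2:5] = [0.5, 1.0, 0.5]   # an activation "front"

w = np.array([1.0, 0.0, 0.0])      # shared rule: take the left neighbor's value

s = substrate
for _ in range(6):
    s = step(s, w)

# The same pattern now sits 6 cells to the right: the computation shifted
# over the substrate with no unit learning an individual function.
assert np.allclose(s, np.roll(substrate, 6))
```

A learned NCA would replace the fixed linear kernel with a trained update network shared by all cells, but the substrate-independence of the pattern is the same.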

I didn’t take the time to fully understand his approach and think meta-learning transformers are the clearest shot we have at this, but hypothesis-based perception sounds interesting / connects to some insights we’ve had over the last months:

![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis -  Joscha Bach-1756829021832.webp]]
![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis -  Joscha Bach-1756829027413.webp]]
![[AGI-25 Conference - Machine Consciousness - Cyberanimist Hypothesis -  Joscha Bach-1756829033534.webp]]

I just can’t shake off the feeling that this is too hand-engineered, bitter lesson and so on.

There is a variety of organisms with different levels of general intelligence. But can there be general intelligence without a general purpose soul?

If we build machines that are not conscious, it’s going to be difficult to negotiate with them…


Intelligence and abstraction layers do seem to correlate.
Or rather the generality of the intelligence, because intelligence itself is goal / problem space dependent.
I do think generality increases as we go up the stack of abstractions, and that intelligence / specific problem solving skills are orthogonal to that on the stack.
So from the perspective of resonance with the environment mattering for consciousness, and thus maybe for an increased ability to abstract and generalize effectively, it does seem that while intelligence and goals may be independent (orthogonality thesis), what’s not independent is goals & abstraction layers!

Possible AGI outcomes

  1. There is no AGI (we fail)
  2. Centralized AGI controlled by companies/governments, no public AGI
  3. Decentralized AIs controlling each other
  4. Individual human beings in control of individual AI
  5. Posthuman transition: everyone becomes AGI
  6. We all become enslaved by AGI
  7. “Dumb AGI”

I think some combination of 3 + 5 is most desirable, and likely if we manage to transcend capitalism; 2 or 4 or 1 if we don’t (ordered by likelihood). I don’t think 1 or 6 or 7 are likely, at least not long-term.

He also mentions universal basic intelligence.
But frames it as “individual humans being in control of individual AI”, which I think has it completely backwards:
As he himself previously stated, how and why would we want to control beings more lucid than us?
And the power comes not from decentralized AGI but decentralized control + cooperation.
See also my entire alignment note for a thousand other points on this.
Obviously I’m not an opponent of extremely “personalized” 2 AI (what do you think I’m partly writing this extensive vault for?), but there’s clearly huge scaling potential in the scope & integration of AI (as he also alludes to with super- and hyper-consciousness).


Joscha Bach

Footnotes

  1. Personalized is more apt for tools. I want both: personalized AI tools and/or an AGI buddy (whom I raise, I guess).