Source: Machine Consciousness
Source: From Large Language Models to General Artificial Intelligence? - Joscha Bach (Keynote)

Definitions by Joscha Bach


sentience: Corporations are sentient, because they have models of what they are, how they relate to the environment, and how they should interact with it (largely implemented by people and legal contracts).
consciousness: Corporations are not conscious - “There is nothing it is like to be Microsoft”. (Does this mean consciousness requires a full integration of “sensory inputs”? And what does this feeling / qualia have to do with consciousness now??)
self: “The content that a system can have” (??). Agency I get, in the sense of a cognitive light cone…
mind: he calls this the “protocol layer” / functionality that allows modelling the universe.

Todo

I don’t get the distinction between consciousness and sentience. Also the definition of rationality doesn’t make sense to me - at least not Bach’s…
It seems inconsistent with his own definition of consciousness. I would say sentience means the ability to feel, to experience subjective phenomena, and consciousness is being self-aware / self-reflecting. Like in his bullet list above - isn’t it exactly flipped?

A computer is a universal function executor. It can do anything - the hard task is just finding the right function.

Perceptual system:

Possibilities: What fits together?

Probabilities: How should we converge?
Biases that guide the search / … in possible state space. “Interpretation bias”.
Optical illusions are a consequence of this, but it allows you to converge faster.
e.g. you assume that a face is convex, so concave (hollow) faces are really trippy: they still look convex…
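The interpretation bias above can be sketched as a toy MAP estimate (my own illustration, not code from the talk): a strong learned prior for “faces are convex” overrides weak sensory evidence for “concave”, reproducing the hollow-face illusion.

```python
# Toy sketch (assumed formalization, not from the talk): interpretation
# bias as a MAP choice over hypotheses about the percept.

def map_interpretation(likelihoods: dict, prior: dict) -> str:
    """Pick the hypothesis maximizing P(percept | hypothesis) * P(hypothesis)."""
    return max(likelihoods, key=lambda h: likelihoods[h] * prior[h])

# Sensory evidence slightly favors the true (concave) shape...
likelihoods = {"convex": 0.4, "concave": 0.6}
# ...but the learned prior over faces overwhelmingly favors convex.
prior = {"convex": 0.95, "concave": 0.05}

print(map_interpretation(likelihoods, prior))  # convex: the illusion wins
```

The prior makes the search converge fast on typical inputs; the hollow face is exactly the atypical input where that speed is bought at the cost of a wrong percept.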

“Valence”?: Intrinsic regulation targets.
Assign resources to those parts of the perceptual system that are most valuable to resolve - motivational preferences that assign relevance.

Normativity: Imposed regulation targets.

The attentional system singles out percepts by attending to them selectively.
It is controlled by an aspect of the self-model.
“Protocol memory” = Indexed memory.

The main purpose of the attentional system is learning: it is much more efficient, since you only need to change small parts of your model at a time, as opposed to backpropagation, which works only on statistical properties, correcting all weights by a little bit over large amounts of data.
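The contrast can be made concrete with a toy sketch (my own illustration, not code from the talk): a dense “backprop-style” update nudges every weight slightly, while an attention-gated update changes only the few weights singled out as relevant for the current percept.

```python
import numpy as np

# Toy contrast (assumed illustration): dense vs. attention-gated updates.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)      # model parameters
grad = rng.normal(size=1000)   # stand-in for an error signal

# Dense update: every parameter is nudged a little bit.
w_dense = w - 0.001 * grad

# Attention-gated update: only the top-k "attended" parameters change,
# with a larger step, leaving the rest of the model untouched.
k = 10
attended = np.argsort(np.abs(grad))[-k:]
w_sparse = w.copy()
w_sparse[attended] -= 0.1 * grad[attended]

print((w_dense != w).sum())   # 1000 parameters changed
print((w_sparse != w).sum())  # only 10
```

The sparse variant is the point of the note: credit assignment lands on a small, attended subset instead of being smeared statistically across the whole model.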

We also use this attention to learn in real time on hypotheticals (imagined states).

Priorities for building conscious agents.

Move to continuous models (change vs. state).
Coupling with the world (discovery).
Active construction of working memory.
Real-time + online learning.

Continuous models: Everything about our perception is about how one state changes into the next. Something staying the same is just a special case of a change into an identity. How do different percepts (over time or across sensors) relate to each other? Percepts we cannot explain are noise. → Minimize the amount of noise and maximize the number of bits that we can explain over time.
Challenge: How to parallelize perception while maintaining a coherent mind - that doesn’t take 16 years of training to reach human performance.
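The “minimize noise” framing above can be sketched numerically (my own illustration, not from the talk): fit a model that predicts the next state from the current one; whatever residual it cannot explain counts as noise, and learning tries to shrink that residual.

```python
import numpy as np

# Toy sketch (assumed illustration): perception as prediction of change.
rng = np.random.default_rng(1)
t = np.arange(500)
signal = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

# Predict each state from the previous one via a fitted linear step model.
x, y = signal[:-1], signal[1:]
a, b = np.polyfit(x, y, 1)           # next ≈ a * current + b
residual = y - (a * x + b)           # the part we cannot explain: "noise"

explained = 1 - residual.var() / y.var()
print(f"fraction of variance explained: {explained:.2f}")
```

Everything the step model captures is “explained bits”; the residual variance is the noise term the system should drive down over time.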

Coupling with the world puts huge constraints onto the model.

We actively construct our working memory, such that we have the most useful concepts for interpreting the knowledge in a book, as opposed to shifting the context window, for example.
Furthermore, we try to minimize the amount of things we have to memorize.
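One way to read this (my own sketch, not from the talk): rather than sliding a fixed window over the text, pull into a small working memory only the stored concepts most relevant to the current passage. All names and data below are hypothetical.

```python
# Toy sketch (assumed illustration): working memory as relevance-based
# retrieval into a small fixed capacity, instead of a sliding window.

def construct_working_memory(concepts: dict, passage: set, capacity: int = 2):
    """Keep the `capacity` concepts whose features overlap the passage most."""
    scored = sorted(concepts.items(),
                    key=lambda kv: len(kv[1] & passage), reverse=True)
    return [name for name, _ in scored[:capacity]]

concepts = {
    "thermodynamics": {"entropy", "heat", "energy"},
    "evolution": {"selection", "fitness", "mutation"},
    "information": {"entropy", "bits", "channel"},
    "cooking": {"heat", "flavor"},
}
passage = {"entropy", "bits", "energy"}
print(construct_working_memory(concepts, passage))  # ['thermodynamics', 'information']
```

Only two concepts are held at a time, which also matches the second point: the less we load into memory, the less we have to memorize.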

Silicon golems colonizing the world of the living, or the world of the living - us - spreading onto other substrates?

→ Systems that are empathetic, able to resonate with your mind / vibe in high resolution - perceptual feedback loops - mental states you could not have alone.

Our civilization doesn’t plan for the future anymore, due to the incoherence of our society.

References

Joscha Bach
Consciousness as a coherence-inducing operator - Consciousness is virtual