SingularityNET Interview; YouTube
- Consciousness is an operator on mental states that reduces contradictions in our representation of the world (integrating sensory inputs and memories into a coherent “bubble” of now).
- From a phenomenological perspective: The perception of perceiving.
- The self is 3rd order perception: The perception of the perceiver.
→ When dreaming, we are still conscious, but not necessarily self-conscious. In deep sleep, we are neither.
→ There seems to be a smooth spectrum for (the sophistication of) self-models, but consciousness seems to be binary¹ (apart from intermediate states / phase transitions).
→ How does consciousness change your mental state? Coherence seems to increase whenever you (or any collective intelligence, for that matter) are conscious (e.g. deep sleep, spacing out, alienated individuals, free markets vs. being awake, focused, collaborative, planned economies).
If you can't discover a solution by following a gradient (intuition; gradient descent), you need to construct a solution (search; program synthesis), i.e. to think, to reflect, to test ideas, and to try different things until it works.
This seems to be the purpose of the conscious self: to facilitate this creation of coherence by following patterns to a local optimum, or, if this is not possible or the local optimum is not good enough, to construct solutions, experiment, lift threads of the carpet of cognition, and see if we can get rid of the knots in the pattern.
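The two modes above can be sketched as a toy procedure (purely illustrative; the function names and the toy landscape are invented): follow the gradient first, and fall back to constructing and testing qualitatively different candidates only when the local optimum is not good enough.

```python
def hill_climb(f, x, step=1.0, iters=200):
    """'Intuition' mode: greedily follow the local gradient of
    improvement until no neighbouring point is better."""
    for _ in range(iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            return x  # stuck at a local optimum
        x = best
    return x

def solve(f, x0, good_enough, candidates):
    """Two-mode sketch: follow the gradient; if the local optimum is
    not good enough, 'think' -- construct and test different starting
    ideas (a stand-in for search / program synthesis) and refine the
    best one found."""
    x = hill_climb(f, x0)
    if f(x) >= good_enough:
        return x
    best_seed = max(candidates, key=f)  # constructive search step
    return hill_climb(f, best_seed)

# A deceptive landscape: a small bump near 0 hides a larger peak near 10,
# so pure gradient-following from x0 = 0.5 gets stuck.
f = lambda x: 3 - abs(x) if abs(x) < 2 else max(0, 8 - abs(x - 10))
x = solve(f, x0=0.5, good_enough=5, candidates=[-10, 5, 10, 20])
```

Starting from 0.5, hill-climbing stalls on the small bump; only the constructive fallback reaches the larger peak at 10.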
Emotions are dimensions of our interests in the world.
It’s a geometric model of contraction/expansion, relating to valence: whether something increases or reduces tension.
“It’s easier to deal with something going this direction than that direction.”
The emotional configuration is the result of all those different alignment vectors that are pulling you.
But you can also transcend many emotions, turning them from unconditional reflexes to which you involuntarily react (because you are pulled into a different shape) into something that you understand more deeply: “Where is this coming from? What does it serve? What part is this in a larger architecture?” If you manage to do this, that particular emotional reflex gets deconstructed and replaced by a decision-making process.
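The “alignment vectors” picture can be made concrete with a minimal sketch (a toy model, not anything from the interview: the emotion names, weights, and the one-dimensional valence axis are all invented for illustration):

```python
# Toy model: each emotion as an 'alignment vector' along a single
# valence axis (contraction negative, expansion positive), weighted
# by how strongly that interest is currently activated.
pulls = {
    "fear":      (-0.8, 0.5),  # (valence direction, activation)
    "curiosity": (+0.6, 0.9),
    "anger":     (-0.4, 0.2),
}

def emotional_configuration(pulls):
    """The configuration is the resultant of all active pulls."""
    return sum(direction * weight for direction, weight in pulls.values())

valence = emotional_configuration(pulls)  # net expansion vs. contraction
```

Deconstructing a reflex would then correspond to removing one entry from `pulls` and replacing it with an explicit decision procedure.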
Our linguistic concepts are very sparse.
There’s no physical limitation that says we have to only be able to focus on one thing at a time or to only be able to hold a few concepts at once in our head / interconnect them.
Our bubble of now consists of representations of nodes in the perceptual scene, and all these parts need to be (made) compatible with the other parts of the scene (cf. the voting process of the Thousand Brains Theory; NOWmodel?)
E.g. if you identify some patterns as the jacket of a person, then they can’t be part of something else anymore – you’re committing to a certain interpretation of the scene, and constraining other parts of the scene to be compatible with that interpretation.
→ Limitations for the size / number of concepts that can be integrated in the bubble.
→ Limitations based on the location of our sensors and sensor bandwidths.
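This “commit and constrain” dynamic resembles constraint propagation. A hypothetical sketch (patch names, labels, and compatibility pairs are all invented for illustration): committing one patch to an interpretation prunes incompatible readings from its neighbours.

```python
# Toy scene: each patch starts with several candidate interpretations.
candidates = {
    "patch_a": {"jacket", "tent", "flag"},
    "patch_b": {"person", "pole", "cloud"},
    "patch_c": {"sky", "person", "wall"},
}

# Which readings of adjacent patches can coexist in one coherent scene.
compatible = {
    ("jacket", "person"), ("tent", "pole"), ("flag", "pole"),
    ("person", "wall"), ("pole", "sky"), ("person", "sky"),
}

adjacent = [("patch_a", "patch_b"), ("patch_b", "patch_c")]

def commit(patch, label):
    """Commit to one interpretation, then propagate the constraint:
    drop any reading of a patch that no neighbouring reading supports."""
    candidates[patch] = {label}
    changed = True
    while changed:
        changed = False
        for p, q in adjacent:
            for a, b in [(p, q), (q, p)]:
                keep = {x for x in candidates[a]
                        if any((x, y) in compatible or (y, x) in compatible
                               for y in candidates[b])}
                if keep != candidates[a]:
                    candidates[a] = keep
                    changed = True

commit("patch_a", "jacket")
```

Seeing `patch_a` as a jacket forces `patch_b` to be a person, which in turn rules out `patch_c` being a second person – one commitment ripples through the whole scene.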
Artificial substrates are much more reliable and not limited by these constraints (it also doesn’t matter as much if you’re slower in order to be more accurate, not lose coherence, etc.).
Current AI systems are less competent than us because they’re not integrating over the information they ingest/memorize in the way humans do. Well, in part because they don’t need to and are simply not built / trained for that.
We are not supposed to answer questions, we are supposed to apply methods.
Consensus-seeking and short-term, profit-oriented incentives lead to a narrowing of the types of questions being asked / moving away from big-picture thinking to asking fine-grained questions that are easy to answer with the existing paradigms / methodologies.
We can observe this in the mainstream of many different disciplines, like neuroscience (focusing only on neurons, not other types of cells), psychology (no longer looking at the psyche, but at things that can be easily measured), artificial intelligence (started out as a philosophical project, now mostly very narrow engineering), …
We are not incentivizing research that might bear fruit only 60-100 years from now…
We make accidental discoveries, like making text translation more efficient, that generalize into cognition in general. It’s being done because it’s economically useful (which narrows the scope of what is being researched).
“We have regressed to before a time when philosophers and experimentalists took the concept of consciousness seriously enough to invent new methods when needed, and instead we submit to the consensus of what exists too often.”
The Aristotelian soul
There are three souls.
The vegetative soul we find in plants (growth, nutrition, morphology).
The animal soul (decision-making, perception).
The human soul adds reason (the ability to reflect and make sense of your condition).
Our culture still sees the (non-cognitive) soul as a plant-like being organizing our body / the interactions in our body, with considerable intelligence, but also a mind of its own, somewhat decoupled from the fast-paced patterns in the nervous system.
At some point in this progression, reflection gets decoupled from perception. We are good at holding abstract thoughts for as long as we want.
This idiot-savant AI replaces only the prompt-completers among us.
universal basic intelligence + fully-interconnected collective intelligence = fully automated luxury gay space communism?
post-human scenario: agents interacting in a global infosphere, synthesizing/dissolving/possessing bodies arbitrarily for the task at hand
dystopian scenario: a crude monolithic machine that we all have to work for, which simply wins because it’s bigger / more efficient / more powerful than anything else, preventing sophistication and complexity from re-emerging.
I think this one is unlikely because collective intelligence scales much better and rests on … collaboration + the sub-parts persist in one form or another too.
In a sense we’re already working for this machine, but it too is put under constant internal pressure, precisely due to the contradictions of its parts and their relations.²
The present culture is unable to perpetuate itself.
“I’m too vegetarian to play this game.”
→ balance the ethically sustainable with the philosophically interesting
I would not recommend being a maverick to anyone, right? It’s not the optimal strategy, but some of you may not be able to help it. In that case, you have to do what your heart tells you. Go for it, and you’re guaranteed to have an interesting existence.
Footnotes
-
The productive forces it develops - the capacity for collective intelligence, for coordination, for solving problems collaboratively - strain against the narrow confines of private ownership and competitive accumulation. You can’t fully realize collective intelligence under a system that fragments and privatizes it. The system develops capacities it cannot contain: workers who can coordinate globally, tools that work better when shared, knowledge that multiplies when freely distributed. This contradiction between what the technology makes possible (post-scarcity coordination) and what the relations of production demand (artificial scarcity and extraction) is what generates the internal tensions that ultimately tear the system apart from within. ↩