I am not interested in emergence of mere complexity. That is easy; simple cellular automata (CAs) such as the Game of Life enable huge complexity to come from very simple rules, as do the fractals emerging from short generative formulas. But what such cellular automata and fractals do not offer is goal-seeking behavior – they just roll forward in an open-loop fashion regardless of what happens.

… (although, I must say that I am no longer convinced that we have plumbed the depths in those systems – after what we saw in this study, I am motivated to start looking for goal-directed closed-loop activity in CAs as well, who knows). What I am after is very minimal systems in which some degree of problem-solving capacity emerges, and specifically, capacity that is not specifically baked in by an explicit mechanism. – Levin

“Trying to play chess with Laplace’s Demon” … It’s possible to perfectly predict the next state of a CA just by following the local base rules, but you’ll never build a glider or a Turing machine etc. without considering mesoscale properties. All the microstates are equal to him; winning, losing, pieces are nothing to him.

Overview (from here)

Langton formally describes a finite CA as consisting of a finite set of cell states $\Sigma$ of size $K$, a finite input alphabet of neighbourhood states $\Sigma^N$, and a transition function $\Delta$. Each cell has an $N$-sized neighbourhood. The number of possible neighbourhoods is $K^N$.
The transition function for a CA must thus encode a mapping of $K^N$ different inputs to one of $K$ states.
The number of possible unique transition functions is thus $K^{K^N}$.

Traditionally $\Delta$ has been encoded as a complete mapping $\Sigma^N \to \Sigma$, which can be implemented as a lookup table. When working with nontrivial CA, where both $K$ and $N$ can be relatively large numbers, it becomes a problem to store the mapping in an efficient way, and the space of possible $\Delta$ becomes too large to be explored by exhaustive enumeration.
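A minimal sketch of the lookup-table encoding for the elementary case ($K = 2$, $N = 3$): there are $2^3 = 8$ neighbourhoods, so $\Delta$ is an 8-entry table and there are $2^8 = 256$ possible rules. The rule-number decoding below follows Wolfram's convention (bit $v$ of the rule number is the output for the neighbourhood whose binary value is $v$); the function names are my own.

```python
def make_rule_table(rule_number):
    """Decode a Wolfram rule number into a lookup table for Delta."""
    return {
        (a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    }

def step(cells, table):
    """Apply Delta synchronously on a ring (periodic boundary)."""
    n = len(cells)
    return [table[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

table = make_rule_table(110)          # one of the 256 elementary rules
row = [0, 0, 0, 1, 0, 0, 0]
print(step(row, table))               # -> [0, 0, 1, 1, 0, 0, 0]
```

For larger $K$ and $N$ the table has $K^N$ entries, which is exactly the storage problem mentioned above.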

Elementary CA transition functions have been studied extensively, and it has been reported that most rules lead to "uninteresting" behaviour, either falling into "order", which is either static or repeats periodically, or into chaos, where all useful information is quickly lost in noise.
It has been speculated that it is in the critical border region between these behaviours where interesting computations can occur.
In order to find these "interesting" $\Delta$, smart heuristic searches are often applied, e.g. evolutionary computation.

If a sequence of states repeats periodically, it is referred to as a cyclic attractor.
If the CA stabilises into a permanent state, it is called a point attractor.
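Because a finite CA has finitely many configurations, every trajectory must eventually revisit a state, so both attractor types can be detected by iterating until a repeat. A sketch (rule numbering and the ring boundary are assumptions; a point attractor is just a cycle of period 1):

```python
def rule_table(rule_number):
    """Wolfram-style decoding: output for neighbourhood value v is bit v."""
    return {v: (rule_number >> v) & 1 for v in range(8)}

def step(cells, table):
    """One synchronous update on a ring of binary cells."""
    n = len(cells)
    return tuple(
        table[cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n]]
        for i in range(n)
    )

def find_attractor(cells, table):
    """Iterate until a state repeats; return ('point', state) or ('cycle', period)."""
    seen = {}
    t = 0
    while cells not in seen:
        seen[cells] = t
        cells = step(cells, table)
        t += 1
    period = t - seen[cells]
    return ('point', cells) if period == 1 else ('cycle', period)

# Rule 0 maps everything to the all-zero point attractor;
# rule 51 (complement the centre cell) gives a period-2 cycle.
print(find_attractor((1, 0, 1, 1), rule_table(0)))   # -> ('point', (0, 0, 0, 0))
print(find_attractor((1, 0, 1, 0), rule_table(51)))  # -> ('cycle', 2)
```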

Classification of CA behavior

Class 1: quickly evolves to a homogeneous state
Class 2: exhibits simple periodic patterns
Class 3: chaotic/aperiodic patterns, sensitive to initial conditions
Class 4: complex, long-lived localized structures, “edge of chaos”, can support universal computation
Which class a given rule belongs to is in general undecidable, but in specific cases it can be determined.
Classes 1/2 quickly resolve to a homogeneous state (c=1) or a short cycle.
→ Almost all initial configurations evolve to a short cycle within a few steps (p small).

The collective behaviour of biological cells is not just an emergent property of simple mechanical rules (as with CA).

For example, if you try to deviate them from their goal, they are very good at finding new ways to get there.



self-organization
automata
neural cellular automata

Finding predecessors of a given state in a cellular automaton is NP-hard; here it is attempted with a SAT solver:
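The linked work uses a SAT solver; as a minimal stand-in to make the problem concrete, the sketch below simply brute-forces all $2^n$ ring configurations of an elementary CA and keeps those that step to the target. This is feasible only for tiny $n$; the exponential blow-up is exactly why a SAT encoding is attractive.

```python
from itertools import product

def step(cells, rule_number):
    """One synchronous update of an elementary CA on a ring."""
    n = len(cells)
    return tuple(
        (rule_number >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def predecessors(target, rule_number):
    """Brute force: every configuration that steps to `target`."""
    n = len(target)
    return [p for p in product((0, 1), repeat=n)
            if step(p, rule_number) == target]

# Under rule 0 every configuration maps to all zeros, so the all-zero
# state has all 2^3 = 8 predecessors, while any nonzero state has none
# (a "Garden of Eden" state).
print(len(predecessors((0, 0, 0), 0)))  # -> 8
print(predecessors((1, 0, 0), 0))       # -> []
```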