Overview

Langton formally describes a finite CA as consisting of a finite set of cell states $\Sigma$ of size $K$, a finite input alphabet $\alpha$, and a transition function $\Delta$. Each cell has an $N$-sized neighbourhood. The number of possible neighbourhoods is $K^N$.
The transition function for a CA must thus encode a mapping of $K^N$ different inputs to one of $K$ states.
The number of possible unique transition functions is thus $K^{K^N}$.
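The counting above can be sketched in a few lines (a minimal illustration, using the elementary CA values for $K$ and $N$):

```python
# Sketch: counting the rule space for a finite CA (symbols as above:
# K states, N-cell neighbourhood); elementary CA values shown.
K, N = 2, 3                   # binary states, 3-cell neighbourhood
neighbourhoods = K ** N       # K^N possible neighbourhoods
rules = K ** neighbourhoods   # K^(K^N) unique transition functions
print(neighbourhoods, rules)  # prints "8 256"
```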

Traditionally $\Delta$ has been encoded as a complete mapping $\Sigma^N \to \Sigma$, which can be implemented as a lookup table. When working with nontrivial CA, where both $K$ and $N$ can be relatively large, it becomes a problem to store the mapping efficiently, and the space of possible $\Delta$ becomes too large to be explored by exhaustive enumeration.
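A minimal sketch of the lookup-table encoding, assuming Wolfram's rule-number convention (elementary rule 110 is used purely as an example):

```python
# Sketch: Delta stored as a lookup table from neighbourhood tuples to
# next states; digit i (base K) of the rule number is the next state
# for the neighbourhood whose base-K numeric value is i.
from itertools import product

K, N = 2, 3       # elementary CA
rule_number = 110

delta = {}
for nbhd in product(range(K), repeat=N):
    value = sum(s * K ** (N - 1 - j) for j, s in enumerate(nbhd))
    delta[nbhd] = (rule_number // K ** value) % K

def step(cells):
    """One synchronous update (radius-1 neighbourhood, periodic boundary)."""
    n = len(cells)
    return [delta[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```

For rule 110 this reproduces the familiar table, e.g. `delta[(1, 1, 1)] == 0` and `delta[(0, 0, 1)] == 1`.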

Elementary CA transition functions have been studied extensively, and it has been reported that most rules lead to "uninteresting" behaviour: either falling into "order" (static or periodically repeating) or into chaos, where all useful information is quickly lost in noise.
It has been speculated that it is in the critical border region between these behaviours where interesting computations can occur.
In order to find these "interesting" $\Delta$, heuristic searches are often applied, e.g. evolutionary computation.


Wolfram code

A numbering system for one-dimensional CA rules, introduced by Stephen Wolfram.

For a CA with $k$ states and a neighborhood of $n$ cells, each cell's next state is determined by one of $k^n$ possible neighborhood configurations. A rule maps each configuration to a next state, giving $k^{k^n}$ possible rules in total.

The Wolfram code uniquely identifies each rule by encoding its complete transition function as a $k$-ary number:

$$W = \sum_{i=0}^{k^n - 1} a_i \, k^i$$

where $a_i$ is the next state for the configuration whose base-$k$ value is $i$.

Algorithm

  1. List all neighborhood configurations in descending numerical order
  2. For each configuration, determine the next state according to the rule
  3. Interpret the sequence of next states as a $k$-ary number and convert it to decimal
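The steps above can be sketched as two hypothetical helpers (the table is indexed by the base-$k$ value of each configuration, so reading it as a $k$-ary number gives the code):

```python
# Sketch of the numbering scheme: table[i] holds the next state for
# the configuration whose base-k value is i, so the Wolfram code is
# the table read as a k-ary number (least significant digit first).
def wolfram_code(table, k):
    return sum(state * k ** i for i, state in enumerate(table))

def rule_table(code, k, n):
    """Inverse: extract the k^n base-k digits of the code."""
    return [(code // k ** i) % k for i in range(k ** n)]
```

For example, elementary rule 30 round-trips: `wolfram_code(rule_table(30, 2, 3), 2) == 30`.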

See elementary CA (1D) for the most common case ($k=2$, $n=3$, giving 256 possible rules) with detailed examples and visualizations.


CAs are cells arranged in space (classically discretized into a 1D or 2D grid) with (binary) states, over which you apply low-complexity update rules in a convolution-like process (applied uniformly across space).

As you start adding a little complexity to the definition of the CA (moving from binary states to larger state spaces, moving from 1D to 2D or 3D, making state continuous, making time continuous, making the update rules stochastic, using larger neighbourhoods, etc.), the behavior becomes incredibly complex (and often looks eerily life-like to us), even though the rules remain simple.


Fundamentally, you can think of CAs as dynamical systems. You can model fluid dynamics, chemical reactions, biological growth, etc. with them. But as you climb the hierarchy of abstraction layers, it makes sense to think about discrete agents interacting according to some rules (at which point the continuum, in the way it makes sense to model it anyway, gets lost: a qualitative leap).

If a sequence of states repeats periodically, it is referred to as a cyclic attractor.
If the CA stabilises into a permanent state, it is called a point attractor.
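Since a deterministic CA with finitely many configurations must eventually revisit a state, both attractor types can be detected by iterating until the first repeat (a minimal sketch; `step` stands for any deterministic update function):

```python
# Sketch: run a deterministic update until a state repeats; a period
# of 1 is a point attractor, anything longer a cyclic attractor.
def find_attractor(state, step, max_steps=10_000):
    seen = {tuple(state): 0}          # state -> time first observed
    for t in range(1, max_steps + 1):
        state = step(state)
        key = tuple(state)
        if key in seen:
            period = t - seen[key]
            return "point" if period == 1 else f"cyclic (period {period})"
        seen[key] = t
    return None  # no repeat found within max_steps
```

For instance, the identity update yields a point attractor immediately, while flipping every cell yields a period-2 cyclic attractor.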

Classification of CA behavior

Class 1: quickly evolves to a homogeneous state
Class 2: exhibits simple periodic patterns
Class 3: chaotic/aperiodic patterns, sensitive to initial conditions
Class 4: complex, long-lived localized structures, “edge of chaos”, can support universal computation
Determining a CA's class is in general undecidable, but in specific cases it can be decided.
Classes 1/2 quickly resolve to a homogeneous state (cycle length $c = 1$) or a short cycle.
→ Almost all initial configurations evolve to a short cycle (small period $p$) within a few steps.


I am not interested in emergence of mere complexity. That is easy; simple cellular automata (CAs) such as the Game of Life enable huge complexity to come from very simple rules, as do the fractals emerging from short generative formulas. But what such cellular automata and fractals do not offer is goal-seeking behavior – they just roll forward in an open-loop fashion regardless of what happens.

… (although, I must say that I am no longer convinced that we have plumbed the depths in those systems – after what we saw in this study, I am motivated to start looking for goal-directed closed-loop activity in CA’s as well, who knows). What I am after is very minimal systems in which some degree of problem-solving capacity emerges, and specifically, capacity that is not specifically baked in by an explicit mechanism. (Levin)

“Trying to play chess with Laplace’s Demon”

It’s possible to perfectly predict the next state of a CA just by following the local base rules, but you’ll never build a glider or a Turing machine, etc., without considering mesoscale properties. All the microstates are equal to him; winning, losing, and the pieces are nothing to him.

The collective behaviour of cells is not just an emergent property of simple mechanistic rules (as it is with CA).

For example, if you try to deviate them from their goal, they are very good at finding new ways to get there.



Todo


What Can We Learn about Engineering & Innovation from Half a Century of the Game of Life

Finding predecessors of a given state in a cellular automaton is NP-hard; here it is attempted with a SAT solver:


self-organization
automata
neural cellular automata