I am not interested in the emergence of mere complexity. That is easy; simple cellular automata (CAs) such as the Game of Life produce enormous complexity from very simple rules, as do the fractals generated by short formulas. But what such cellular automata and fractals do not offer is goal-seeking behavior – they just roll forward in an open-loop fashion regardless of what happens.
… (although, I must say that I am no longer convinced that we have plumbed the depths in those systems – after what we saw in this study, I am motivated to start looking for goal-directed closed-loop activity in CAs as well, who knows). What I am after are very minimal systems in which some degree of problem-solving capacity emerges, and specifically, capacity that is not specifically baked in by an explicit mechanism. – Levin
“Trying to play chess with Laplace’s Demon” … It’s possible to perfectly predict the next state of a CA just by following the local base rules, but you’ll never build a glider or a Turing machine etc. without considering mesoscale properties. All microstates are equal to him; winning, losing, the pieces – nothing to him.
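A minimal sketch of the point above: the code below applies only the local Game of Life birth/survival rule (a dead cell with exactly 3 live neighbors becomes live; a live cell with 2 or 3 survives), yet a "glider" – a mesoscale concept that appears nowhere in the rule – reappears shifted one cell diagonally every 4 steps. The coordinates and set representation here are my own illustration, not from any particular source.

```python
from collections import Counter

def step(live):
    """One Game of Life update on an infinite grid.

    `live` is a set of (row, col) coordinates of live cells.
    The rule is purely local: each cell's fate depends only on
    its 8 neighbors.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider:
#   . X .
#   . . X
#   X X X
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 purely local updates, the same pattern reappears
# translated by (+1, +1) – a fact invisible at the microstate level.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Laplace's Demon, stepping this rule cell by cell, predicts every microstate perfectly; "glider" only exists as a description at the mesoscale.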