CAs are cells arranged in space (classically discretized into a 1D or 2D grid) with (binary) states, over which you apply low-complexity update rules in a convolution-like process (applied uniformly across space).
As you start adding a little bit of complexity to the definition of the CA (larger state spaces instead of binary, 2D or 3D instead of 1D, continuous state, continuous time, stochastic update rules, larger neighbourhoods, etc.), the behavior becomes incredibly complex (and often looks eerily life-like to us), even though the rules remain simple.
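The convolution-like update can be sketched for the simplest case, a 1D binary CA (a minimal sketch; Rule 110 is an arbitrary but famous choice, since it alone is enough for Turing-complete behavior):

```python
import numpy as np

def ca_step(state, rule=110):
    """One update of a 1D binary CA: each cell's next state depends only
    on its 3-cell neighbourhood, applied uniformly across space."""
    # Encode each neighbourhood (left, centre, right) as an index 0..7.
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    # The rule number's binary expansion *is* the lookup table.
    table = (rule >> np.arange(8)) & 1
    return table[idx]

# A single live cell near the right edge of a periodic grid.
state = np.zeros(64, dtype=int)
state[-2] = 1
for _ in range(5):
    state = ca_step(state)
```

The same structure (neighbourhood gather + shared local rule) is what an NCA replaces with a learned neural update.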
Fundamentally, you can think of CAs as dynamical systems. You can model fluid dynamics, chemical reactions, biological growth, etc. with them. But as you climb the hierarchy of abstraction layers, it makes sense to think about discrete agents interacting according to some rules (at which point the continuum gets lost, in the way it makes sense to model it anyway; a qualitative leap).
To understand the universe at a particular level, you don't necessarily need to go all the way down.
You might just go one level lower and start modelling things there, at the cost of some approximation error, but if you do things well enough / ask the right questions, you'll see the emergence of the higher-level phenomena you're interested in and can build on top of them, layer after layer, as this emergence appears to be constant across scales.
Different layers are recognizable by the kind of goals they're trying to achieve… for which (N)CAs appear to be a reasonable model at many different layers of this architecture.
If the universe is discrete, everything can be modeled as a cellular automaton.
Wait, does this mean that discreteness is a requirement for computation?
Just like infinity is not possible (because it cannot be implemented, due to the incompleteness theorems), perfect smoothness is not possible?
Infinity vs unboundedness
You can have unboundedness in the sense of a computation that doesn't stop yielding results. But you cannot take the last result of such a computation; you cannot have a computation that relies on knowing the last digit of pi before it goes to the next step → you cannot have infinity in that sense, where infinities are about the conclusions of functions. Unboundedness means that you always get something new, unexpected, that you couldn't have predicted beforehand (relation to open-endedness & computational irreducibility?).
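The distinction can be made concrete with an unbounded computation (a sketch using the Leibniz series for pi, an arbitrary choice): you can always take the *next* partial result, but there is no *last* result to wait for.

```python
from fractions import Fraction

def pi_approximations():
    """Unbounded computation: yields ever-better rational approximations
    of pi via the Leibniz series 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = Fraction(0)
    k = 0
    while True:  # never halts; each yield is one more partial conclusion
        total += Fraction((-1) ** k, 2 * k + 1)
        yield 4 * total
        k += 1

gen = pi_approximations()
# Any finite prefix is available; the "final" value never is.
approx = [next(gen) for _ in range(4)]
```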
If there was any way to get the result of an infinite computation, it could not be expressed in any mathematical language that doesn’t have contradictions (incompleteness theorems).
→ We can only build languages in which we have to assume that infinities cannot be built. Infinity is meaningless because we cannot make it.
As soon as you formalize something, you block stepping stones and exclude the behavior that might lead you to where you actually want to go.
A neural network by itself is not Turing complete.
What you need is iterative computation on a working space.
By recurrently feeding output back into the network’s input, every cell in a NCA/GNN can see farther and farther away from itself, because its receptive field aggregates information from the neighbourhood, but so does the receptive field of every other cell. At some point, this information bounces around like waves in a pond (reflexively).
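A toy sketch of that growing receptive field (assuming a linear averaging rule in place of a learned network, purely for illustration): one unit of information at the centre of a 1D grid reaches t cells further per iteration.

```python
import numpy as np

def step(state, kernel):
    """One NCA-style update: each cell aggregates its 3-cell neighbourhood.
    (A toy linear rule; a real NCA would pass this through a small net.)"""
    padded = np.pad(state, 1)  # zero-pad the boundary
    return np.array([padded[i:i + 3] @ kernel for i in range(len(state))])

# A single unit of 'information' at the centre of a 1D grid.
state = np.zeros(21)
state[10] = 1.0
kernel = np.ones(3) / 3.0  # uniform averaging over the neighbourhood

for t in range(1, 4):
    state = step(state, kernel)
    reach = np.nonzero(state)[0]
    # After t steps, the cell's influence spans 2*t + 1 cells:
    # the receptive field grows with every recurrent application.
```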
Neural networks by themselves don't perform iterative computations, but they decompose Euclidean space into polytopes, with bounded computation in each.
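That decomposition can be made concrete: for a one-hidden-layer ReLU net, each hidden unit's hyperplane cuts the input space, and the activation pattern (which units are on) is constant within each resulting polytope, where the network is just an affine map. A minimal sketch with random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# One hidden ReLU layer: relu(W x + b). Each row of W (with its bias)
# defines a hyperplane w.x + b = 0 cutting the 2D input plane.
W = rng.normal(size=(8, 2))
b = rng.normal(size=8)

def activation_pattern(x):
    """Which side of each hyperplane x falls on — identifies its polytope."""
    return tuple((W @ x + b > 0).astype(int))

# Sample the square [-3, 3]^2 and count distinct linear regions hit.
xs = rng.uniform(-3, 3, size=(5000, 2))
regions = {activation_pattern(x) for x in xs}
# Within each of the len(regions) polytopes, the network is affine:
# bounded computation per region, no iteration.
```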
[Working with real scientists, I came to realize that] at some point solving the problem becomes more important than the way you're solving it.
Try not to use the big guns immediately. Stress the simplest model until you hit a wall.