How can we learn to predict in advance what else, besides the obvious, a computational (or biochemical) signal will do, in terms of emergent competencies and side-quest goals? It may turn out to be like the halting problem, in that we cannot discover these properties until we try – run the system, study it, and make empirical statements about what we observe, with no certainty about what it can and cannot ever do.
We want things. Do “machines” want things? We often think not, despite the troublesome cases of paramecia and other “simple” animals; they are a kind of biochemical machine, like our bodies, but do they want things? If so, then why can’t silicon machines? If not, then what is the difference from us, who also develop slowly from a single cell? The biggest issue is surprise – we feel machines don’t really want things because we can see the algorithm that drives their wanting, while ours and the paramecium’s are obscure to us. Real wanting is surprising wanting – the wants that we, as observers, cannot readily ascribe to the mechanism we cobbled together. So perhaps this minimal system is showing us what real wanting is: the sorting is not its desire – that is what we force it to do. But the clustering – which it tries to achieve despite the fact that we neither programmed it nor anticipated it – maybe that is what we mean by wanting in active systems (living or not).
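A minimal sketch of how one might probe for such unrequested side quantities empirically, in the spirit of the point above: we impose a sorting task on autonomous elements and then measure a second statistic – clustering of same-type elements – that no element’s rule ever optimizes. The setup here (two hypothetical “algotypes” that differ only in how far an element looks when it acts, the `REACH` values, and the adjacency-based clustering metric) is an illustrative assumption, not the actual experimental system; whether clustering rises is exactly the kind of question one can only answer by running it and watching.

```python
import random

# Assumed toy setup: each element carries a value to be sorted and an
# "algotype" deciding how far it looks when it acts. The only task we
# impose is sorting; clustering is never part of any rule.

N = 60
random.seed(0)

# algotype 0: compares with its immediate neighbor (bubble-like)
# algotype 1: compares with a neighbor two cells away (gapped, comb-like)
REACH = {0: 1, 1: 2}

values = list(range(N))
random.shuffle(values)
algotypes = [random.randint(0, 1) for _ in range(N)]

def sortedness(vals):
    """Fraction of adjacent pairs already in order -- the task we imposed."""
    return sum(vals[i] <= vals[i + 1] for i in range(len(vals) - 1)) / (len(vals) - 1)

def same_type_adjacency(types):
    """Fraction of adjacent pairs sharing an algotype -- the side quantity
    we watch for, which no element's rule is trying to increase."""
    return sum(types[i] == types[i + 1] for i in range(len(types) - 1)) / (len(types) - 1)

print(f"start: sortedness={sortedness(values):.2f}, "
      f"clustering={same_type_adjacency(algotypes):.2f}")

for step in range(20000):
    i = random.randrange(N)              # each element acts autonomously
    j = i + REACH[algotypes[i]]          # and looks as far as its algotype allows
    if j < N and values[i] > values[j]:  # local rule: fix one inversion
        values[i], values[j] = values[j], values[i]
        # the element carries its algotype with it as it moves
        algotypes[i], algotypes[j] = algotypes[j], algotypes[i]

print(f"end:   sortedness={sortedness(values):.2f}, "
      f"clustering={same_type_adjacency(algotypes):.2f}")
```

Any swap of an out-of-order pair strictly reduces the number of inversions, so the imposed goal (sortedness) reliably climbs; what the clustering metric does along the way is left open by the rules, which is the point – it is the kind of emergent tendency we can only characterize by observation.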