year: 2020
paper: https://cosmosandtaxis.org/wp-content/uploads/2020/10/ott_ct_vol8_iss10_11.pdf
website:
code:
connections: biologically inspired, dialectical materialism, Giving Up Control - Neurons as Reinforcement Learning Agents, complex systems, intelligence, self-organization, Jordan Ott


As intelligence is the result of a complex system, it is unlikely for the field to make real advancements while those writing the programs grasp ever tighter for control.

These presumptuous claims stem directly from a pretense of knowledge. The issue is not the audacity of the best and brightest researchers. It is the notion that discretizing the high-level, visible characteristics of complex systems, and implementing them via centrally planned rules, heuristics, and cost functions, will be synonymous with the system as a whole. This is the τάξις (taxis: made, designed order) view of intelligence and its artificial creation.

It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences—an attempt which in our field may lead to outright error. It is an approach which has come to be described as the “scientistic” attitude—an attitude which, as I defined it some thirty years ago, “is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed” (Hayek 1974).

Much like Hayek’s critique of economics, the same is true of Artificial Intelligence. The field relies on methods derived from statistics and numerical optimization, which is decidedly unscientific in its application to intelligence research. “Scientism,” as Hayek calls it, is the desire to abstract a system and precisely quantify aspects of it. Following this approach is understandable from the AI researcher’s perspective: we are surrounded by fields—biology, chemistry, and physics—that make system-level abstractions and give precise predictions about outcomes. Consequently, cost functions are a natural solution, as they provide an exact quantification of the degree to which the system has learned. However, through this abstraction and quantification, we are likely to lose information so critical that the result is no longer relevant to the original system. The technical underpinnings of this process are captured in the discretize-and-conquer approach, which we detail in the following subsection.
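To make concrete what “exact quantification” means here, consider a minimal sketch (my own illustration, not taken from the paper): a global cost function such as mean squared error reduces everything the system has learned to a single, centrally computed number.

```python
# A global cost function as the field's "exact quantification" of learning:
# one number, computed centrally by the programmer, stands in for the
# behavior of the whole system.
def mse(predictions, targets):
    """Mean squared error -- the canonical centrally planned cost."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse([0.9, 0.1], [1.0, 0.0]))  # approximately 0.01
```

Whatever the internal dynamics that produced those predictions, the scalar is all the training procedure ever sees.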

Practically, it is not currently possible to record all biological details—the activity of every neuron, its synaptic weights, electrical and chemical gradients, and so on. Computationally, modeling every detail could be done given sufficient computing resources, but such intricacy could not run in real time. For all intents and purposes, the manifold is not known.
As a result, we must rely on incomplete observations from the manifold. These observations are high-level attributes or behaviors that are emergent products of the underlying system. Figure 1b depicts this by showing single points that represent observations realized from the full manifold. For example, intelligent systems can perceive through vision, communicate through language, reason through abstractions, and act through planning. These are all visible observations from the manifold. What are not visible are the processes, interactions, and dynamics that produce these high-level attributes. Thus the characteristics we ascribe to intelligent beings are only byproducts of the system from which intelligence can emerge; they are not defining features of intelligence but merely results of it.

Interplay between the whole and its parts

Much like at the macro level, shortcomings are evident at the micro level as well. Neuroscience generates enormous amounts of detailed observational data, in which regions are discretized and studied in isolation.
Unfortunately, the whole cannot be understood by observing the individual. This principle is true of economies, of ant colonies, and of brains. We will not be able to understand intelligence by observing single actors. Neurons are individual agents in a local, decentralized system. They compete with their neighbors for resources while cooperating to achieve results beneficial to the whole. This concept is perfectly summarized in the words of Friedrich Engels: “For what each individual wills is obstructed by everyone else, and what emerges is something that no one willed.” Engels said this in reference to an economy; the application to neuroscience and the emergence of intelligence is equally apt.
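The neurons-as-local-agents view can be caricatured in a few lines of code. The following is a hypothetical sketch (my own, loosely in the spirit of Ott's neurons-as-reinforcement-learning-agents framing, not his actual model): each agent perturbs a single weight and keeps the change only when its purely local reward improves, yet the population settles into coherent global behavior that no central cost function ever evaluated.

```python
import random

random.seed(0)

class NeuronAgent:
    """A neuron sketched as a minimal reward-driven agent (hypothetical
    illustration). It holds one scalar weight, explores by random
    perturbation, and keeps a change only if its LOCAL reward improves --
    a bare-bones trial-and-error policy, with no view of the whole."""

    def __init__(self):
        self.weight = random.uniform(-1.0, 1.0)

    def step(self, local_reward):
        old_w, old_r = self.weight, local_reward(self.weight)
        self.weight += random.gauss(0.0, 0.1)   # explore locally
        if local_reward(self.weight) <= old_r:  # exploit: revert if no gain
            self.weight = old_w

# Ten agents, each rewarded only on a local signal: closeness to its own
# private target. No agent -- and no programmer-written global cost
# function -- ever evaluates the network as a whole, yet the population
# converges to a coherent configuration.
targets = [i / 10.0 for i in range(10)]
agents = [NeuronAgent() for _ in range(10)]

for _ in range(2000):
    for agent, t in zip(agents, targets):
        agent.step(lambda w, t=t: -(w - t) ** 2)

print([round(a.weight, 2) for a in agents])  # close to `targets`
```

The point of the sketch is the information structure, not the particular learning rule: every decision is made with local information only, and the global outcome is, in Engels's phrase, something that no one willed.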