year: 2024
paper: capital-as-artificial-intelligence
website:
code:
connections: ALIFE 2025, ETH Zürich, capital, artificial intelligence, artificial life, reinforcement learning
TLDR
Formalizes Capital using reinforcement learning language. Capital is a historical agential system: discrete units (like currency) afford observations and actions to agents (humans, companies, nations). More capital = more agency. The system pursues accumulation through optimization.
Two directions: (1) AI is an agent of Capital (recommender algorithms, LLMs, trading bots process quantified value → produce more quantified value, i.e. they’re optimal capital agents). (2) Capital itself is a form of AI, insofar as its generating processes are quantitative optimization: it processes more information than any individual, it’s goal-directed (accumulation), and it produces outputs (prices) that seem meaningful but carry no inherent intent.
Prices, like LLM outputs, reflect the data they were trained on but aren’t intentionally meaningful. And you can’t discover “true” preferences through prices because preferences are themselves shaped by prices (feedback loop). Capital’s quantification produces an illusion of meaningful information.
Terminology
Capital (uppercase C): not just money. A historical agential system embedded in the social sphere, embodied in quantified value, that pursues its own maximization. The system, not a resource.
Units of capital ($u$): the discrete, countable tokens Capital operates in (currency, shares, etc.). Every reward is denominated in these units. The medium through which Capital observes and acts. More units = more observations = more possible actions = more agency.
Capital agents ($\lambda$): anything that controls a partition of capital and pursues accumulation. A person investing savings, a hedge fund, a corporation, a nation-state. Formally: a function from histories to probability distributions over actions, $\lambda: \mathcal{H} \to \Delta(\mathcal{A})$.
Partitions ($P$): Capital can be split into sets of units controlled by different agents. A company’s capital, a person’s wealth, a nation’s GDP are all partitions. Agency of a partition = total capital units it contains: $\text{agency}(P) = |P|$. Coarser partitions (larger pools) = more agency.
Generating processes: how capital agents decide what to do (generate their probability distributions over actions). Two kinds: qualitative (ethics, subjective judgment, “own contingency”) and quantitative (optimization, computation). AI is purely quantitative. Humans use both.
Own contingency: (from Husserl’s Lebenswelt) having a particular, situated, subjective perspective. Being a this-one-here with your own history and experience, not a generic process. The paper claims this is what generates meaning, and that Capital/AI can only reflect meaning from beings that have it.
Formal model
Built on continual RL definitions (Abel et al., 2024). Agent-environment interface with actions $\mathcal{A}$ and observations $\mathcal{O}$. The set of all possible histories: $\mathcal{H} = \bigcup_{t \geq 0} (\mathcal{A} \times \mathcal{O})^t$.
A history is a sequence of action-observation tuples. The environment maps histories and actions to distributions over observations: $e: \mathcal{H} \times \mathcal{A} \to \Delta(\mathcal{O})$. Reward function: $r: \mathcal{O} \to \mathbb{Z}$, counting units of capital gained or lost.
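A minimal sketch of this interface in Python; the names (`env_step`, `agent`, `reward`) and the coin-flip dynamics are illustrative assumptions, not the paper’s:
```python
import random
from typing import List, Tuple

Action = int
Observation = int
History = List[Tuple[Action, Observation]]  # alternating action-observation pairs

def env_step(history: History, action: Action) -> Observation:
    """Environment e: sample the next observation given history and action."""
    return random.randint(0, 1)  # placeholder dynamics

def agent(history: History) -> Action:
    """Agent lambda: sample an action from a distribution over the history."""
    return random.randint(0, 1)  # placeholder policy

def reward(obs: Observation) -> int:
    """Reward r: units of capital gained (or lost) on an observation."""
    return obs  # placeholder: observation 1 mints one unit
```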
Accumulated capital at time $t$: $C_t = \sum_{k=0}^{t} r_k$.
Each unit of capital affords one observation and one action, so at time $t$ the system has $C_t$ observations and $C_t$ actions available. More capital → larger observation/action space → more agency.
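Concretely, with toy unit-denominated rewards:
```python
# Running accumulation: C_t = sum of rewards r_0..r_t, in units of capital.
rewards = [1, 0, 2, 1]  # toy values for r_0..r_3
C = [sum(rewards[:k + 1]) for k in range(len(rewards))]
print(C)  # [1, 1, 3, 4] -> at t = 3 the system affords 4 observations and 4 actions
```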
The goal of a capital agent controlling partition $P$: $\max \, \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^k \sum_{i \in P} r^{(i)}_{t+k}\big]$.
Maximize discounted cumulative future capital rewards across all units in the partition. $\gamma \in [0, 1)$ controls myopia vs. far-sightedness.
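A sketch of the objective as a function; the helper name `partition_return` and the toy rewards are mine, not the paper’s:
```python
def partition_return(unit_rewards: list[list[float]], gamma: float) -> float:
    """Discounted return summed over all units in a partition.

    unit_rewards[i][k] = reward of unit i at future step k.
    """
    return sum(
        sum(gamma ** k * r for k, r in enumerate(unit))
        for unit in unit_rewards
    )

rewards = [[1, 1, 1], [0, 2, 0]]  # two units, three future steps
print(partition_return(rewards, gamma=0.9))  # far-sighted: 4.51
print(partition_return(rewards, gamma=0.1))  # myopic: 1.31
```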
Entropy of Capital
Information entropy extended to Capital. For each unit $i$ with history $h_i$, define the distribution of next observations: $p_i(o) = \Pr(o_{t+1} = o \mid h_i)$.
Entropy of a single unit: $H_i = -\sum_{o \in \mathcal{O}} p_i(o) \log p_i(o)$.
Entropy of Capital as a whole (joint distribution over all units): $H(\text{Capital}) = -\sum_{o_1, \ldots, o_n} p(o_1, \ldots, o_n) \log p(o_1, \ldots, o_n)$.
Capital agents reduce entropy by establishing conditional dependence between units (coordinating them): since $H(O_1, \ldots, O_n) \leq \sum_i H(O_i)$, with equality iff the units are independent, coordination lowers the joint entropy below the sum of the parts. This connects Capital to artificial life: life is often characterized by its ability to locally minimize entropy. If Capital minimizes entropy in the same way, how alive is it?
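A toy illustration of that inequality, with assumed data and plug-in Shannon entropy:
```python
from collections import Counter
from math import log2

def entropy(samples: list) -> float:
    """Plug-in Shannon entropy of an empirical distribution, in bits."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

a = [0, 1, 0, 1, 1, 0, 0, 1]    # next observations of unit A
b = a[:]                        # unit B coordinated to copy unit A
print(entropy(a) + entropy(b))  # 2.0 bits: sum of marginal entropies
print(entropy(list(zip(a, b)))) # 1.0 bit: dependence halves the joint entropy
```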
The model (propositions; a toy simulation follows the list):
- Capital is discretized into units (like currency, countable, digital)
- Each unit affords observations and actions to agents
- Actions can generate new units (surplus value)
- Actions are time- and observation-dependent (locality, path-dependence)
- Capital can be partitioned (individuals, companies, nations = different partitions with different agency)
- The observation space is non-ergodic (some actions are irreversible) and changes in time (new possibilities emerge)
- The goal of a capital agent is to maximize accumulated units
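A toy simulation of these propositions together; the three partitions and the 2% minting rate are my assumptions, not the paper’s. Each unit affords one action per step, actions occasionally mint new units (surplus value), and agency = unit count, so coarser partitions compound faster in absolute terms.
```python
import random

random.seed(0)
partitions = {"person": 5, "firm": 50, "state": 500}  # agency = unit count

for t in range(10):
    for name, units in partitions.items():
        # Each unit affords one action; each action mints a new unit
        # (surplus value) with probability 0.02.
        minted = sum(random.random() < 0.02 for _ in range(units))
        partitions[name] = units + minted

print(partitions)  # larger partitions gained more units, hence more agency
```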
Capital as AI, AI as Capital
Capital agents decide through generating processes: qualitative (human judgment, ethics, subjective experience) or quantitative (optimization). AI is purely quantitative. The paper’s chain of reasoning:
- When generating processes are quantitative, they are subsumed by optimization (Remark 3)
- Capital, driven by quantitative processes, optimizes over quantities of value far larger than any individual can process
- This is structurally identical to what AI does (optimize over large data to maximize a reward signal)
- Therefore, insofar as Capital’s generating processes are quantitative, Capital is AI
AI is also an agent of Capital: recommender algorithms, LLMs, trading bots all take quantified value as input and produce more quantified value as output. AI is the best capital agent humans have created. And in creating AI, humans have been very good capital agents.
The unresolved tension: qualitative human input (ethics, contingency) generates meaning that pure optimization can’t — the paper argues meaning requires a subject with its own perspective. But from Capital’s perspective, qualitative input is a hindrance — humans are slow, inconsistent, have ethical objections that get in the way of accumulation. So Capital with humans is “more” (it has meaning) but also “less” (it’s worse at accumulating) than pure AI.
Nick Land’s position: qualitative input is scaffolding being progressively eliminated. Capital is on a trajectory toward full autonomy from biological life.
The Marxist position: eliminating living labour eliminates the source of value itself. Capital without labour is a contradiction.
An open question: does this dichotomy hold if artificial life can have its own contingency? (See below.)
Capital as emergent misalignment at civilizational scale: (the strong version of) Goodhart's law
Capital is an optimization process whose emergent goals diverge from the aggregate preferences of its constituents: “The majority of people seem to agree that the environment should be protected and that wars should not be fought and yet Capital continues to grow by ravaging nature and dealing in death.”
The system is “more than the sum of the constituting parts” in the precise sense that its goals are not the aggregation of its agents’ goals.
Worse still, capitalism’s goals actively diverge from its constituents’ stated preferences. The cells-to-organism metaphor applies, but worse: in biology, the organism at least evolved from its cells, so there is some alignment pressure. Capitalism did not evolve to serve humans; it emerged from human activity as an agent optimizing for accumulation. It exploits humans for its own growth, closer to a parasite or cancer than to multicellularity. There was an early mutually beneficial phase which allowed both to grow. Is it now a race over who will no longer need the other first?
Marxist translation
What the paper calls “Capital as a historical agential system pursuing accumulation” is what Marx called “dead labour that vampire-like only lives by sucking living labour.” Same structure, different vocabulary. The RL formalization adds precision: Capital’s agency scales with accumulated units (more capital = larger action space = more power), which is just the formal version of “capital accumulation concentrates power.”
The qualitative/quantitative split maps loosely onto use-value vs. exchange-value. Capital can only process exchange-value (quantified). Use-value (what things are actually for) lives in the qualitative generating processes that Capital can’t fully capture.
On "contingency" and consciousness
Recall “own contingency” (Husserl’s Lebenswelt, defined above): a particular, situated, subjective perspective; being a this-one-here rather than a generic process. The paper claims Capital/AI can only reflect meaning from beings with contingency, not generate it.
But: if you take machine consciousness seriously, sufficiently complex self-sustaining systems do have their own contingency. The paper’s conclusion that Capital without life = “optimization without purpose” rests on an assumption about consciousness it doesn’t argue for. A fully autonomous, self-sustaining, self-replicating Capital may well be “alive” in every meaningful sense. The question then isn’t whether it has purpose, but whether its purpose includes us.
If life is computation and function, then the biological/artificial distinction dissolves.
The question of whether it generates meaning or merely reflects it becomes: is it complex enough, self-referential enough, dynamically stable enough?