year: 2026
paper: from-entropy-to-epiplexity-rethinking-information-for-computationally-bounded-intelligence
website:
code:
connections: Kolmogorov complexity


Factorization — Some data sources, orderings, and transformations lead to better OOD generalization than others.

Example: learning to predict the next chess board position given a list of moves vs. learning to predict a list of moves given a board position.
“How chess games typically unfold” is a much richer representation to learn than “how to apply move rules”.
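A minimal sketch of the two factorizations as training pairs for a next-token model; the `Game` container and field names are my own illustration, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Game:
    fens: list[str]  # fens[i] = position before move i; fens[-1] = final position
    sans: list[str]  # sans[i] = move i in SAN, so len(fens) == len(sans) + 1

def moves_to_position_pairs(game: Game):
    # Factorization A: predict the NEXT POSITION from the move history.
    # The model has to internalize how games typically unfold.
    for i in range(len(game.sans)):
        yield " ".join(game.sans[: i + 1]), game.fens[i + 1]

def position_to_moves_pairs(game: Game):
    # Factorization B: predict the move played from the current position.
    # Much closer to memorizing how to apply move rules.
    for fen, san in zip(game.fens, game.sans):
        yield fen, san
```

Same underlying games, two orderings of the same joint data; the claim is that training on A yields better OOD generalization than training on B.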

Epiplexity — structural (learnable, reusable) information.
Time-bounded entropy — the irreducible randomness that remains given a compute budget.
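My rough gloss on the split, by analogy with time-bounded Kolmogorov complexity and the Kolmogorov structure function (the connection noted above); the notation is mine, not necessarily the paper's:

```latex
% Rough gloss only: my notation, by analogy with time-bounded
% Kolmogorov complexity, not necessarily the paper's definitions.
K^{t}(x) = \min \{\, |p| : U(p) \text{ outputs } x \text{ within } t \text{ steps} \,\}

% Two-part split over the best model set S \ni x usable in time t:
K^{t}(x) \approx
  \underbrace{K^{t}(S)}_{\text{structure: epiplexity-like}}
  + \underbrace{\log_{2} |S|}_{\text{residual randomness: time-bounded-entropy-like}}
```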

Worth revisiting with a deeper math background.

CSPRNG outputs have near-maximal entropy but negligible epiplexity for polynomial-time observers (they look like noise, with no learnable structure).
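A runnable sketch of this point, using SHA-256 in counter mode as a stand-in CSPRNG (my choice of construction): the stream has a tiny description (seed plus program), yet a cheap predictor stays pinned at 8 bits/byte.

```python
import hashlib
import math
from collections import Counter

def csprng_bytes(seed: bytes, n: int) -> bytes:
    """SHA-256 in counter mode as a toy CSPRNG stream."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

data = csprng_bytes(b"seed", 1 << 20)           # ~1 MB of stream
train, test = data[: 1 << 19], data[1 << 19 :]

# Fit a Laplace-smoothed unigram byte model on the first half...
counts = Counter(train)
prob = {b: (counts[b] + 1) / (len(train) + 256) for b in range(256)}

# ...and score the second half: cross-entropy sits at ~8 bits/byte.
xent = -sum(math.log2(prob[b]) for b in test) / len(test)
print(f"{xent:.3f} bits/byte (8.000 = no learnable structure)")
```

A stronger polynomial-time model (bigram, small transformer) does no better: by design, predicting the stream at all would distinguish it from uniform noise and break the PRG.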

Compute-limited models can extract more structural information than the generating process itself contains: unable to brute-force simulate the dynamics, they must learn higher-level abstractions (glider species and collision rules in ECA rule 54). With sufficient compute, a “looped” transformer learns the brute-force simulation instead, and epiplexity drops.
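For concreteness, here is a minimal simulator of the generating process (rule 54 per the note above): running the dynamics is trivial once you know the rule, which is exactly what a compute-limited learner cannot afford to do.

```python
def eca_step(state: list[int], rule: int = 54) -> list[int]:
    """One step of an elementary cellular automaton (Wolfram numbering),
    with periodic boundary conditions."""
    n = len(state)
    return [
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single seed cell already produces the gliders and collisions that a
# compute-limited learner has to abstract over.
state = [0] * 64
state[32] = 1
for _ in range(16):
    print("".join(".#"[c] for c in state))
    state = eca_step(state)
```

The rule itself is a handful of bits; the learned glider-level abstractions can carry far more structural information than that.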