year: 2024
paper: https://openreview.net/pdf?id=Sv7DazuCn8
website: https://khurramjaved.com/the_big_world_hypothesis.html | https://www.youtube.com/watch?v=Fwkcc9tupCI
code:
connections: Richard Sutton
TLDR: The “big world hypothesis” states that for many AI problems, the world is multiple orders of magnitude larger than any agent can fully perceive or model. Even as computational resources grow exponentially, this remains true because: (1) better sensors generate more data, and (2) the world itself becomes more complex as other agents become more sophisticated.
Key implications:
- Agents must rely on approximate solutions rather than exact ones
- Online continual learning becomes essential (learning what’s needed now, forgetting what’s not)
- Under a fixed compute budget, cheap approximate algorithms can outperform expensive “exact” ones
- We need new benchmarks that constrain agent resources rather than just making environments bigger
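The tracking-versus-convergence point behind these implications can be shown with a toy sketch (my own illustration, not from the paper): an agent with a single scalar estimate tracking a drifting signal. A constant step size keeps learning and implicitly forgets stale data; a 1/t step size computes the “exact” sample average, which is optimal only if the world were stationary.

```python
import random

def drifting_target(t):
    """A toy nonstationary 'world' signal (hypothetical): a slow trend
    plus a square wave that flips sign every 500 steps."""
    return 0.001 * t + (0.5 if (t // 500) % 2 == 0 else -0.5)

def run(step_size_fn, steps=5000, seed=0):
    """Track the signal with an incremental estimate; return mean squared
    tracking error. step_size_fn(t) controls how fast old data is forgotten."""
    rng = random.Random(seed)
    estimate, total_err = 0.0, 0.0
    for t in range(1, steps + 1):
        y = drifting_target(t) + rng.gauss(0.0, 0.1)  # noisy observation
        total_err += (estimate - drifting_target(t)) ** 2
        estimate += step_size_fn(t) * (y - estimate)  # incremental update
    return total_err / steps

# Constant step size = recency weighting: keeps adapting, forgets the past.
online_err = run(lambda t: 0.1)
# 1/t step size = the sample average, "exact" only for a stationary mean.
exact_err = run(lambda t: 1.0 / t)
print(online_err < exact_err)  # under drift, the cheap tracker wins
```

Under drift, the constant-step-size tracker stays close to the current signal while the sample average converges to an increasingly stale mean, which is the flavor of argument the paper makes for online continual learning under bounded resources.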
The paper argues this isn’t a temporary limitation but a fundamental property of real-world AI problems, requiring a shift in how we design and evaluate algorithms.