Missing Paper: Learnable Nature Conjecture
Type: gap Slug: gap--learnable-nature-paper Sources: nobel-prize-lecture-accelerating-scientific-discovery-with-ai--hassabis Last updated: 2026-05-13
What’s missing
The 2024 Nobel Lecture articulates a conjecture that many natural laws are learnable by ML from data alone, but no peer-reviewed paper formalises or tests this claim. As of May 2026, this remains the most important idea in the corpus without a proper scholarly home.
Why it matters
If true, the conjecture reframes the relationship between AI and science: AI is not merely a tool but a discovery engine. If false, AlphaFold’s success is domain-specific and does not generalise.
What a paper would need
- Formal definition: what counts as “learnable”? What is the failure mode?
- Negative cases: domains where ML fails to discover regularities despite adequate data
- Comparison: learned models vs. first-principles simulation on the same problems
- Theoretical framework: connection to statistical learning theory, computational irreducibility
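To make the "formal definition" requirement concrete, a minimal sketch of what an empirical learnability probe might look like: generate noisy observations from a known law and check whether a simple learner recovers its governing structure. The choice of law (Kepler's third law), noise level, and recovery threshold below are illustrative assumptions, not anything stated in the lecture.

```python
import numpy as np

# Toy learnability probe (illustrative assumptions, not from the lecture):
# sample noisy data from a known law, then test whether a simple learner
# recovers the governing exponent. Kepler's third law in AU/years: T^2 = a^3.
rng = np.random.default_rng(0)
a = rng.uniform(0.4, 30.0, size=200)            # semi-major axes in AU
T = a ** 1.5 * (1 + rng.normal(0, 0.01, 200))   # periods with ~1% noise

# Fit log T = k * log a + c by least squares; call the law "learned"
# if the recovered exponent k is close to the true value 1.5.
k, c = np.polyfit(np.log(a), np.log(T), deg=1)
learned = abs(k - 1.5) < 0.05
print(f"recovered exponent k = {k:.3f}, learned = {learned}")
```

A paper would need to generalise this toy setup: a negative case would be a domain (e.g. a chaotic regime) where the same procedure fails despite comparable data volume.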
Connections
- Claim: claim--learnable-nature-conjecture
- Theme: theme--AI-for-science
Honest Gaps
- The Nobel Lecture extraction runs only ~8K characters; Hassabis may have articulated the conjecture more precisely in the full lecture than in what is available here.
- It is possible that a paper is in preparation at Isomorphic Labs or DeepMind but not yet published.