Post-AlphaFold (2022–Present)

Type: period Slug: period—post-alphafold Sources: deepmind-ceo-demis-hassabis-urges-caution-on-ai—hassabis, demis-hassabis-time100-on-alphafold-agi-and-humanity—hassabis, lex-fridman-podcast-475-future-of-ai-simulating-reality-physics-and-video-games—hassabis, nobel-prize-lecture-accelerating-scientific-discovery-with-ai—hassabis, pushing-the-frontiers-of-density-functionals-by-solving-the-fractional-electron-problem—hassabis Last updated: 2026-05-13


Summary

The post-AlphaFold period is defined by the 2024 Nobel Prize in Chemistry and its aftermath. Five sources span a density functionals paper (2022), the Nobel Lecture (2024), and three public statements (2023–2026). This is the only period with no peer-reviewed neuroscience papers and no game-playing AI papers — the corpus shifts entirely toward AI-for-science, public intellectual positions on AGI and safety, and the articulation of Hassabis’s broader worldview. The Nobel Lecture introduces the “learnable nature conjecture” — the most recent and least-examined claim in the corpus.

Core content

The Nobel Lecture (2024): “Accelerating Scientific Discovery with AI” (lecture—nobel-prize-lecture-accelerating-scientific-discovery-with-ai) is the capstone public statement, reviewing AlphaFold’s development and introducing the learnable nature conjecture — the proposal that many laws of nature might be discoverable by machine learning systems trained on experimental data. This is the first explicit philosophical claim about the nature of scientific knowledge in the corpus. The extraction is limited (~8K chars, likely slides/abstract only), making full analysis impossible.

Density functionals (2022): Pushing the frontiers of density functionals (paper—pushing-the-frontiers-of-density-functionals-by-solving-the-fractional-electron-problem) applied deep learning to solve the fractional electron problem in computational chemistry — a longstanding theoretical challenge. This extends the AI-for-science programme beyond protein structure into quantum chemistry.

Public intellectual positions (2023–2026): Three sources capture Hassabis’s evolving public stance. The TIME100 essay (essay—demis-hassabis-time100-on-alphafold-agi-and-humanity) frames AGI as achievable within years and positions AlphaFold as evidence that AI can solve “impossible” scientific problems. The Guardian op-ed (essay—deepmind-ceo-demis-hassabis-urges-caution-on-ai), whose metadata year is incorrect (2026 should be 2023), calls for international coordination on AI safety. The Lex Fridman interview (interview—lex-fridman-podcast-475-future-of-ai-simulating-reality-physics-and-video-games) is the most extensive source, covering AGI timelines, consciousness, the simulation hypothesis, and the role of games as testbeds.

Shift in corpus character: This is the only period dominated by non-peer-reviewed sources (1 paper, 1 lecture, 3 essays/interviews). The intellectual content shifts from technical contributions to public philosophy and institutional advocacy.

Connections

  • Themes: theme—AI-for-science, theme—learnable-nature-conjecture, theme—safety-governance, theme—AGI-definition, theme—AGI-risk
  • Projects: project—DeepMind-general (density functionals), none (public statements)
  • Collaborators: Brendan McMorrow, David H. P. Turban, Alexander L. Gaunt, James Spencer (density functionals); Lex Fridman (interviewer)
  • Venues: venue—Nobel-Prize, venue—Science, venue—TIME, venue—The-Guardian, venue—Lex-Fridman-Podcast
  • Succeeds: period—alphafold-era — the AlphaFold results create the platform for the Nobel Prize and for Hassabis’s subsequent public role
  • Key claim: Learnable nature conjecture (lecture—nobel-prize-lecture-accelerating-scientific-discovery-with-ai)

Honest Gaps

  • The Nobel Lecture extraction is only ~8K chars — likely slides or abstract, not a full transcript. This is the most important gap in the entire corpus.
  • No peer-reviewed papers from 2023–2026 are present — the corpus may be missing recent publications (e.g., Gemini-related work, Isomorphic Labs papers).
  • The metadata year for the Guardian op-ed is wrong (2026 should be 2023) — the actual publication date needs verification.
  • The Lex Fridman transcript (~167K chars) is the longest source, but it covers topics at a conversational level rather than in technical depth.
  • No sources document the internal DeepMind/Google reorganisation (2023 merger into Google DeepMind) or Hassabis’s role as Google DeepMind CEO.
  • The learnable nature conjecture has no peer-reviewed paper supporting it — flagged as gap—learnable-nature-paper.
  • No primary sources cover Hassabis’s testimony before governments or regulatory bodies, despite his known advocacy role.