Papers
- A Clinically Applicable Approach to Continuous Prediction of Future Acute Kidney Injury
- A Distributional Code for Value in Dopamine-Based Reinforcement Learning
- A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play
- Advancing Mathematics by Guiding Human Intuition with AI
- Applying and Improving AlphaFold at CASP14
- Big-Loop Recurrence Within the Hippocampal System Supports Integration of Information Across Episodes
- Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease
- Computations Underlying Social Hierarchy Learning
- Decoding Individual Episodic Memory Traces in the Human Hippocampus
- Decoding Neuronal Ensembles in the Human Hippocampus
- Deconstructing Episodic Memory with Construction
- Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning
- Highly Accurate Protein Structure Prediction for the Human Proteome
- Highly Accurate Protein Structure Prediction with AlphaFold
- Human-Level Control Through Deep Reinforcement Learning
- Human-Level Performance in First-Person Multiplayer Games with Population-Based Deep Reinforcement Learning
- Hybrid Computing Using a Neural Network with Dynamic External Memory
- Imagine All the People: How the Brain Creates and Uses Personality Models
- Improved Protein Structure Prediction Using Potentials from Deep Learning
- Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
- Mastering the Game of Go with Deep Neural Networks and Tree Search
- Mastering the Game of Go without Human Knowledge
- Neural Mechanisms of Hierarchical Planning in a Virtual Subway Network
- Neural Scene Representation and Rendering
- Neuroscience-Inspired Artificial Intelligence
- Overcoming Catastrophic Forgetting in Neural Networks
- Patients with Hippocampal Amnesia Cannot Imagine New Experiences
- Prefrontal Cortex as a Meta-Reinforcement Learning System
- Protein Complex Prediction with AlphaFold-Multimer
- Protein Structure Predictions to Atomic Accuracy with AlphaFold
- Pushing the Frontiers of Density Functionals by Solving the Fractional Electron Problem
- Reinforcement Learning, Fast and Slow
- Semantic Representations in the Temporal Pole Predict False Memories
- The Construction System of the Brain
- The Future of Memory: Remembering, Imagining, and the Brain
- Tracking the Emergence of Conceptual Knowledge during Human Decision Making
- Using Imagination to Understand the Neural Basis of Episodic Memory
- Vector-Based Navigation Using Grid-Like Representations in Artificial Agents
- What Learning Systems Do Intelligent Agents Need? Complementary Learning Systems Theory Updated
- When Fear Is Near: Threat Imminence Elicits Prefrontal-Periaqueductal Gray Shifts in Humans
Lectures
Interviews
Essays
- Chess Match of the Century
- DeepMind CEO Demis Hassabis Urges Caution on AI
- Demis Hassabis Is Preparing for AI’s Next Chapter (TIME100)
Posts
Themes
- AI for Science
- Deep Reinforcement Learning
- Game-Playing AI
- Hippocampal Construction
- Memory and Imagination
- Neuroscience-AI Bridge
- Protein Folding
- Self-Play
Projects
Periods
- PhD Period (2007–2009)
- Postdoc Period (2009–2010)
- Early DeepMind (2010–2015)
- DeepMind Ascent (2015–2018)
- AlphaFold Era (2018–2022)
- Post-AlphaFold (2022–Present)
Collaborators
Venues
Claims
- Hippocampus as Construction System
- Learnable Nature Conjecture
- Neuroscience-AI Bidirectional Bridge
- Self-Play Sufficiency
Gaps
Intersections
Epistemic tags: Grounded = both parents make specific, conflicting claims and the gap is directly verifiable. Extrapolative = one parent’s mechanism is extended to a new domain. Conjectural = relies on mechanisms from other intersections rather than directly on corpus papers.
Priority ranking (top 5)
- ⬥ Self-Play Discovers Its Own Consolidation (Conjectural). Decisive experiment: does meta-self-play discover CLS without biological priors?
- ⬥ Memory as Query, Not Store (Conjectural). Paradigm test: are hippocampal traces stable, or do they shift with consolidation?
- ⬥ EWC × Experience Replay (Grounded). Trivially testable gap: same lab, same problem, never connected; a minimal sketch of the combined training loop follows this list.
- Slow RL (Grounded). Clean protocol: parametric distance plus RL model fitting.
- ⬥ Social Hierarchy × Self-Play (Grounded). Direct RL experiment: rank representations in multi-agent play.
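The EWC × Experience Replay item is the most directly runnable of the five. A minimal sketch of the proposed comparison, assuming a PyTorch classifier trained on a sequence of tasks; the class names, hyperparameters, and diagonal-Fisher estimate below are illustrative assumptions, not taken from the corpus papers:

```python
# Illustrative sketch only: one network, sequential tasks, EWC penalty + replay buffer.
import random
import torch
import torch.nn.functional as F

class EWCRegularizer:
    """Quadratic penalty (lambda/2) * sum_i F_i (theta_i - theta*_i)^2, with a
    diagonal Fisher estimated from squared gradients on the task just finished."""
    def __init__(self, model, task_batches, lam=100.0):
        self.lam = lam
        self.anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for x, y in task_batches:
            model.zero_grad()
            F.cross_entropy(model(x), y).backward()
            for n, p in model.named_parameters():
                self.fisher[n] += p.grad.detach() ** 2 / len(task_batches)

    def penalty(self, model):
        return self.lam / 2 * sum(
            (self.fisher[n] * (p - self.anchor[n]) ** 2).sum()
            for n, p in model.named_parameters()
        )

def train_task(model, opt, task_batches, replay_buffer, ewc=None, replay_k=32):
    for x, y in task_batches:
        if replay_buffer:  # interleave old samples: the replay half of the comparison
            rx, ry = zip(*random.sample(replay_buffer, min(replay_k, len(replay_buffer))))
            x, y = torch.cat([x, torch.stack(rx)]), torch.cat([y, torch.stack(ry)])
        loss = F.cross_entropy(model(x), y)
        if ewc is not None:  # the consolidation half of the comparison
            loss = loss + ewc.penalty(model)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Store raw samples so later tasks can replay them.
    replay_buffer.extend((xi, yi) for xb, yb in task_batches for xi, yi in zip(xb, yb))
```

The decisive comparison would run the same task sequence under four conditions (neither mechanism, EWC only, replay only, both) and measure retention on the earlier tasks.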
First-order (pairwise)
- Hippocampal Construction × Self-Play (Extrapolative). Imagination and self-play both generate novel outputs by recombination.
- Experience Replay × Hippocampal Replay (Grounded). The most glaring silence: DQN’s experience replay and hippocampal SWR replay address the same problem and were never connected.
- Learnable Nature × Hippocampal Construction (Extrapolative). Both are structure prediction from compressed representations.
- Meta-RL × Self-Play (Grounded). A meta-learner that discovers its learning algorithm through self-play.
- Slow RL × AlphaGo Zero Dynamics (Grounded). The dual-system split may be emergent rather than designed.
- Big-Loop Recurrence × Attention (Grounded). Iterative integration as fixed-point iteration x_{t+1} = f(x_t); see the sketch after this list.
- GQN × Hippocampal Construction (Grounded). Scene representation plus novel-viewpoint synthesis; first-/third-person perspective as query variation.
- Grid Cells × Self-Play (Grounded). Do grid-like representations emerge differently under non-stationary self-play?
- EWC × Experience Replay (Grounded). The second glaring silence: two biologically inspired solutions to forgetting, same lab, never connected.
- DNC × Complementary Learning Systems (Grounded). The DNC’s external memory is an incomplete hippocampus; CLS provides the missing consolidation.
- Distributional RL × Hippocampal Construction (Extrapolative). Imagined scenarios should produce distributional prediction errors.
- Semantic False Memories × Construction (Grounded). False memories are over-construction: the error term of the system.
- Social Hierarchy × Self-Play (Grounded). Multi-agent self-play should produce internal rank representations.
- Density Functionals × Learnable Nature (Grounded). The strongest evidence for learnable nature, never connected to the conjecture.
- Slow RL (Grounded). The vmPFC→PAG shift is a parametric fast/slow transition.
- Conceptual Emergence × Big-Loop Recurrence (Extrapolative). Conceptual abstraction as a phase transition from associations to categories.
- MuZero × Construction (Extrapolative). Both learn internal models without being given the rules; MuZero predicts, construction generates.
- Personality Models × Meta-RL (Extrapolative). Personality knowledge conditions the learning algorithm.
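For the Big-Loop Recurrence × Attention pairing above, the fixed-point reading x_{t+1} = f(x_t) can be made concrete. A minimal sketch, assuming f is one round of self-attention over episode embeddings; the module, dimensions, and stopping rule are illustrative assumptions, not from the corpus:

```python
# Illustrative sketch only: cross-episode integration as fixed-point iteration
# x_{t+1} = f(x_t), where f is one self-attention pass over episode embeddings.
import torch
import torch.nn as nn

class BigLoopIntegrator(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def step(self, x):
        # One integration round: every episode embedding attends to all the others.
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

    def forward(self, episodes, max_iters=50, tol=1e-4):
        x = episodes
        for _ in range(max_iters):
            x_next = self.step(x)
            if (x_next - x).norm() < tol:  # stop near a fixed point x ≈ f(x)
                return x_next
            x = x_next
        return x

# Usage: integrate five 64-dimensional episode embeddings into a joint state.
integrated = BigLoopIntegrator()(torch.randn(1, 5, 64))
```

Whether the iteration converges at all, and how any fixed point relates to behavioural integration across episodes, is exactly the question this intersection raises.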
Second-order (intersection × intersection)
- Sleep Cycle (Extrapolative). Construct during wake (self-play), consolidate during sleep (replay).
- Slow (Extrapolative). The brain’s dual-system architecture may be a discovered optimum.
- Episodic Consolidation as Structure Inference (Extrapolative). Memory consolidation is big-loop attention over constructed scenes.
- Meta-Replay (Extrapolative). Replay past learning episodes to improve the learning algorithm itself.
- Semantic Split (Extrapolative). Episodic and semantic memory as emergent outputs of a single construction system.
- Mathematics of Construction (Extrapolative). Use AI-guided mathematical exploration to formalise episodic construction.
- Mental Time Travel as Viewpoint Synthesis (Extrapolative). Recall is re-rendering a cross-episode scene from a different query viewpoint.
- Emergent Cognitive Maps in Adversarial Environments (Extrapolative). Fast/slow split instantiated as a spatial-scale split over grid-like codes.
- Error Term of Learnable Construction (Extrapolative). False memories reveal where the learned construction grammar breaks down.
- Social Meta-Self-Play (Extrapolative). Rank-conditioned meta-learning; different algorithms for different opponents.
- AI-Discovered Laws of Mind (Extrapolative). Discover correctness properties in cognitive architecture.
Third-order (convergences)
- Convergences (Conjectural). Four unified theories: AlphaFold for Imagination, Developmental Stack, Social Construction Engine, Self-Diagnostic Discovery Machine.
Feedback loops (intersection × intersection, recursive)
- Self-Play Discovers Its Own Consolidation (Conjectural). Meta-self-play discovers CLS without biological priors.
- Memory as Query, Not Store (Conjectural). Episodic “traces” are query parameters into scene grammars.
- Self-Correcting Construction (Conjectural). AI-discovered laws predict false-memory locations; errors bootstrap the theory.
- Spatially Structured Mental Time Travel (Conjectural). Recall is navigation over grid-like constructed scenes.