Self-Correcting Construction
Type: intersection (feedback loop)
Slug: intersection—self-correcting-construction
Parents: intersection—AI-discovered-laws-of-mind, intersection—false-memories-error-term-learnable-construction
Last updated: 2026-05-14
Epistemic status: Conjectural
The feedback loop
AI-Discovered Laws of Mind (S) says AI can discover correctness properties in cognitive architecture by training multiple architectures and finding convergent properties. Error Term of Learnable Construction (T) says false memories reveal where the learned construction grammar breaks down — they’re informative errors, not noise. Combined: the AI-discovered laws of experiential coherence should predict false memory locations. Discover the laws → predict the error locations → find false memories at predicted locations → refine the laws. This closes a self-correcting loop: the system’s errors are used to improve the system’s theory of itself.
Why this is a novel scientific method
Current cognitive science proceeds: (1) propose theory, (2) test predictions, (3) revise theory. Errors are used to reject or refine theories externally by the researcher. Self-correcting construction proposes: (1) AI discovers laws from data, (2) laws predict error locations, (3) errors at predicted locations confirm laws and identify where they need refinement, (4) refined laws are re-discovered from expanded data. The system itself uses its errors to improve its theory, without human intervention in the theory revision step.
Concrete procedure
- Train multiple architectures on construction-like tasks (scene completion, imagination)
- Identify convergent properties across architectures → candidate “laws of coherence”
- Use these laws to predict where false constructions should occur — which scene elements will be incorrectly combined, under what conditions
- Test predictions against human false memory data (DRM paradigm, temporal pole false recognition)
- If false memories occur at the predicted locations, the laws are provisionally confirmed; mismatches between predicted and observed locations identify which laws need refinement
- Retrain with data weighted to challenge the identified weaknesses
- Repeat until laws predict all observed error locations
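The procedure above can be sketched as a toy loop. Everything here is an illustrative assumption: the dictionary representation of an architecture's learned properties, the convergence tolerance, and the rule that strong coherence pressure predicts false construction are stand-ins, not part of the proposal itself.

```python
# Toy sketch of the discover -> predict -> test -> refine loop.
# All names, values, and the "laws" representation are illustrative.

def discover_laws(architectures):
    """Candidate laws = properties on which all architectures converge."""
    common = set(architectures[0])
    for arch in architectures[1:]:
        common &= set(arch)
    return {p: architectures[0][p] for p in common
            if all(abs(a[p] - architectures[0][p]) < 0.1 for a in architectures)}

def predict_error_locations(laws, threshold=0.5):
    """Toy rule: strong coherence pressure predicts false construction."""
    return {prop for prop, strength in laws.items() if strength > threshold}

def refine(laws, predicted, observed):
    """Matched predictions confirm laws; mismatches flag retraining targets."""
    confirmed = predicted & observed
    to_retrain = (predicted | observed) - confirmed
    return confirmed, to_retrain

# Three simulated architectures converge on two properties.
architectures = [
    {"semantic_gist": 0.80, "temporal_order": 0.30, "idiosyncratic": 0.9},
    {"semantic_gist": 0.82, "temporal_order": 0.31},
    {"semantic_gist": 0.79, "temporal_order": 0.28, "other": 0.5},
]

laws = discover_laws(architectures)
predicted = predict_error_locations(laws)
observed = {"semantic_gist"}          # e.g. DRM-style semantic false memories
confirmed, to_retrain = refine(laws, predicted, observed)
print(confirmed)    # predictions matched by human data
print(to_retrain)   # locations where the laws need refinement
```

In a real instantiation, step 6 of the procedure (retraining with reweighted data) would feed `to_retrain` back into `discover_laws` on the next pass.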
Why this couldn’t work with current methods
Current false memory research is descriptive: it catalogues where false memories occur but doesn’t derive them from a theory of coherence. The DRM paradigm shows semantic false memories but doesn’t predict which specific lures will produce false memories from first principles — it uses pre-selected lure lists. Self-correcting construction would predict lure effectiveness from the learned coherence laws, not from experimenter intuition.
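To make "predict lure effectiveness from coherence laws" concrete, here is a minimal sketch under strong assumptions: the 3-d "embeddings" are invented values, and scoring a lure by its mean similarity to the study list's gist is one simple stand-in for a learned coherence law.

```python
# Hypothetical sketch: scoring DRM lures from a gist-similarity rule
# rather than a pre-selected lure list. Embedding values are invented.

import math

EMB = {
    "bed":   (0.90, 0.10, 0.00), "rest":  (0.80, 0.20, 0.10),
    "awake": (0.70, 0.30, 0.00),
    "sleep": (0.85, 0.15, 0.05),   # classic DRM critical lure
    "chair": (0.10, 0.90, 0.20),   # unrelated control word
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def lure_score(lure, study_list):
    """Predicted false-recognition pressure: mean similarity of the
    lure to the words actually studied."""
    return sum(cos(EMB[lure], EMB[w]) for w in study_list) / len(study_list)

study = ["bed", "rest", "awake"]
print(lure_score("sleep", study))   # high: predicted to produce false memory
print(lure_score("chair", study))   # low: predicted to be rejected
```

The point of the sketch is the direction of inference: the lure ranking falls out of the learned representation, not out of experimenter intuition.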
Connection back to mathematics of construction
The Mathematics of Construction intersection (6) proposes formalising the construction system. Self-correcting construction provides the empirical constraint for that formalisation: any formal theory of construction must predict false memory locations. If the formal theory and the empirically discovered laws converge, this is joint theoretical-empirical confirmation. If they diverge, the divergence localises the theory’s error precisely.
What makes this non-trivial
This proposes using cognitive errors as the primary data source for theory discovery, rather than cognitive successes. Standard science studies what systems get right; this studies what they get wrong, on the principle that errors reveal the structure of the generating system more informatively than correct outputs. In information-theoretic terms: correct outputs carry low surprisal (they confirm what the model already predicts); errors carry high surprisal (they reveal exactly where the model breaks).
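The information-theoretic point can be made concrete with Shannon surprisal, -log2 p: the probabilities below are illustrative, but the asymmetry they show is general.

```python
# Surprisal (information content) of correct vs. erroneous outputs.

import math

def surprisal(p):
    """Shannon surprisal in bits: -log2(p)."""
    return -math.log2(p)

# Assumed probabilities: 0.95 for an output the model predicts correctly,
# 0.01 for a false memory the model failed to anticipate.
print(surprisal(0.95))  # about 0.07 bits: little new information
print(surprisal(0.01))  # about 6.64 bits: the error is far more informative
```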
Falsification: If AI-discovered coherence laws do not predict false memory locations better than chance, the core premise of the self-correcting loop is falsified.
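"Better than chance" can be operationalised with a permutation test: compare the number of predicted locations that actually produced false memories against the same number of locations placed at random. The location indices and hit sets below are invented for illustration.

```python
# Permutation test for the falsification criterion. Data are invented.

import random

random.seed(0)

locations = list(range(20))
predicted = set(range(8))            # locations the laws flag
observed = {0, 1, 2, 3, 4, 11, 15}   # where false memories actually occurred

def hits(pred, obs):
    return len(pred & obs)

actual = hits(predicted, observed)

# Null distribution: same number of predictions placed at random.
null = [hits(set(random.sample(locations, len(predicted))), observed)
        for _ in range(10_000)]
p_value = sum(h >= actual for h in null) / len(null)
print(actual, p_value)   # the laws survive only if p_value is small
```

A p_value near the null expectation would mean the discovered laws carry no predictive information about error locations, which is exactly the falsifying outcome.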