About this post
This picture was oversimplified. LLMs can sometimes self-correct in later steps, for example by re-evaluating earlier assumptions, validating outputs, or running consistency checks. That means failures don't always propagate linearly. Still, these self-corrections are not guaranteed, and they don't eliminate the core problem: long, branching workflows are fragile.
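As a minimal sketch of what such a self-correction step can look like (not this post's actual pipeline), here is one workflow step that produces a draft, runs a validation check, and feeds a failed draft back for revision. The names `generate`, `validate`, and `step_with_self_correction` are illustrative assumptions, not a real library API; the point is that even with the retry loop, the final draft can still be wrong.

```python
from typing import Callable


def step_with_self_correction(
    generate: Callable[[str], str],   # hypothetical: produces a draft answer for a prompt
    validate: Callable[[str], bool],  # hypothetical: consistency / sanity check on the draft
    prompt: str,
    max_retries: int = 2,
) -> str:
    """Run one workflow step, re-asking the model when validation fails.

    Self-correction here is best-effort: if every retry fails validation,
    the last draft is returned and the error can still propagate downstream.
    """
    draft = generate(prompt)
    for _ in range(max_retries):
        if validate(draft):
            return draft
        # Feed the failed draft back so the model can re-evaluate its assumptions.
        draft = generate(
            f"{prompt}\n\nYour previous answer failed a check:\n{draft}\nPlease revise it."
        )
    return draft  # not guaranteed to be correct


# Toy usage with stubbed callables (no real LLM call):
if __name__ == "__main__":
    attempts = iter(["forty-two-ish", "42"])
    result = step_with_self_correction(
        generate=lambda _prompt: next(attempts),
        validate=lambda draft: draft.isdigit(),
        prompt="What is 6 * 7?",
    )
    print(result)  # "42"
```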