Glossary

Self-Correction (Language Models) A capability wherein a language model identifies and rectifies errors, inconsistencies, or flaws within its own reasoning or generated output. This can occur in two primary ways:

  • Multi-Turn Self-Correction: An iterative process where the model generates an output, critiques it (often prompted), and then provides a revised response in a subsequent turn.
  • Single-Utterance Intrinsic Self-Correction: The ability to detect and correct a reasoning error during the generation of a single, uninterrupted response, without external prompts or verification. This can be implicit (silently fixing the error) or explicit (acknowledging the mistake, e.g., “Wait, let me correct that…”).

based on: https://www.arxiv.org/pdf/2506.15894
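The multi-turn variant described above can be sketched as a generate–critique–revise loop. This is a minimal illustration, not the method of the linked paper: `generate`, `critique`, and `revise` are hypothetical stand-ins for calls to a language model.

```python
# Sketch of a multi-turn self-correction loop.
# All three helpers below are hypothetical placeholders; a real system
# would replace each with a call to an LLM.

def generate(prompt):
    # Placeholder first-pass answer (deliberately wrong for illustration).
    return "2 + 2 = 5"

def critique(answer):
    # Placeholder critic: returns a critique string, or None if no issue found.
    if "= 5" in answer:
        return "The arithmetic is incorrect."
    return None

def revise(answer, feedback):
    # Placeholder reviser that produces a corrected answer.
    return "2 + 2 = 4"

def self_correct(prompt, max_turns=3):
    answer = generate(prompt)
    for _ in range(max_turns):
        feedback = critique(answer)
        if feedback is None:  # critic is satisfied: stop iterating
            break
        answer = revise(answer, feedback)
    return answer

print(self_correct("What is 2 + 2?"))  # -> 2 + 2 = 4
```

The loop terminates either when the critic finds no remaining flaw or after a fixed turn budget, mirroring the iterative prompt–critique–revise cycle described above.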

Naur Theory (Programming as Theory Building) A perspective on software development, introduced by Peter Naur, which posits that the primary activity of programming is not the creation of source code (the text), but rather the construction of a detailed mental model (the “theory”) within the programmer’s mind. This “theory” represents a complete understanding of the problem and its solution. The source code is considered an incomplete, secondary documentation of this internal theory, which explains why program comprehension and maintenance are difficult for those who do not possess this mental model.

CoT (Chain-of-Thought) A reasoning process in which a Large Language Model (LLM) generates a series of intermediate, step-by-step “thinking” steps to solve a complex problem, rather than providing only the final answer. It is used to enhance a model’s performance on reasoning-intensive tasks such as mathematics, coding, and scientific reasoning. Increasing the length of this reasoning process has also been shown to yield significant performance improvements.
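The difference between a direct prompt and a chain-of-thought prompt can be shown with two strings. The prompts below are hypothetical examples (the zero-shot cue “Let’s think step by step” and the worked math problem are illustrative, not from the source); in practice they would be sent to an LLM.

```python
# A direct prompt asks only for the final answer.
direct_prompt = "Q: A train travels 60 km in 1.5 hours. What is its speed?\nA:"

# A chain-of-thought prompt elicits (or demonstrates) intermediate steps
# before the final answer.
cot_prompt = (
    "Q: A train travels 60 km in 1.5 hours. What is its speed?\n"
    "A: Let's think step by step.\n"
    "Speed is distance divided by time.\n"
    "60 km / 1.5 h = 40 km/h.\n"
    "The answer is 40 km/h."
)

print(cot_prompt)
```

The intermediate lines (“Speed is distance divided by time…”) are the “thinking” steps the definition refers to; only the last line carries the final answer.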