A learned idea is equivalent to a new idea. Both have been created and criticized in the mind.

But it’s not exactly the same, because in certain proofs (e.g., zero-knowledge proofs) the verifier does not have to go through the prover’s thought processes.

Next:

Related:


The results from the Anthropic study: participants in the AI group finished faster by about two minutes (not statistically significant), yet on average the AI group scored significantly worse on the quiz—17% lower, or roughly two letter grades. The high scorers (65%+) did something different: some generated code first, then asked follow-up questions to understand what they’d produced; others requested explanations alongside the code; the fastest group asked only conceptual questions, then coded independently while troubleshooting their own errors.

In short, you have to understand what’s been done.

Related:


Contradictory?

  • Jack Clark on Gemini solving some Erdos problems (Google DeepMind et al) (20260210)
    • AI massively speeds up generating candidate proofs, but the bottleneck becomes human experts who must evaluate correctness. “Large Language Models can easily generate candidate solutions, but the number of experts who can judge the correctness of a solution is relatively small, and even for experts, substantial time is required to carry out such evaluations”, the authors write.
      • The prover–verifier relationship turned on its head <> the verifier could be the bottleneck <> Scott Aaronson