“One thing I’d find surprising is if there was a combination of a technological breakthrough that improved the efficiency of distributed training, and some set of actors that put together enough computers to train a very powerful system. If this happened, it would suggest you can not only have open-weight models but also a form of open model development where it doesn’t take a vast singular entity (e.g. a company) to train a frontier model. This would alter the political economy of AI and have extremely non-trivial policy implications, especially around the proliferation of frontier capabilities.” – Jack Clark, co-founder of Anthropic (20260110)
Next:
Related:
- 5-3b Knowledge creates new frontiers (and new markets)
- 7-1d2 Technology changes the problem-situation, and determines which ideas are possible and obsolete
- 7-1a2a You will be solving universal problems by attending to local-parochial problems first
- 7-1a5 Startups work on technology because great ideas made viable by newest tech (itself a new technology) is the best source of rapid change and growth
- 8-1c4a2 Twitter’s 140-character limit was useful constraint
- As the Idea Maze gets better at reflecting reality more accurately, it becomes harder to fool yourself
- According to Jack, Epoch has a nice analysis of distributed training
- The implication of Jack’s quote: hardware can also influence software
- My view is that the uninterrupted development of LLMs would depend on privacy infrastructure developing (e.g., decentralized prover networks, which include ZK prover infrastructure; see the sketch after this list)
- Otherwise the physical-electrical footprint of training will remain visible
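A loose sketch of what a prover-style check might look like in a decentralized training setting, using a plain commit-reveal scheme in Python as a simplified stand-in for a real ZK proof. Everything here (function names, the payload) is hypothetical; an actual ZK prover would let others verify a contribution without the update ever being revealed:

```python
# Hypothetical commit-reveal sketch (NOT real zero-knowledge): a participant
# in decentralized training commits to its local model update, and can later
# open the commitment so peers verify it hasn't been swapped or tampered with.
import hashlib
import os


def commit(update: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce) binding the participant to a local update."""
    nonce = os.urandom(32)
    commitment = hashlib.sha256(nonce + update).digest()
    return commitment, nonce


def verify(commitment: bytes, nonce: bytes, update: bytes) -> bool:
    """Check that the revealed update matches the earlier commitment."""
    return hashlib.sha256(nonce + update).digest() == commitment


if __name__ == "__main__":
    local_update = b"serialized gradient shard"  # placeholder payload
    c, n = commit(local_update)
    print(verify(c, n, local_update))            # True
    print(verify(c, n, b"tampered update"))      # False
```

The point of the sketch is only the shape of the protocol: contributions become verifiable artifacts rather than trusted claims, which is the property a decentralized prover network would need to provide, with the added requirement that verification not expose the update itself.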