LLMs can solve problems; that’s what matters. Maybe the code isn’t perfect, but efficiency barely matters in most cases. What matters is whether the problem gets solved (unless the problem is about efficiency itself). The output might be inefficient because LLMs are translating English into programming languages, but this isn’t an LLM-specific problem; it also happens with humans translating between languages (or even within the same language). And in most cases it’s far more efficient to use LLMs than not. The difference is like the one between those who use the internet and those who don’t.

As such, if you look at LLMs through a problem-solving lens, they enhance individual sovereignty, contrary to popular belief. They allow you to solve your own problems without delegating the task to other capitalist entities. AI gives you the ‘average’ answer only if you don’t nudge the model, and when you really want to solve something personal, you will nudge the model. [0, 0.1] Put differently, the ‘average’ will fatten unless people hold opinionated perspectives and act on them. [1]

This has nothing to do with the AGI debate. LLMs work because they don’t have to start locally; only humans do. [1.1] The debate makes sense only if you’re truly trying to replicate humans. Creativity cannot be harnessed by more data, but verification gets faster with more data. [2]

Don’t blindly trust AI though. The user should verify the output. How? By simply asking: is the problem solved? And if you’re curious enough, you’ll know how it was done. [3]
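To make “is the problem solved?” concrete, here is a minimal sketch in Python. The function name and the toy problem are made up for illustration; the point is that verification means turning your own problem statement into checks and running them, not auditing every line the model wrote.

```python
# Hypothetical example: verifying LLM output by checking whether the
# problem is actually solved, rather than reading every line.
# Suppose the problem was: "deduplicate a list while keeping order".
# `llm_generated_dedupe` stands in for whatever code the model returned.

def llm_generated_dedupe(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# The verification is just the problem statement turned into checks:
assert llm_generated_dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
assert llm_generated_dedupe([]) == []
assert llm_generated_dedupe(["a", "a"]) == ["a"]
print("problem solved")  # trust (to an extent) and verify (to an extent)
```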

From a problem-solving perspective, LLMs are worth accelerating.


[0] <> 1-1a2e7b You need some form of constraints to see anything <> 2-1a0c1d2 Less is often more

[0.1] <> 6-3d Be very specific about problems and divide a project into clear-cut parts so it doesn’t become bleak, like a shared common room. <> 6-3z A world without ownership is a world with less creativity and human flourishing

[1] <> 2-1a1a3e There’s no objective average <> 3-1c3c4.2 “Whenever you find yourself on the side of the majority, it is time to pause and reflect”

[1.1] The model can already self-correct its errors, and that’s as good as finding its own problems. But the ultimate problem is defined by humans.

[2] <> 5-1b1a8a1 Prediction ≠ Knowledge (because prediction requires knowledge) <> Judea Pearl

[3] Trust (to an extent) and verify (to an extent) <> 1-2g2q1 Science is about independent replication. Only trust as scientific truth what can be independently verified and replicated.