LLMs can solve problems; that’s what matters. Maybe the code isn’t perfect, but efficiency barely matters in most cases. What matters is whether the problem gets solved (unless the problem is about efficiency itself). The output might be inefficient because LLMs are translating English into programming languages, but this isn’t an LLM-specific problem; it also happens when humans translate between languages (or even within the same language). And in most cases it’s far more efficient to use LLMs than not. The difference is comparable to the gap between people who use the internet and those who don’t.
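A toy sketch of the point (hypothetical code, not actual model output; the function names are mine): both versions below solve the same problem, one quadratically and one linearly, and for everyday inputs the difference is invisible.

```python
# Hypothetical example: two ways to find duplicate entries in a list.
# An LLM might plausibly produce the first version; it is O(n^2) but correct.
def find_duplicates_naive(items):
    """Return values that appear more than once (quadratic scan)."""
    duplicates = []
    for i, a in enumerate(items):
        if a in items[i + 1:] and a not in duplicates:
            duplicates.append(a)
    return duplicates

# A hand-tuned O(n) version using a set. For most real inputs both
# finish instantly; the problem is equally solved either way.
def find_duplicates_fast(items):
    seen, duplicates = set(), set()
    for a in items:
        if a in seen:
            duplicates.add(a)
        seen.add(a)
    return sorted(duplicates)

assert find_duplicates_naive([1, 2, 2, 3, 3]) == [2, 3]
assert find_duplicates_fast([1, 2, 2, 3, 3]) == [2, 3]
```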
As such, if you look at LLMs through a problem-solving lens, they enhance individual sovereignty, contrary to popular belief. They let you solve your own problems without delegating the task to other capitalist entities. AI gives you the ‘average’ answer only if you don’t nudge the model, and when you really want to solve something personal, you will nudge it. [0, 0.1] Put differently, the ‘average’ will fatten unless people hold an opinionated perspective and act on it. [1]
This has nothing to do with the AGI debate. LLMs work because they don’t have to start locally—only humans do. [1.1] The debate makes sense only if you’re truly trying to replicate humans. Creativity cannot be harnessed by more data. But we can verify faster with more data. [2]
Don’t blindly trust AI, though. Verify the output. How? By simply asking: is the problem solved? And if you’re curious enough, you’ll learn how it was done. [3]
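A sketch of what ‘is the problem solved?’ can look like when the output is code, assuming a model wrote a small function for you (`slugify` and the test inputs here are invented for illustration):

```python
# Suppose an LLM wrote slugify() for you. You don't have to audit every
# line; you verify against the problem you actually stated.
def slugify(title):  # hypothetical model-generated function
    return "-".join(title.lower().split())

# Verification = checking the problem is solved, not reading the diff.
assert slugify("Hello World") == "hello-world"
assert slugify("  Spaces   Everywhere ") == "spaces-everywhere"
```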
LLMs are worth accelerating from a problem-solving perspective.
[0] <> 1-1a2e7b You need some form of constraints to see anything <> 2-1a0c1d2 Less is often more
[0.1] <> 6-3d Be very specific about problems and divide a project into clear-cut parts so it doesn’t become bleak, like a shared common room. <> 6-3z A world without ownership is a world with less creativity and human flourishing
[1] <> 2-1a1a3e There’s no objective average <> 3-1c3c4.2 “Whenever you find yourself on the side of the majority, it is time to pause and reflect” >< To be universal, protocols must be unopinionated
[1.1] The model can already self-correct its errors, and that’s as good as finding its own problems. But the ultimate problem is still defined by humans (at least for now).
[2] <> 5-1b1a8a1 Prediction ≠ Knowledge (because prediction requires knowledge) <> Judea Pearl
[3] Trust (to an extent) and verify (to an extent) <> 1-2g2q1 Science is about independent replication. Only trust as scientific truth what can be independently verified-replicated.
Related:
- We need to give LLMs agency. How? By clearly stating what should be solved:
- 3-1c2 Write down your problems
- ==4-1a4b6a When you write down, you are helping yourself both now and in the future== (develop)
- Digital-native means multiplicity and concurrency—and the two are shared by both crypto and AI
- 7-1a1 You can only solve your own problems. You incidentally help others by solving THAT.
- 7-1a2a You will be solving universal problems by attending to local-parochial problems first
- Todd Graves: “Not trying to be all things to all people is so important because if you try to be all things to all people, you’re not anything to anybody.”
- < 1-1a5b4.2 GTFOL, ASAP: too much generalization isn’t good
- 9-4c2 ‘Programs should be written for people to read, and only incidentally for machines to execute.’ ‘Design to express algorithms, and only incidentally tell machines how to execute them.’
- Each of your bets should have a corresponding problem
- Each of your communities should be about a corresponding problem <> 2-1a0c1d The One Commandment is about focus: focus on a single moral innovation
- 11-4 Diversification can achieve what multiplicity does in the digital
- Things are more efficient when they know what they are trying to achieve
Inspirations:
- The Last Moat Standing by @fintechjunkie
- If anyone can build your product in a weekend, what’s actually defensible?
- An opinionated perspective on the solution
- Being able to build and understanding the best way to solve a problem aren’t remotely the same thing.
- This is why great products feel “just better” even when you can’t articulate why.
- <> Steve Jobs: “Customers don’t know what they want until we’ve shown them.”
- Opinions can be copied but they can’t copy what happens next
- We’re entering an era where your competition isn’t only other startups—it’s also your user deciding they could probably just do this themselves on a Saturday.
- In that world, only one thing matters: having a perspective worth paying for. The products that survive aren’t going to be the ones with the best tech or biggest teams. They’re going to be the ones where someone formed a genuine opinion about the right way to solve something and kept refining it over and over and over again.
- Erik Hoel: Proving (literally) that ChatGPT isn’t conscious (revisit)