The assumption that the mind is the same as a computer neglects reality. That is one to revisit.


Looking at my memos above, maybe it’s fair to say that agents with skills and specific context—i.e., constraints—are more agentic and hence more mind-like (20260131).

Related:


I think we should sub-categorize the mind into human intelligence and artificial intelligence. Only then would both Human intelligence ≠ a computer (Daniel Everett) and Artificial intelligence = a computer (Elon Musk) hold. Capability-wise, the two intelligences might be the same. But human intelligence evolves and widens its knowledge base via abduction (recalibration), while LLMs start from a fixed base and narrow themselves down via induction. Maybe there lies the functional difference. I think creativity has at least two constituents: problem-solving and knowledge creation. Artificial intelligence probably has a larger problem-solving repertoire than humans, and thus can appear more ‘creative’ from a problem-solving perspective. Yet that doesn’t objectively imply AGI unless we figure out knowledge creation. And I think there are at least two sub-categories within knowledge creation: chess-like vs. poker-like. Most math proofs are probably in the former; unifying the theory of relativity with quantum physics is probably in the latter. And I think it has something to do with Scott Aaronson’s Complexity Zoo. Develop.

“Well, I’d be surprised by the end of this year if digital human emulation has not been solved. I guess that’s what we sort of mean by the MacroHard project. Can you do anything that a human with access to a computer could do? In the limit, that’s the best you can do before you have a physical Optimus. The best you can do is a digital Optimus. You can move electrons and you can amplify the productivity of humans. But that’s the most you can do until you have physical robots. That will superset everything, if you can fully emulate humans. Physics has great tools for thinking. So you say, “in the limit”, what is the most that AI can do before you have robots? Well, it’s anything that involves moving electrons or amplifying the productivity of humans. So a digital human emulator is, in the limit, a human at a computer, is the most that AI can do in terms of doing useful things before you have a physical robot. Once you have physical robots, then you essentially have unlimited capability. Physical robots… I call Optimus the infinite money glitch.” – Elon Musk

People have put so much information on the internet that the digital domain would naturally be the first to be automated. But I don’t think the logistics would be as straightforward in the analog case, because it’d be harder to flood the physical world with artificial intelligence (robots) without pushback, unlike in the digital case. Also, I think there is far less 3D data than text data—e.g., unless airlines open-source their flight data, complete pilot automation might be unlikely. The implication is that closed communities might be immune to automation, for good and bad.

“The most valuable companies currently by market cap, their output is digital. Nvidia’s output is FTPing files to Taiwan. It’s digital. Now, those are very, very difficult. Apple doesn’t make phones. They send files to China. Microsoft doesn’t manufacture anything. Even for Xbox, that’s outsourced. Their output is digital. Meta’s output is digital. Google’s output is digital. So if you have a human emulator, you can basically create one of the most valuable companies in the world overnight, and you would have access to trillions of dollars of revenue. It’s not a small amount.” – Elon Musk

“How many petawatts of intelligence will be silicon versus biological? Basically humans will be a very tiny percentage of all intelligence in the future if current trends continue. As long as I think there’s intelligence—ideally also which includes human intelligence and consciousness propagated into the future—that’s a good thing. So you want to take the set of actions that maximize the probable light cone of consciousness and intelligence. I think maybe in five or six years, AI will exceed the sum of all human intelligence. If that continues, at some point human intelligence will be less than 1% of all intelligence. In the long run, I think it’s difficult to imagine that if humans have, say 1%, of the combined intelligence of artificial intelligence, that humans will be in charge of AI.” – Elon Musk

Source: 20260206 Dwarkesh Podcast