7 Comments
Luke Burton

I’m someone in the basement of the AI … crèche? … in Silicon Valley. I’ve found that those of us thinking about the impact of AI have a tendency to extrapolate straight from today, all the way to self-aware / autonomous / super-intelligent AGIs. Where we are today is that our AIs are mounds of lifeless floating point numbers. We pass electricity through them, they twitch like Galvani’s frogs, then lie still. How we get from this moment to AIs that respond to competitive evolutionary pressures is really quite unclear. One thing is certain: it won’t fall out of the scaling laws, any more than a hyper-accurate weather forecasting system would eventually evolve into a rain cloud.

The velocity of improvements we’re able to wring out of that floating point corpse gives the impression we’re on a straight-line trajectory. But pretty much every human innovation tapers into an S-curve. Shouldn’t we be on Mars right now? The optimistic position would be that this is an interregnum, that the “mech suit” for thinking and doing afforded by today’s AIs is an intelligence ratchet which breaks the S-curve. I think this is very plausible, actually. The pessimistic position is that the standard S-curve results in a new local maximum and we’re stuck with very lifelike but ultimately lifeless simulacra of intelligence.

It feels as if the local maximum outcome is so horrifying to think about that we’d rather spend time worrying about alignment strategies to prevent the earth being turned into a paperclip factory. But the former has a much higher probability than the latter, which should worry us a great deal more. The abrupt nullification of biological life might be preferable to the consequences of a botched rollout of stillborn AI affecting untold generations of humans.

In the stillborn AI scenario wages for the majority of human labor crash through the floor. Scaling laws mean powerful AIs are concentrated in the hands of the very few national or private entities with the capital and infrastructure required to generate terawatts of power. Operators of these AIs are able to abuse the intelligence ratchet to outthink and outsmart adversaries. There is no incentive to reach AGI: why would you create your ultimate competitor? There will be no storming of the Bastille because the planet is an info-Bastille; the data harvested in the digital and physical worlds would make Graham’s number blush.

Here’s one datapoint to make this feel less hyperbolic. It is already the case that I, a lifelong programmer and engineer in my 40s, am 100x more productive using AI agents (what the kids call “vibe coding”). It truly feels like a superpower. There is a slot-machine quality to the experience and I’ve never written so much code in my life. It’s also true that junior engineers are more productive, but by a smaller factor multiplied by less experience. I’ve seen a staggering number of “old” programmers pick up their tools again, long dormant, and perform miracles. This effect is somewhat dampened by a significant quantity of senior engineers feeling threatened and skeptical of AI while the younger ones take it for granted. But it’s happening. And the key point is the existence of the gap.

So imagine that effect multiplied out across industries, where the AI advantage stacks up for the incumbents, and it remains the case that the combination of an incumbent with significant experience plus an AI assistant always has an advantage – even a slight advantage – over inexperienced or less capable competitors. It is like compound interest for intelligence and execution; you only need to be left behind a little bit in the beginning to be an order of magnitude behind towards the end.
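To make the compound interest analogy concrete, here’s a toy calculation; the head start and the per-cycle growth rates are illustrative assumptions, nothing more:

```python
# Toy model of compounding advantage: two actors start almost even,
# but the incumbent's AI-amplified output compounds slightly faster.
# The 5% head start and the 10% vs. 5% per-cycle gains are assumptions.

incumbent, challenger = 1.05, 1.00   # incumbent starts 5% ahead

for cycle in range(50):
    incumbent *= 1.10    # experience + AI assistant: 10% gain per cycle
    challenger *= 1.05   # less experience, same tools: 5% gain per cycle

print(f"gap after 50 cycles: {incumbent / challenger:.1f}x")  # ~10.7x
```

A five-point edge per cycle is barely perceptible at first, yet over fifty cycles it compounds into an order-of-magnitude gap.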

Anyway, we’ll be dealing with some runaway evolutionary processes, whether they’re human+AI or AI alone, and human+AI scares me more than AI itself. In the long term Mother Gaia remains indifferent and will happily turn our remains into soil for the next cycle.

Satisfying Click

As usual, his writing starts with a pen, which becomes a scalpel, which ends up as a hammer.

Mark Neyer

Re practical steps to induce competition: I imagine hackers and fraudsters are already at work on this, trying to get as much free usage of the LLMs as they can. These companies also have to compete with each other. Rather than regulate the companies, I think the better use of government funds would be to fund companies that use AI tools to continuously penetration-test government systems and reveal vulnerabilities. It would help harden government systems and maybe help us evolve “network guard dog” type intelligences.
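As a very rough sketch of what one cycle of that continuous testing could look like (the scanner flags, target list, and schedule are all illustrative assumptions, and it should only ever run against systems one is authorized to test):

```python
# Rough sketch of a continuous scan-and-triage loop. The target list,
# schedule, and scanner flags are illustrative assumptions; this should
# only ever run against systems you are authorized to test.
import subprocess
import time

AUTHORIZED_TARGETS = ["scanme.example.gov"]  # hypothetical allow-list

def scan(target: str) -> str:
    # nmap -sV probes open ports for service and version information
    result = subprocess.run(["nmap", "-sV", target],
                            capture_output=True, text=True, timeout=600)
    return result.stdout

while True:
    for target in AUTHORIZED_TARGETS:
        report = scan(target)
        # Next step: hand `report` to an LLM to flag likely weaknesses
        # for human review, the "network guard dog" part of the idea.
        print(report)
    time.sleep(24 * 60 * 60)  # rescan daily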

Murray Kopit

On the topic of "free" usage, my method of doing research is to employ multiple LLM instances as a panel of experts. By simply using copy and paste, I get them to critique and improve on one another's work. Not only does this amplify the results, it also gives me the benefit of a virtually unlimited token ceiling! How cool is that? This collaborative approach could add leverage to building those "network guard dog" intelligences in the future. A similar critique-and-refine step is already used internally in some LLM architectures; mine runs through the usual semantic interface in conversation space, no engineering required.
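For anyone who wants to automate the copy-and-paste step, here is a minimal sketch of the panel loop against an OpenAI-compatible chat API; the model names, prompts, and round count are illustrative assumptions:

```python
# Minimal sketch of the "panel of experts" loop: each model in turn
# critiques and rewrites the previous draft. Model names, prompts,
# and the number of rounds are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
PANEL = ["gpt-4o", "gpt-4o-mini"]  # stand-ins for separate LLM instances

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = ask(PANEL[0], "Summarize the main arguments of this paper: ...")
for _ in range(2):  # two full critique rounds across the panel
    for model in PANEL:
        draft = ask(model, "Critique the following answer, then rewrite "
                           f"it to fix every flaw you found:\n\n{draft}")
print(draft)
```

Since each critique is a fresh call that carries only the latest draft, no single conversation's context window caps the process.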

Murray Kopit

In the few short weeks since I wrote that on May 27, I've moved to using swarms of agents with Claude Code CLI. Things are moving fast.

Murray Kopit

Neal, your observation that animals do what they do because of their motivation, their survival imperatives, cuts right to the bone of the difference between AI and sentient life.

In a paper I just sent to JCS, I argue that what we recognize as sentience is this very thing: the anticipation of action, rooted in a recursive drive for survival. I call it the Survival Recursion Theory of Sentience. It’s not about how well an animal can mimic intelligence, but how it must act to persist.

AI, by contrast, has no survival recursion; it just computes, with no consequence if it doesn’t. That’s why it can talk like a wise old dog, but it’s still just a parrot, and I argue that sentience is not destined to emerge from AI by scale alone.

Thanks for writing this piece. It’s rare to see someone acknowledge that what makes animal intelligence real is precisely what AI can’t emulate: the geometry of survival itself. Let me know if you’d like to read more. I think it aligns well with what you’re circling here.

Anjuli Pierce

I know someone who is up to the task of training predator AI models...
