6 Comments
Satisfying Click

As usual, his writing starts with a pen, which becomes a scalpel, which ends up a hammer.

Mark Neyer

Re practical steps to induce competition: I imagine hackers and fraudsters are already at work on this, trying to get as much free usage of the LLMs as they can. These companies also have to compete with each other. Rather than regulating the companies, I think a better use of government funds would be to pay companies to use AI tools to continuously penetration-test government systems and reveal vulnerabilities. It would help harden government systems and might help us evolve “network guard dog” type intelligences.

Murray Kopit

On the topic of "free" usage, my method of doing research is to employ multiple LLM instances as a panel of experts. By simply copying and pasting between them, I get them to critique and improve on one another's work. Not only does this synergistically amplify the results, it gives me the benefit of a virtually unlimited token ceiling! How cool is that? This collaborative approach could add leverage to building those "network guard dog" intelligences in the future. The method is already used internally in LLM architectures; mine just runs through the ordinary conversational interface, no engineering required.
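The copy-and-paste loop described here can be sketched in a few lines. This is only an illustration of the pattern, not the commenter's actual workflow: `ask` is a hypothetical stand-in for whatever LLM interface you use (here a trivial stub so the sketch runs), and the model names are made up.

```python
# Sketch of a "panel of experts" critique loop: each round, every panelist
# critiques the current draft, then one panelist revises the draft using
# the pooled critiques.

def ask(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; swap in your own API here.
    return f"[{model}] response to: {prompt[:40]}..."

def panel_review(draft: str, panel: list[str], rounds: int = 2) -> str:
    for _ in range(rounds):
        # Collect one critique of the current draft from each panelist.
        critiques = [ask(m, f"Critique this draft:\n{draft}") for m in panel]
        # Have one panelist integrate all the feedback into a new draft.
        revision = "Revise the draft using these critiques:\n" + "\n".join(critiques)
        draft = ask(panel[0], revision)
    return draft

final = panel_review("Initial research summary.", ["model-a", "model-b", "model-c"])
```

Doing this by hand in chat windows is the same loop with a human as the message bus, which is what sidesteps any single session's token ceiling.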

Murray Kopit

In a few short weeks since I wrote that on May 27, I'm now using swarms of agents with Claude Code CLI. Things are moving fast.

Murray Kopit

Neal, your observation that animals do what they do because of their motivation, their survival imperatives, cuts right to the bone of the difference between AI and sentient life.

In a paper I just sent to JCS, I argue that what we recognize as sentience is this very thing: the anticipation of action, rooted in a recursive drive for survival. I call it the Survival Recursion Theory of Sentience. It’s not about how well an animal can mimic intelligence, but how it must act to persist.

AI, by contrast, has no survival recursion; it just computes, with no consequence if it doesn't. That's why it can talk like a wise old dog but remains a parrot, and I argue that sentience is not destined to emerge from AI by scale alone.

Thanks for writing this piece. It’s rare to see someone acknowledge that what makes animal intelligence real is precisely what AI can’t emulate: the geometry of survival itself. Let me know if you’d like to read more. I think it aligns well with what you’re circling here.

Anjuli Pierce

I know someone who is up to the task of training predator AI models...