This is a cleaned-up version of a brief talk I gave at a meeting on 18 June 2025: the “Ship Your Research Impact Summit” organized by the Laude Institute in San Francisco. The audience was mostly AI researchers, many of whom had one foot in academia and one in the startup world. Thanks to Andy Konwinski and the rest of the Laude Institute crew for inviting me.
The Primer
The most relevant aspect of my work to the theme of this meeting was my novel The Diamond Age, which was published about thirty years ago. At the beginning of this book we see a conversation between Lord Finkle-McGraw, who is an Equity Lord in a futuristic neo-Victorian society, and John Hackworth, an engineer who works in one of his companies.
Finkle-McGraw is a classic founder. He didn’t come from a privileged background, except insofar as having a stable family and a decent basic education confers privilege. But when he was young he was brilliant, ambitious, hard-working, and had a vision. He built that into something valuable and as a result became rich and powerful. As so often happens, he used his money to make life good for his children by sending them to the right schools, connecting them to the right people, and so on.
He wasn’t entirely happy with the results. His kids didn’t end up having the traits that had made him successful. He suspects it’s because they didn’t have to work hard and overcome obstacles. Now he has a granddaughter. He knows that the parents are going to raise this girl in the same way, with the same results. He can’t interfere in a heavy-handed way. But the parents can’t possibly object if he gives his granddaughter an educational book. So he commissions Hackworth to make the Young Lady’s Illustrated Primer, an interactive book that will adapt as the user grows and learns. This book is powered by molecular nanotechnology, but any present-day reader will immediately recognize it as an AI system.
As the plot unfolds, three copies of the Primer are made and bestowed on girls from very different backgrounds. In two cases the result is a sort of fizzle. The Primer works as it’s supposed to for a while, but these girls lose interest and set it aside. The third copy falls into the hands of a girl from an abusive and underprivileged background, and it ends up giving her close to superhuman abilities.
Thirty years on, I think I have enough distance on this to grade my performance. I’m happy with the fact that the Primer, as described in the novel, doesn’t invariably produce great results. That seems like a measured and realistic outcome. Nevertheless it’s clear that when I wrote this thing I was influenced by a strain of techno-utopian thinking that was widespread in the mid-1990s, when the Internet was first becoming available to a mass audience. In those days, a lot of people, myself included, assumed that making all the world’s knowledge available to everyone would unlock vast stores of pent-up human potential.
That promise actually did come true to some degree. It’s unquestionably the case that anyone with an Internet connection can now learn things that they could not have had access to before. But as we now know, many people would rather watch TikTok videos eight hours a day. And many who do use the Internet to “do research” and “educate” themselves are “learning” that Ivermectin cures COVID, that the sky is full of chemtrails spewed out by specially equipped planes, and that vaccinations plant microchips in your body.
AI and Education
Now the cycle of enthusiasm and disillusionment is repeating itself with AI. This time, though, it’s happening a lot faster, because we all have a kind of preloaded cynicism about what new technology can and can’t do for us. This is happening on many fronts, but I’m going to confine myself here to education.
I won’t attempt to provide an account of how the use of ChatGPT and other systems has affected the education system, because that is being exhaustively discussed elsewhere. I’ve found the teachers subreddit to be particularly informative, since it’s populated by people who actually have boots on the ground, as it were, working in classrooms every day. Here’s a representative post on this topic. But discussion of this is all over the place now. The gist of it is that the system we’ve traditionally used for evaluating students’ performance - homework and tests - just happens to be exquisitely vulnerable to being hacked by students who simply use conversational AI systems to do all the work for them. And they are doing so on a massive scale, to the point where conventional education has essentially stopped functioning. The only way to fairly evaluate how much a student has learned now is by marching them into a classroom with no electronics, handing them a pencil and a blank blue book, and assigning them an essay to write or a math problem to solve. Even this is impractical given that many students never really learned to write by hand. And that is setting aside the greatly increased burden of work that it would impose on already stressed and underpaid teachers.
During the brief time that I was preparing this talk, two relevant articles came to my attention. One is Your Brain on ChatGPT, a 206-page research paper out of MIT in which researchers hooked students up to EEGs and had them write an essay. They were divided into three groups: the LLM group, which was allowed to use conversational AI; the Search Engine group, which was allowed to Google things; and the Brain-only group, which did not have access to any such tools. The results weren’t terribly surprising: the people in the Brain-only group showed a lot more cerebral activity. Even though these results might have been predicted, it’s valuable to see hard data.
The second article was a piece in the New York Times describing an initiative by OpenAI to create an AI-driven “study buddy” that would become integrated into university education. It’s not really clear how this would work. The article is careful to mention possible ways it could go wrong. The fear raised by this is that it would just be the same broken, brain-stunting state of affairs we have now, overlaid with a thick glaze of PR. The hope, however, is that it would leapfrog our current, hopelessly out-of-date system for evaluating student performance.
If that hope is well founded, what would such a system look like?
This question sent me down a rabbit hole on the topic of self-reliance. After all, if AI-driven education does nothing more than make students even more reliant on AI, then it’s not education at all. It’s just a vocational education program teaching them how to be of service to AIs. The euphemism for this role is “prompt engineer,” which seems to be a way of suggesting that people who feed inputs to AIs are achieving something that should be valorized to the same degree as designing airplanes and building bridges.
If such a system actually did its job it would have the paradoxical effect of making students less, rather than more, reliant on the use of AI technology.
Self-Reliance
I had the idea of turning to Ralph Waldo Emerson’s essay Self-Reliance, which had a big influence on me when I was fresh out of college. In those days I pored over this essay, notebook in hand, and copied out several passages. I still have that old notebook. The essay contains a few bangers, the best known of which is “A foolish consistency is the hobgoblin of little minds.”
One line that I memorized at the time is “In every work of genius we recognize our own rejected thoughts: they come back to us with a certain alienated majesty.”
And another one that I passed over at the time, but that seems painfully relevant to much Internet discourse, is “If I know your sect, I anticipate your argument.” Followed up a few paragraphs later by “Leave your theory, as Joseph his coat in the hand of the harlot, and flee.”
My thought last week was that Self-Reliance might contain some wisdom applicable to the challenge of how to educate people in the modern world to rely upon their own knowledge and skill set rather than using AI all the time.
Reader, I did not find anything like that upon re-reading this essay. More the opposite. The overall drift of what Emerson is saying here—and he says it over and over—is that each mind is uniquely positioned to see certain insights. The self-reliant person shouldn’t ignore those merely because they don’t match the conventional wisdom. “The eye was placed where one ray should fall, that it might testify of that particular ray…God will not have his work made manifest by cowards….He who would gather immortal palms (i.e. be honored for great achievements) must not be hindered by the name of goodness, but must explore if it be goodness. Nothing is at last sacred but the integrity of your own mind.”
That is all intoxicating stuff for a smart young man who styles himself as a free thinker and nonconformist, which is why, when I was in my early twenties, I inhaled it like fentanyl fumes off hot foil. But during the same years as I was poring over this essay and jotting down quotes in my notebook, I was writing by far the worst novel I have ever written—a book that has never been published and never should be.
Emerson grew up in Boston, attended Boston Latin and Harvard, then traveled around Europe and visited England, where he hung out with Wordsworth, Coleridge, and Thomas Carlyle. His brain was preloaded with the best knowledge base that could possibly have been given to a young person of that era. He’d been trained to think systematically and rigorously and to express himself with great fluency in English and probably Latin and other languages as well.
So, yes, when an idea popped into Emerson’s head, chances are it was a pretty damned good one. His own advice about self-reliance was actually worth taking in his own case. And I’d guess that the audience for this essay was similarly well educated. By the time any young person happened upon Self-Reliance, they were probably 99% of the way to being an intellectually mature, highly capable person, and just wanted a bit of self-confidence to follow through on good ideas that were coming into their heads—as a result of being that well educated and trained.
When the same advice falls on the ears of people who are not as well informed and not as good at thinking systematically, though, it’s rubbish.
When I first read Self-Reliance, only a few years had passed since the premiere of the first Star Wars movie. There’s a pivotal moment in that film when Luke Skywalker is piloting his fighter through the trench on the Death Star, making his bombing run against impossible odds, and he hears Obi-Wan Kenobi’s voice in his head telling him to use the Force. Luke switches off his targeting computer, to the consternation of the brass in the ops center. We all know the outcome. It’s a great moment in cinema, and it perfectly encapsulates a certain way of thinking emblematic of the late hippie scene of the 1970s: the seductive proposition that no one needs a targeting computer, that all we need to do is trust our feelings. Who doesn’t love to hear that? I loved hearing it from Ralph Waldo Emerson, and spent a couple of years of my life building a terrible novel on that foundation.
Grit
So Self-Reliance didn’t get me any closer to answering the question of how AI could improve education—how we could actually make students self-reliant, as opposed to AI-reliant. I got on the plane to San Francisco intending to throw this out as an open question for the AI researchers and entrepreneurs at the Laude Institute meeting. A challenge for them to address, as opposed to a pat solution that I would serve up. Some would call it a cop-out.
Shortly before the plane started its descent into SFO, I was browsing Substack and found a post that Niall Ferguson had just put up. Niall starts by quoting the exact conversation from The Diamond Age that I alluded to above, then describes his own vision for how education in the age of AI might work. I won’t recount it in detail since all you have to do is click on the link. Since it literally fell into my lap just before I returned my tray table to its upright and locked position, I plugged it in my talk at Laude and I’m plugging it here. The gist of it is that students ought to spend a substantial part of each day in an electronics-free environment, reading books and interacting directly with teachers and fellow students (“the Cloister”), and then, at other times, avail themselves of everything that AI and the Internet have to offer (“the Starship”).
It seems to me that this general plan would work if it could be implemented. But why would it work? What’s the essential skill that students need to be learning, such that when they get out of school they are more capable humans than when they went in?
Going back to that fictional conversation in The Diamond Age, I think that the answer—the thing that Finkle-McGraw acquired during his upbringing, that he failed to confer on his children, and that he wants to give his granddaughter—isn’t simply a body of knowledge to be memorized or a set of skills to be mastered. It’s a stance. A stance from which to address the world and all its challenges. A stance built on self-confidence and resilience: the conviction that one has a fighting chance to overcome or circumvent whatever obstacles the world throws in one’s path. The way you acquire it is by trying, and sometimes failing, to do difficult things. It can be discouraging, but if you have good mentors, and if you’re collaborating with friends who are in the same boat, you can find ways to succeed, and develop a knack for it. That’s true self-reliance.
From that standpoint, the most insidious thing about AI is that it solves problems for the user and never places them in a situation where they have to overcome failure. Problems might get solved in the end, which sounds good, but the “prompt engineers” who cajoled the AIs into solving them don’t understand how those solutions were produced, since it all happened inside a black box, and so they never acquired the kind of self-reliance that matters.
All of that is a natural outcome of an AI industry that demonstrates its usefulness, and raises funds, by showing that it can solve problems. There’s no reason in principle why AI couldn’t be turned to a different problem: making students more self-reliant. The paradox is that you learn self-reliance through failure, and AI tools construe failure as a malfunction. AI’s purpose, as currently configured, is to make things easy for humans. And humans who’ve had it easy from birth don’t have the grit to deal with challenges.
Comments
There are also upsides to the current state of AI. I work with exceptional young people on the edge of science and tech, and what I can see is that for the autodidact, it is now much easier than ever before to get to the frontier of knowledge in a field.
Any freedom of will or self-reliance worth wanting isn't an on-off switch, as though some animals have it and others don't. It seems more likely that it comes in degrees, and that it is not a permanent state won and thereafter kept with no effort, but a fitness that must be maintained.
Some institutions will foster it and support it, some won't. We can all agree on that--Emerson included, though he did have a penchant for antinomianism in all things.
What I find surprising is your hidden assumption that schools prior to the invention of LLMs inculcated epistemic self-reliance. The Great Stagnation (or Innovation Starvation, if you will) may have been caused by the end of the Cold War, or even over-regulation, but one main factor surely was our education system from top to bottom. And the issue wasn't money or more teachers. We simply don't know how to educate most people. This is the age of blood-letting.
Let me have the temerity to add my own reading to that scene in The Diamond Age: what Finkle-McGraw is saying is that creativity is not IQ, nor robotic mastery of past achievements, nor ascension through a hierarchy of prestige. It is a separate faculty, as Wordsworth suggests in his Prelude. In the psychology of creativity to date, researchers are basically at a loss. There is no predictive test--like an IQ test--that can measure a person's creativity. All we have are case studies of creative people. We do not know how John and Paul became the Beatles or how Einstein became Einstein. They can tell us what they did, and maybe something of their process, but how to learn from them--or what they learned that others in their classes didn't--remains a mystery.
Our education system has always been blind, and I would say inimical, to this faculty. That's been a far bigger problem for longer than children cheating on their essays.
This gave me some useful perspective on the current round of "AI." I've lived through at least two prior ones - including 1984-1988, when my work computer was a Lisp Machine.
As a (comfortably retired) EE in technical software - mostly Electronic Design Automation - who first got paid to program in 1967, I regularly get asked about computer technology by people who watch too much "news" on the TV (which I gave up about 20 years ago).
I tell (or remind) a lot of them about Eliza. I personally see most of today's "AI" as either its successor or, worse, son of Clippy! I treat "AI" answers roughly like all search results (and Wikipedia) - a source of pointers, many wrong, but some of which can be useful.
I've yet to try "AI" as leverage for programming (son of IntelliSense?) but find that a bit intriguing. Software automation has been an unfulfilled dream of mine for over 50 years.
I try to avoid the idiot chat bots (ubiquitous on web sites now) and have little interest in ChatGPT (which autocorrect tried to turn into catgut!) and its ilk.
The "AI" that I use is in roughly three things: (1) speaker independent voice recognition and response - "Echo, set the heat pump to 72" and (2) increasingly powerful "satellite" navigation systems, which I use regularly for traffic based routing decisions and (3) language translation HELPING with my not-so-good German and Spanish.
ANYWAY... Your piece also goaded me into finally paying for Niall Ferguson's Substack! So there!
At 77 I don't have children and am unlikely to do so but my friends who do (generally with grandchildren now) are dealing with this issue on a regular basis.