There are also upsides to the current state of AI. I work with exceptional young people on the edge of science and tech, and what I can see is that for the autodidact, it is now much easier than ever before to get to the frontier of knowledge in a field.
Any freedom of will or self-reliance worth wanting isn't an on-off switch, as though some animals have it and others don't. It seems more likely that it comes in degrees, and that it is not a permanent state won and thereafter kept with no effort but a fitness that must be maintained.
Some institutions will foster it and support it, some won't. We can all agree on that--Emerson included, though he did have a penchant for antinomianism in all things.
What I find surprising is the hidden assumption that schools prior to the invention of LLMs inculcated epistemic self-reliance. The Great Stagnation (or Innovation Starvation, if you will) may have been caused by the end of the Cold War, or even by over-regulation, but one main factor surely was our education system, from top to bottom. And money or more teachers wasn't the issue. We simply don't know how to educate most people. This is the age of blood-letting.
Let me have the temerity to add my own reading to that scene in the Diamond Age: what Finkle-McGraw is saying is that creativity is not IQ, nor robotic mastery of past achievements, nor ascension through a hierarchy of prestige. It is a separate faculty, as Wordsworth suggests in his Prelude. In the psychology of creativity to date, researchers are basically at a loss. There is no predictive test--like an IQ test--that can measure a person's creativity. All we have are case studies of creative people. We do not know how John and Paul became the Beatles or how Einstein became Einstein. They can tell us what they did and maybe something of their process, but what they learned that others in their classes didn't remains a mystery.
Our education system has always been blind, and I would say inimical, to this faculty. That's been a far bigger problem for longer than children cheating on their essays.
As the parent of teenagers who are in the education system now, I see the reliance on AI as merely the current point on a trajectory that started with social media and was amplified by "remote learning" during covid. We are bewildered as parents. Our kids have lives that are substantially more removed from our own childhoods than ours were from our parents'. Facebook debuted in 2004; we're the first generation to try to raise kids in the shadow of ubiquitous access to all the delights of the global internetwork. And yes, this resonates--the need for "A stance built on self-confidence and resilience: the conviction that one has a fighting chance to overcome or circumvent whatever obstacles the world throws in one’s path." I return over and over again to psychologist Carol Dweck's classic book Mindset, about fixed vs. growth mindsets. Self-reliance requires a growth mindset--the sense that there are solutions to problems, that obstacles can be overcome, and that personal change and progress are possible. I haven't figured out how to instill that. I don't think kids actually like feeling helpless--they want self-actualization.
This gave me some useful perspective on the current round of "AI." I've lived through at least two prior ones - including 1984-1988, when my work computer was a Lisp Machine.
As a (comfortably retired) EE in technical software - mostly Electronic Design Automation - who first got paid to program in 1967, I regularly get asked about computer technology by people who watch too much "news" on the TV (which I gave up about 20 years ago).
I tell (or remind) a lot of them about Eliza. I personally see most of today's "AI" as either its successor or, worse, son of Clippy! I treat "AI" answers roughly like all search results (and Wikipedia) - a source of pointers, many wrong, but some of which can be useful.
I've yet to try "AI" as leverage for programming (son of IntelliSense?) but find that a bit intriguing. Software automation has been an unfulfilled dream of mine for over 50 years.
I try to avoid the idiot chat bots (ubiquitous on web sites now) and have little interest in ChatGPT (which autocorrect tried to turn into catgut!) and its ilk.
The "AI" that I use is in roughly three things: (1) speaker independent voice recognition and response - "Echo, set the heat pump to 72" and (2) increasingly powerful "satellite" navigation systems, which I use regularly for traffic based routing decisions and (3) language translation HELPING with my not-so-good German and Spanish.
ANYWAY... Your piece also goaded me into finally paying for Niall Ferguson's Substack! So there!
At 77 I don't have children and am unlikely to do so but my friends who do (generally with grandchildren now) are dealing with this issue on a regular basis.
I'd say it's worth spending some time interacting with an LLM to get a sense of what they can do. I'm also an EE and do lots of programming. I understand the math behind these models, at least in theory. But they are astounding. They're not Eliza, or even Alexa with an upgrade. Eliza was algorithmic, with a tokenizer and pre-programmed scripts for processing conversation and responding. You could debug Eliza; it was a program. Alexa and Google Maps are more apropos--optimization machines with neural-network processing, but they're purpose-built: Alexa as a speech processor and Google Maps for route determination. Huge neural nets like ChatGPT were only made possible by massively parallel high-speed computational hardware originally intended for rendering video game scenery. And they are amazing--they could easily pass the Turing test against a naive interlocutor. (Certainly there are ways to trip them up, but you have to be trying. For normal, relatively short conversations, they're very human-like.)
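To make the contrast concrete, here's a minimal sketch of Eliza-style scripted pattern matching--my own toy illustration, not Weizenbaum's actual script. Every response comes from a hand-written rule you can read and single-step through, which is exactly why Eliza was debuggable:

```python
import re
import random

# Hypothetical, drastically simplified Eliza-style script:
# each rule is a regex plus canned response templates.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(utterance: str) -> str:
    """Return a canned response by matching the first applicable rule."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
print(respond("Tell me about AI"))   # no rule matches, so a stock reply
```

An LLM has no rule table like this to inspect; its behavior lives in billions of learned weights, which is why the two feel so different to work with.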
For programming, yes, son of IntelliSense is a good description, but IntelliSense uses deliberately added metadata and real-time compilation. Copilot will anticipate what you're trying to do from just a variable name and auto-populate a whole function. It's able to write simple code snippets ("Write a class to hold temperature units with conversion routines built in") and it provides invaluable syntax and API support. It's imperfect, but to a skilled programmer it's probably a 30% increase in productivity.
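For a sense of scale, the kind of snippet that prompt asks for is roughly this--my own sketch of a plausible answer, not actual Copilot output:

```python
class Temperature:
    """Holds a temperature internally in Celsius and converts between units."""

    def __init__(self, celsius: float = 0.0):
        self.celsius = celsius

    @classmethod
    def from_fahrenheit(cls, f: float) -> "Temperature":
        return cls((f - 32.0) * 5.0 / 9.0)

    @classmethod
    def from_kelvin(cls, k: float) -> "Temperature":
        return cls(k - 273.15)

    @property
    def fahrenheit(self) -> float:
        return self.celsius * 9.0 / 5.0 + 32.0

    @property
    def kelvin(self) -> float:
        return self.celsius + 273.15


# Example: 72 F is about 22.2 C / 295.4 K
t = Temperature.from_fahrenheit(72.0)
print(f"{t.celsius:.1f} C, {t.kelvin:.1f} K")
```

Trivial stuff, but it's exactly the boilerplate these tools knock out in seconds--and exactly the kind of detail (conversion constants, edge cases) you still have to check, since they're imperfect.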
Like, these machines are the future now--the jetpacks we were promised. We have forgotten how to be amazed. It's easy to point out the flaws and engage cynically, and certainly there's no guarantee that AI will be a net boon to humanity. (A lot of the cynicism comes from AI being used in ways that showcase its flaws, e.g. customer service, where the bots are good at helping you with the problems you could solve yourself but bad at solving the problems you're actually likely to call customer service for.) I suspect that one of two things will happen: 1) AI will sell out to advertisers (it's hard to believe this hasn't already happened) and enshittification will make these tools awful and useless, or 2) we'll start to get a sense of what they're good at and direct their use more toward those things.
Thanks for such a detailed reply.
I fully realize that Eliza was completely programmed and that Alexa, Google Maps, and - I suspect - Google Translate are more neural-network based. [I won't wander down into the rabbit hole of issues with neural nets though!]
I also realize that massively parallel processing with GPU-type technology is used for much of the LLM work. [Parallel processing in general has been an interest of mine for many years, going back to my ancient EE degree in "computer design."] I also know of at least one researcher claiming that a lot of this is possible with much less heavyweight computation. Others say that claim is bogus.
Speaking of bogosity (international unit: the micro-Lenat), I actually worked in the same organization as Doug Lenat back in the '80s and almost went to work for him directly about 10 years ago. RIP - interesting guy, but kind of left in the weeds by recent developments.
New technology is certainly interesting to me but I've been through enough hype-cycles to be pretty cynical about the breathless claims - especially from "tech" reporters who I suspect were unable to pass science courses in school. [Yeah, that's a very unfair generalization - especially from a guy (me) who flunked way too many courses in college for... reasons...]
Meanwhile, having started with paper tape, I have personally wanted better tools for software development for well over 50 years, but the automation tools that were touted in the '80s were awful. I've had mixed experience with IntelliSense - mostly because I've never been paid to learn and use that environment.
It's kind of embarrassing to say that the number-one productivity tool for me has been color syntax-highlighting editors - from Emacs to Visual Studio Code!
I did spend four years using Lisp Machines based on work at MIT (both Symbolics and LMI). While weird, it was easily the fastest prototyping environment I ever used. It just failed in about every other measure - especially cost! The companies funding us eventually screamed at us to STOP THAT and made us switch to UNIX and the X Window System which was pretty painful in the late '80s.
About 10 years ago I took a detour into Ruby on Rails as a text-driven process that seemed to offer massive productivity leverage. That ended badly when an application written by contractors I hired turned out to be completely unfathomable and utterly failed when updates were needed. We ended up tossing out the whole thing and starting over in JavaScript and PHP. Oops.
I also spent a lot of time over the years swimming around in first Perl and then Python. Fast prototyping but, like Ruby, too easy to be unreadable later.
NOW... I'm tempted to wade into Claude etc. but, having no real need at age 77 and no prospect of actually getting paid, I will probably sit this one out. Maybe not.
Meanwhile, I continue to be astonished at the terrible quality of so many systems - 99.9% web based - that I have to use, especially for financial and medical matters.
You can really tell the ones that were outsourced with poorly written specs to some overseas low bidders. Looking at you, insurance industry!
The cloister/starship distinction sounds good, but I think it is better if you only get to go into the starship after hours of hard labor, and a single mistake locks you out. Learning is just evolution of the mind, and you need death - i.e., failure - for that to happen. If we get rid of the absurd idea that facts and values can be divorced, we can start explicitly teaching values again, since what evolution is learning is the same thing correct values represent: how to survive in a chaotic, entropic cosmos.
The notion of a “cloister” might be somewhat effective today, given most kids aren’t autodidacts and don’t use LLMs for optimum learning. But if you believe that we will reach AI capable of surpassing human teachers in certain domains, then the line between pedagogically valuable tools and today’s homework solvers will likely blur - at least in the interim. And since 1:1 teaching is evidently the gold standard, this is definitely not something we want to shy away from or hinder the development of.
One aspect of the primer that I’m most excited to see realized is discovery learning - there’s that excellent scene with the chain logic puzzles that paints this picture perfectly. Nell is never directly taught logic. Instead, she discovers how AND, OR, XOR and NOT gates behave by experimenting within a system that rewards understanding through exploration.
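A tiny sketch of what that kind of discovery environment could look like in code--my own illustration, not anything from the book: instead of being shown truth tables, the learner pokes at opaque gates and notices the patterns themselves.

```python
from itertools import product

# Hypothetical "mystery gates": the learner never sees these definitions,
# only the outputs produced when inputs are fed in.
GATES = {
    "AND": lambda a, b: a and b,
    "OR":  lambda a, b: a or b,
    "XOR": lambda a, b: a != b,
    "NOT": lambda a, b: not a,  # second input is simply ignored
}

def probe(gate_name: str) -> None:
    """Let the learner try every input pair and observe what comes out."""
    gate = GATES[gate_name]
    print(f"Gate {gate_name}:")
    for a, b in product([False, True], repeat=2):
        print(f"  {int(a)} {int(b)} -> {int(gate(a, b))}")

probe("XOR")  # the 0,1,1,0 pattern is there to be noticed, not taught
```

The point is the inversion: the system rewards the learner for forming and testing their own hypotheses rather than handing over the rule up front.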
Thinking about the intended purpose of the primer - the stance of self-reliance - I’d posit that giving more children the chance to experience the art of discovery greatly increases their chances of developing agency. That could come in the form of discovering Boolean logic, Pythagoras’ Theorem or the infinitesimal as a tool to do new math. That experience, repeated enough, can shift how learners approach future challenges, fostering a deeper sense of intellectual independence and agency.