If You Want to Know What Our AI-Filled Future Might Be Like, Just Look at Chess
Where else have we been using smarter-than-human AI for decades?
Another week off from the chess club, so I’m taking this opportunity to… write an essay. This one isn’t even really about chess. I promise: even if you don’t know anything about chess, this essay could appeal to you. I had originally figured I would pitch it around to some magazines, but… I have a Substack! Why bother with that excruciating process when I can just write and publish it myself?
Let me know what you think and if you’d like more of my musings about the world beyond chess. I have plenty of them.

“Why are we doing any of this work at all?”
Okay, so. One of the biggest questions being asked in our society right now is this one: what the hell are human beings going to be doing in the future?
We’re asking this because of generative AI, which now appears to be able to do so much that, previously, we thought only humans could do. It’s reasonable to assume that, no matter what you think of “AGI”1, this trend will only continue, with AI systems like Claude and ChatGPT getting better, more versatile, and more integrated into everyday life.
Ezra Klein encapsulated this dilemma well when he recently wrote: “I don’t know what the economy or society is going to want from [my children] in 16 or 20 years. And if I don’t know what it’s going to want from them, what it’s going to reward in them, how do I know how they should be educated? How do I know if the education I am creating for them is doing a good job? How do I know if I’m failing them? How do you prepare for the unpredictable?”
Great questions! I have the same concerns about my ten-month-old son, though at the moment, “education” mainly consists of watching him learn to put a wooden egg in a wooden cup. But I’ve already started wondering what skills, ideas, and philosophies might best prepare him for a world in which highly intelligent AI is everywhere.
There’s a temptation, when considering this hypothetical world, to choose one of two different paths: 1) panic!!!, or 2) throw up your hands and surrender to the overwhelming unknown-ness of it all. As John Herrman points out, we don’t really know how much things will change: hyperbolic predictions of the “end of work” and so on have accompanied most of the major innovations of the modern era.
At the same time, it’s pretty clear that things will change, at least to some extent, and anecdotally, this is already happening: Klein cites the vertiginous drop in students’ reading abilities, even at the elite-college level.2
So what should a parent do? Should we totally re-invent the ways we’re educating our children? Or should we trust that the skills that have always mattered will be the ones that continue to carry the day in the future? Even answering this question with any confidence is basically impossible, because who can say that we’ve really been doing a good job of educating our children in the past, anyway?3 There are as many pedagogies as there are religions, and about as little chance that we’ll ever agree on one being “right” above the others.
Hands again thrown up, we’re back to admitting that this is a “personal” decision, highly susceptible to individual preferences. The future is uncertain, every child is unique, and every parent is dealing with a particular set of circumstances. But there is a telling issue that Klein raises, and I think it’s the one that offers a path forward in this discussion:
If you have this technology that not only can but will be doing so much of this for you, for us, for the economy, why are we doing any of this work at all? Why are we reading these books ourselves when they can just be summarized for us? Why are we doing this math ourselves when a computer can just do it for us? Why am I writing this essay myself when I can get a first draft in a couple of minutes from Claude or from ChatGPT?
Why indeed? The problem isn’t that we have to ask this question: the problem is that we haven’t been asking this question all along. Why do we do the things we do? Why are they worth doing? Why shouldn’t we just outsource them to others, whether they’re machines or people who we can coerce? Why read? Why write? Why live?
Is there such a thing as “AI art”?
That feels better. We’ve taken the conversation from the realm of science fiction — how do we prepare for a future that we can’t possibly know? — and back into territory that we’ve been walking as long as there have been Homo sapiens. These are not new questions. They’re old questions in a new wrapper. And the answer has to be the same: we need to decide that doing these things is worthwhile. Not to the economy, or to capitalism, or to some nebulous concept of society. To us. As human beings. We need to consciously choose to do them.
What might this look like, in the age of AI? Well, chess offers a great case study. Chess is distinct among our endeavors in that we’ve had better-than-human AI for decades now. Deep Blue beat Kasparov in 1997, but by 2006, programs that you could run on your home computer were just as good. And yet: chess is more popular than it’s ever been.
This should be shocking. It seems to contradict literally every other conversation we’re having about AI right now. The truism usually holds that, if AI becomes better than us at something, we must cede the field. This is implied by Klein’s musings on reading and essay-writing among students, but it’s even better represented in the discussion about AI art.
Generative AI is capable of making a great deal of “art” that basically resembles human production, and it will likely only get better at doing this. Is this art good? I don’t know: is it even art? This takes us into deeper waters: we aren’t having a conversation about what is possible so much as a conversation about what is worthwhile and desirable. AI can already make a painting that could sell for millions of dollars, and it can write fiction, and it can make crude approximations of film and TV that will one day be indistinguishable from the real thing. But does anyone… want that?
In my personal opinion, culture as a whole won’t be interested in AI art until we create compelling AI “artists” with unique personalities, perspectives, and storylines — because anyone who pays even the slightest bit of attention to the way that we consume and engage with art knows that it’s just as much about the artist as it is the art itself. Nobody would be one percent as interested in Taylor Swift’s music if Taylor Swift didn’t hover over it all as a sort of grand overarching meta-narrative.4
I’m not saying this is impossible. I’m saying it hasn’t happened yet, and I think there are some elemental challenges inherent to it — enough that, at the very least, we will continue to have economic reasons for valuing human-generated art. But what we’re talking about here, also, is the consumption of art. AI has absolutely no bearing on the experience of creating art. If we believe that this experience has any inherent benefits or appeal, then it can withstand the encroachments of AI. It might be impacted by these encroachments — especially if you’re trying to make money from doing so — but it won’t be destroyed by them.
Compare this to, say, truck-driving. Pretty soon, AI will be able to drive long-haul trucks, totally eliminating the need for humans to do this. One knee-jerk reaction to this might be, “That’s bad.” Jobs disappearing! But it turns out that truck drivers’ suicide rate is 15 times higher than the general population’s. So maybe it isn’t good that human beings have to be long-haul truckers. Maybe it would be nice to outsource that to AI. Maybe that is a field we should cede, and maybe we can find other, more life-affirming ways of spending time for those people, and anyone else forced to work some brain-rotting job just so they can pay the bills. Wouldn’t that be good?
Maybe those people could make art!
In sum, what I’m saying is:
AI will make some jobs obsolete, but
just because AI can do something “better” than us doesn’t mean we need to stop doing it, and
this could be a good opportunity for us as individuals and a society to reconsider some of our assumptions.
I thought you were going to talk about chess
But again, this is just an opinion, right? This is my perspective.
Well… sort of. This is why I’m bringing up chess. Chess demonstrates my point. Despite having AI that is superior to human beings at every aspect of the game, we don’t watch AIs play chess. We watch humans play chess, and we follow along with the human drama that comes with it. These humans, or a few of them, anyway, still make a living from doing so. And despite AI being better than us at chess, we keep playing it.
But why?
Because we like to. It’s worth it. We derive some inherent value from doing so.
Again: why do we do anything? It’s either because we have to — compelled by society, necessity, or biology/psychology — or we like to. AI is going to change the balance of this equation, but it isn’t going to impact the essential elements. That’s the argument I’m making with regard to art, and chess proves it. We like to play chess, so we play chess.
Why do we read? Is it because we should, because we have to? Then, sure, we’re going to read less. But I don’t read for that reason. I don’t even write for that reason anymore. I do it because I like to, because I derive meaning and satisfaction from it — because, to introduce a nebulous and controversial concept, it feels good for my soul.
And maybe we should be a bit more focused on communicating to young people that you should read and write because you enjoy it rather than because you must in order to get into college, or whatever. Maybe, in fact, AI could be an opportunity for us to shift the bulk of our thinking toward the question of what is worth doing rather than what we should do. Maybe there isn’t actually that much value to writing formulaic five-paragraph essays for no reason other than “you will one day have to for the SATs”? Maybe it would be better to teach kids to write poetry, which facilitates creative expression and helps cultivate innovative thought, and not grade them on it?
What if we could even see education as an incubator for our most human traits — creativity, ingenuity, expressiveness, emotionality, spirituality — and admit that machines have been and will continue to be better than us at the more mechanistic parts of life? Might that not just be good for our souls, but actually good for the economy and society?
Of course, this would require us to totally rethink the way that we raise young people and how we motivate them and what we expect of them.
But… maybe that’s not such a bad idea?
That all being said, chess is also an excellent case study because it’s not like we’ve dismissed AI entirely. Quite the contrary: it’s become integrated into literally every aspect of the game. Commentators use AI to comment on the games. Players use AI to review their performance. And some people do use AI to play for them — which most of us call “cheating.”
Once again, I’m happy to speak from my own experience — I use AI every day to review my games, and I find it very helpful. But I also fear that it’s a crutch, and I wonder if I might be better served by doing some of the thinking that the AI is doing for me. And going further, I’m quite aware that I view the AI’s feedback as tantamount to the final word. It’s the truth, because it’s coming from an intelligence so superior to mine that I don’t even feel like I can question it.
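(For the technically inclined: this kind of review is easy to script yourself. Here’s a minimal sketch, assuming the open-source python-chess library and a Stockfish binary on your PATH — the PGN filename and the “1.5 pawns” mistake threshold are just illustrative placeholders, not anything official.)

```python
# A minimal sketch of engine-assisted game review.
# Assumes: python-chess installed, Stockfish binary on PATH,
# and a hypothetical "my_game.pgn" file to analyze.
import chess
import chess.engine
import chess.pgn

engine = chess.engine.SimpleEngine.popen_uci("stockfish")

with open("my_game.pgn") as pgn:
    game = chess.pgn.read_game(pgn)

board = game.board()
prev_cp = 0
for move in game.mainline_moves():
    san = board.san(move)  # record notation before the move is played
    board.push(move)
    info = engine.analyse(board, chess.engine.Limit(depth=18))
    score = info["score"].white()          # evaluation from White's POV
    cp = score.score(mate_score=10000)     # centipawns, with mates mapped
    # Flag moves that swing the evaluation by more than ~1.5 pawns.
    if abs(cp - prev_cp) > 150:
        print(f"Possible mistake: {san} (eval now {score})")
    prev_cp = cp

engine.quit()
```

That’s more or less all the “review” amounts to mechanically; the hard part, as I’m arguing here, is what you do with the output.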
I think this is the inflection point we’re going to face as a society. We’re going to have to decide how much of our human functioning we want to give up to AI. Because we’ll be able to let go of quite a lot, and the allure will definitely be there. On top of that, we’ll be tempted, even inclined, to see the decisions these AIs make as essentially indisputable — the word of God.
But they won’t be. They aren’t even in chess. Engines’ thinking has evolved over the years as they’ve gotten stronger; they’ve come to approach the game differently, changing, as a result, the way that the best players play. At each stage, we tended to assume that the engines’ current style was the best way to play chess, and later versions proved that it wasn’t. This will be true of AI in any given sphere, and the reality is that most areas of life are not as bounded as chess is, meaning they’ll be more open to interpretation and debate. God only knows how the computers will be playing chess ten years from now.
Some questions have no right answer
Another example: let’s say some country appoints an AI leader. Conceding that it might be able to make some judgments with more “accuracy”5 than a human president — on the economy or infrastructure, or any other domain in which data reigns supreme — what about ethical ones? Is there a “right” approach to capital punishment? Or to a geopolitical decision akin to the Trolley Problem, where different permutations of lives are at stake?
This all ends up leading us back to conversations around first principles. Do you believe in a soul? God? Is consciousness epiphenomenal, or a fundamental aspect of the universe? Where do our morals and ethics come from?
Ha! Those are tough ones! And if we have to answer them correctly to ever use AI well in certain capacities, then… maybe AI will never quite be qualified to fulfill these roles. But that doesn’t mean it won’t be doing a lot of things, and soon.
So this leads me back to a few basic lessons that chess can teach us about AI:
AI will not replace humans in practices that humans find worth doing, even if it becomes superior to us at those practices.
AI can help us do these things better.
But we need to be careful how much of our agency we cede to AI if we really want to get better at these things.
Let me give another example. I’ve started using AI to analyze my dreams from a Jungian perspective. It’s very good at this, and the insights it has been providing have been really impressive.6 But! I’m doing this with the goal that it will get me back into analyzing my dreams myself. I hadn’t been doing this on my own anyway, so the AI isn’t replacing a valuable activity of mine; in that sense, it’s providing value.
At the same time, though, if I never start to practice this myself, I won’t truly be integrating the lessons I’m learning from the AI: I won’t absorb the meaning at a deeper level. It will remain outside of me, only accessible through the medium of the technology. This is the whole purpose and pedagogy of my game-review practice as well: I use the AI to reveal insights to me that I then try to apply on my own.
We’re all students, and we all have a choice to make: do we learn to pass the test? Or do we learn to truly absorb and inhabit the knowledge? This is going to play a huge role in how we engage with AI going forward.
But what about cheating?
There is an interesting wrinkle to all of this: cheating.
A lot of people cheat at online chess, which is hilarious to me. Why bother? As I’ve written about before, your online chess rating matters very little. Obviously, there are deep-seated questions of ego and identity to be debated here, and I don’t really have the energy; this essay is already pretty long. The important point here is that the rise of AI in chess has caused a huge explosion in cheating.
The question of cheating is also implied in Klein’s piece. A significant portion of the debate over ChatGPT and so on has revolved around its ability to help students cheat on their homework. I mean, you have to be realistic about the fact that assigning anything to be done at home basically invites students to use AI to help them do it.
Some people, like Tyler Cowen, argue that this should actually be encouraged.7 And I don’t think he’s wrong. These tools are here, and they’re not going anywhere. People are going to keep using them. That is guaranteed. If we try to act like that isn’t the case, and like students should just keep behaving as if these tools didn’t exist, we’re setting ourselves up for failure.
This gets us back to the central issue of this piece.
Okay, so
What we need to do is create a compelling argument for why students shouldn’t use AI in certain situations. We need to identify the skills that we’re helping them train, and how the use of AI might prevent the development of those skills. We need to make a compelling argument that there are certain aspects of our human-ness that are worth cultivating.
We should have been doing this all along! But we haven’t been. So see this as an opportunity for us to course-correct: instead of insisting to kids that they have to learn something because they just have to, let’s teach them why various skills and techniques and modes of thinking are important, what they can contribute to their experience of the world, and what the point of a good life actually is. This is the whole point of education! Just ask Plato!
And again, if you want proof that this works, just look at chess: more popular than ever, even though we humans are totally washed next to the god Stockfish. We play chess because it’s fun, because it offers a coherent and progressive realm for improvement, and because it has a deep tradition with very human emotions and priorities baked in. It doesn’t matter that AI does it better than us. It’s worth doing.
I think that’s enough for now. But please, let me know if you agree, disagree, or whatever. We’re just starting to have this discussion, and no doubt it’s going to get considerably more interesting and complicated as these tools develop.
My attitude tends to be loosely in line with that of Max Read and John Herrman: nobody actually knows what AGI is, and the term is basically meaningless, but that doesn’t mean there isn’t something very significant and impactful going on here.
I have to admit that I’m pretty suspicious of this kind of thinking, though. Once again, the idea that kids these days are dummies who are inferior to their predecessors is a long-running trope, dating back, quite literally, to the ancient Greeks, who believed that there had been a prior Heroic Age, when men were real men and women were real women and gods real gods. I’m not saying there isn’t any truth to the idea that our habits are degenerating: I’m just saying that we have to be very mindful of the lens through which we’re looking at the problem. Also: remember CliffsNotes?
I, for one, would argue that we have not been, and that the most valuable part of my time at Duke, one of the top schools in the country, came from working at the student newspaper, via the lessons I was taught by my peers, rather than through my “education.”
This is not, at all, a knock against Taylor Swift’s music. I just think that pop-cultural artifacts tend to be inseparable from the narrative context of their creation and consumption.
Of course, even this requires us to agree on the factors we’re trying to maximize and the values we’re prioritizing — i.e. economic growth versus equity among citizens or treatment of the environment.
Bet you didn’t think this piece would get into Jungian dream analysis. Such are the benefits of writing your own Substack!
Cowen writes quite a bit about this subject in his book Average Is Over. He particularly emphasizes the coming importance of human-computer teams, which I think is a great point. But the approach feels a bit different than the point of this piece, which is about why certain things are worth doing rather than how to do them most efficiently and effectively.