On Mistaking the Map for the Territory
Do you know what you don't know? Or do you not know what you know?
“Let’s see how I play when I think my moves might actually be good!”
This is how I ended my piece last week. Which is hilarious. Because do you know how I play when I think my moves might actually be good? REALLY BAD.
I mean, that’s based on a sample size of one, and I am exaggerating the results for comic effect. But I did suffer the worst OTB defeat of my short career last week. Fortunately, I have a sense of what happened, as well as a larger takeaway that I/you/we can draw from it. And by larger, I mean, I’m going to get back into “state of the culture” thoughts this week. Hooray!
What happened?
My game last week was against a nine-year-old who’s already rated 1800. That makes him the 30th-highest-rated nine-year-old in the country. That’s… pretty good. And it turns out I’m in strong company as far as struggling with nine-year-olds goes, considering that Magnus just drew with one on Titled Tuesday.
I had the black pieces, and I played the Caro-Kann, because I’m still in the process of getting my new Black opening up to OTB speed. My opponent met this with the Accelerated Panov, an opening that I’ve only faced once online and never IRL. On my seventh move, I found myself here:
What do I know about this opening, if anything? Well, I know that in the Caro-Kann, I tend to try and get my light-squared bishop out to g4 before I play e6, if allowed. But I also know that in the Panov, you often don’t develop the light-squared bishop before playing e6. But, but, I also don’t know how Panov-y the position we’re in actually is — this isn’t a line I’m familiar with, and my queen move to a5 was pure improvisation, borrowing from the Scandinavian.
So I thought, what the hell, I’ll play Bg4. Still seems good. Pin the knight, get the bishop outside of the pawn chain. I don’t want to play moves mindlessly from another line just because I vaguely remember them. I wrote about that!
How’d this go?
Egads. Move eight and my goose is already cooked. After playing Qb3, White has a nasty discovered attack. I can defend either the f7 pawn or the b7 pawn. If I defend b7, I allow a brutal check on f7. But if I defend f7, then my rook is a goner. I’ll spare you my decision because it really didn’t matter: the kid went on to thoroughly annihilate me. There are still pieces of my ego scattered around the San Marino Masonic Lodge. At least it was over quickly, and I got a good night’s sleep afterward.
So what happened? I had known what to do in the Panov: leave the light-squared bishop at home, at least to start. Play e6, Be7, 0-0, and Nc6. Maybe play b6 and develop the bishop to b7. But I didn’t know why you did that. I hadn’t understood that White had this bishop-queen idea available thanks to the open c-file, which is generally not a feature of White’s play in the Caro-Kann.
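To make the trap concrete, here is a representative line that illustrates the pattern (an illustration only, not a reconstruction of my actual move order): 1.e4 c6 2.c4 d5 3.exd5 cxd5 4.cxd5 Qxd5 5.Nc3 Qa5 6.Nf3 Nf6 7.Bc4 Bg4? 8.Qb3!, and suddenly the queen and bishop form a battery against f7 on the a2–g8 diagonal while the queen simultaneously hits b7. With the light-squared bishop already committed to g4, Black can’t cover both threats, which is precisely why, with the c-file open, the bishop belongs at home.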
Without knowing why, the information was easy to disregard. It didn’t prove resilient or adaptable. Instead, by dismissing prep that I mistakenly judged irrelevant, I reached for other prep that I mistakenly judged relevant — even though it wasn’t. This wasn’t just incomplete prep: it was bad cognition.
The Illusion of Explanatory Depth is back
A couple of weeks ago, when I tried to understand the difference between good and bad prep, I wrote about the Illusion of Explanatory Depth: “our belief that we understand more about the world than we actually do. It’s often not until we are asked to explain a concept that we come face to face with our limited understanding of it.”
In the context of that piece, I was using the Illusion of Explanatory Depth to illustrate why having incomplete opening knowledge can get you in trouble: you think you know more than you do, and then you default to the fragments you have without properly considering the situation you’re in.
But this situation wasn’t exactly that: I actually did know what to do. If I had just played according to the fragments of prep that I had in my mind about the Panov, I would’ve been fine. In fact, if you had stopped the game at that point and asked me what I knew about the position, I probably would’ve underestimated my understanding of the situation.
I already identified the reason for this: I didn’t know why you were supposed to play that way. Now, this connection comes as no surprise to me. If you read basically any of the literature on how to learn new material in a way that helps you retain and integrate it, it always comes back to the necessity of comprehension over rote memorization. You see it in the method of elaborative interrogation, you see it in the self-explanation effect, and you see it in the generation effect.
They all provide different versions of the same takeaway: if you can explain why something is true, you will be able to recall it far more easily and deeply, and apply it in practical situations far more effectively, than if you merely know it mechanically.
Again: not a novel insight. As I touched on in my previous piece, every good chess player will remind you that really knowing an opening is more about having a sense of the middlegame plans than it is memorizing a bunch of lines. But what struck me as somewhat noteworthy was the disconnect I experienced between “recalling” the “plan” — meaning the general moves — and grasping the rationale behind them.
Turns out, I had run head-first into one of humanity’s most enduring cognitive traps: “mistaking the map for the territory.” Which made me think about just how relevant this trap is becoming in every aspect of our everyday life.
Mistaking the map for the territory
Coined by Alfred Korzybski, the idea of mistaking the map for the territory is a succinct way of describing a common trap: believing that the representation of something can stand in for the thing itself.
This is a provocative idea to apply to chess. On the surface, it’s pretty hard to distinguish the map of a chess game from the territory of the chess game. In theory, you can represent a chess game with 100 percent accuracy and no information loss by providing a map of it, whether it’s an illustration or even just a list of moves.
But what my error in the game helped reinforce for me is the sense that, in chess, the question of the map versus the territory is less a question of representation and more a question of meaning. It’s easy to think of chess as basically a math equation: you’ve got your pieces set in a certain relationship to each other, and one side might have more or less value than the other, and it’s all very literal and objective. You make the moves and the moves are either good or bad. But in truth, as I’ve written about over and over, chess strikes me as more like language — even poetry.
You can give a thousand people a poem and ask them what it means, and you will receive a thousand different answers. It’s the same with a chess position: if you give a thousand people a chess position and ask them to describe it to you, you’re going to get a thousand different evaluations. Many of them will likely have a great deal in common, particularly depending on the rating range of the respondent, but none of them will be exactly the same.1 Compare this to a math equation, where there is no such thing as interpretation: the meaning is the equation, and the equation is the meaning.
The more I play chess, the more I believe that the task of every chess player is to keep going deeper — to keep expanding the amount of information they can derive from any given position, and to keep strengthening the extent to which they can synthesize this information into a coherent interpretation of the game… allowing them to see what the other player doesn’t.
At its most fundamental, chess becomes an almost existential art: in any given game, your goal is to put the pieces in the places that most express their essential purpose. Where can the knight be the best version of a knight? The queen the best version of a queen? The bishop the best version of a bishop? And so on.
To do that, you have to grasp the essential nature of each piece and the environment of the board. It isn’t just about knowing some moves: it’s about intuiting the heart of a system. It’s about understanding what the moves mean in the context of this larger structure — how all of the words add up to make the poem.
And I don’t think this kind of thinking has ever been more relevant to the world at large than it is now.
The system only dreams in total darkness
A month ago, I wrote about why I think chess has a lot to teach us about the future of life with AI. I won’t reiterate the points I made in that piece, but suffice it to say that I continue to think often and at length about what role AI will and should play in our lives, and also about chess — so, inevitably, these two things end up colliding.
What’s the connection here? Well, you might have noticed that there is a debate — just beginning, but sure to only grow in ferocity — about what using AI is doing to our brains. The latest salvo came in the form of an MIT study suggesting that using ChatGPT is making us stupider. I tend to agree with Tyler Cowen’s critique that the study itself mistakes the map for the territory — i.e. using EEGs of the brain as a stand-in for how the brain is actually working, which is a big problem with neuroscience in general — but I think the bigger trap these explorations fall into is that, in the interest of scientific replicability and measurable results, they ignore the basic guideline of common sense.
As with any tool, it isn’t a question of whether you use ChatGPT: it’s how. If I used ChatGPT to write the text of this piece, then I would become a worse writer, since I would be spending less time writing. Not a difficult idea to appreciate, and one that lines up with everything we know about how you get better at a skill or practice. But if I used ChatGPT to help me uncover material that I would not otherwise have found — or that I might have found through Google, but which I can find more easily and precisely — and then took the time to engage with those primary sources rather than accepting ChatGPT’s interpretation at face value, then ChatGPT might make me a better writer.
Let me put this slightly differently. If I ran everywhere instead of driving, I would definitely get faster and healthier. But there are places I wouldn’t be able to go, and places I wouldn’t be able to reach fast enough to make them worth going to. This is the point of having a car in the first place. By using a car instead of running everywhere, I’m making the argument that the efficiency and versatility I gain from the car outweigh the loss to my health. Now, would I be better off driving fewer places and relying on my own powers of ambulation more often? Probably! But should I get rid of my car entirely? I don’t think so! Not in Los Angeles, anyway!
So just like using a car is not in and of itself a problem if it lets you do things you couldn’t otherwise do, using ChatGPT isn’t a problem if it lets you do things you otherwise couldn’t do.2 The problem arises when we use ChatGPT in one of two ways. The first is when we let it replace the primary task, i.e. writing for a writer, coding for a coder, illustrating for an illustrator. To continue our metaphor, this would be like saying I want to become a better runner and then going for a drive instead of going for a run. If you’re a writer and you’re using ChatGPT to literally write for you, you’re going to become a worse writer — I promise.
The second way is when we accept whatever AI provides us without taking the time to understand why it might be true. Now, this isn’t just a problem because, often enough, what AI tells us isn’t actually true.3 It’s also a problem because it affects the depth of our understanding of whatever it is that ChatGPT tells us. To return to the car metaphor one last time, this would be like claiming you get as good a feel for a city from twenty minutes of driving through it on the highway as from two days of exploring it on foot.
The best way I can illustrate this without using cars is with an example from my own life: I have now, more than once, suggested to someone that they read a long piece of writing, and they’ve instead responded by telling me that they asked ChatGPT for a summary. This isn’t just not the same as reading the piece yourself: in my opinion, it’s worse than not reading it at all.4
The whole argument I’m making in this essay is that the real substance is derived from the time spent with an idea and the depth to which we understand it. Merely having the information is not equivalent — and it likely creates the Illusion of Explanatory Depth, giving you the sense that you do understand the information when, in actuality, you don’t.
“The System Only Dreams in Total Darkness” is the name of a song by the band the National. First released in 2017, this song has become fascinatingly prophetic. (“All night you’re talking to God.”) AI is a system. It cannot dream. Only humans can dream. This is crucial, because dreams have an inherent meaning to them: they are built out of symbols derived from our lived experience. Dreams are all why. The only way AI can dream is in total darkness, i.e. when we close our eyes and ignore the reality of how these systems actually work.5 This is happening. A lot. And it’s happening in far more quotidian ways than that.
TL;DR: There’s nothing wrong with using ChatGPT, or Claude, or whatever. But it’s all about how you use it. And in the way we relate to AI, we are widely mistaking the map for the territory.
Okay, so what does this have to do with chess?
What does it have to do with chess INDEED. As always, I’m trying to connect chess to the wider realms of performance, and I think this subject gets at the heart of what is going to divide the good performers from the bad in our coming (present?) future:
Do you know what you don’t know? Or do you just not know what you know?
I’m paraphrasing the Socratic idea here. But it’s more relevant than it has ever been. Human knowledge and cognition will be predicated upon the ability to step back from the AI-generated answer and take the time to grapple with it. Is it actually right? Why is it right? o3 “reasons,” so will we just outsource our reasoning to AI, or will we continue practicing it?
This tendency is already a problem in chess: it’s easy to run your games through AI Review, briefly glance at the results, and feel like you understand. But you don’t. It’s easy to read about an idea, like the minority attack or the bishop pair, and feel like you understand it. But until you see why it works, you don’t. And it’s easy to memorize some opening lines — but if you don’t have a sense of why those moves are being played, you don’t really know the opening. This is why I’ve put so much effort into reviewing my games. I want to understand why chess works the way it does. I think that’s one of the main keys to getting better, at least for me.
Is it easy to do that? No. But is it worth the effort? Yes. It’s the only way you’re going to hang with the nine-year-olds.
1. The operative difference here would probably be that some of the answers re: the chess position will be objectively better than the ones re: the poem. I mean, some of the answers re: the poem will definitely be better than the others, but not necessarily objectively.
2. For example: I use ChatGPT to generate the illustrations in this newsletter because I can PROMISE YOU that I would not be drawing them myself.
3. The hallucination issue has gotten better — but the problem is that, as long as it happens more than 0 percent of the time, it has to be treated as if it’s always possible, particularly if you’re doing anything where the accuracy of your information matters… which is most things worth doing.
4. In terms of time invested, this is actually exactly the same as the car metaphor: you might spend 20 minutes reading a summary of a book versus 20 hours reading the whole book. If you think reading the summary is equivalent to reading the book — or even worth more than the fraction implied by that ratio of time spent, which is 1.67 percent as much — you are, frankly, wrong.
If you want to read more about this idea, I thought this piece was a terrific articulation of it: