I. But my map says you can’t be mugging me
In the graphic novel Logicomix, the authors puzzle over the “curiously high rate of psychosis in the lives of the founders of logic.”1 The book, which follows the life of the logician Bertrand Russell, documents his and other founding logicians’ encounters with madness. The famed set theorist Cantor rambles in an asylum; the “father of analytic philosophy” Gottlob Frege unleashes paranoid rants about the Jews; Gödel dies because “he refused to eat out of fear that the hospital staff was attempting to poison him.”
Was there actually a high rate of psychosis among the founders of logic? Some counterarguments suggest these examples are literal nut-picking, but it’s hard to say conclusively. More on that later.
Regardless of Logicomix’s conclusions, there’s a line in it that I keep revisiting.
One of the authors, Christos, is traveling through his childhood home of Athens with a friend. When the friend tries to find their location on a map, Christos dismisses the need: “Please! I walked to school through these streets for six years! The map of the area is engraved in my neurons.”
But the city has changed since Christos’s childhood. Landmarks have been razed and replaced; the streets and squares are unrecognizable. A neighborhood he once safely walked through on his way to school, now populated with prostitutes and con artists, has become decidedly less kid-friendly. Misled by overconfidence, he finds himself lost and then, to really rub it in, gives away his phone to one of the con artists.
Reflecting on his mistake, Christos says:
[I thought] I knew an area of Athens just because I had, as I said “its map was engraved in my neurons.” Good grief! And then, strangely this brought back my earlier comment to Anne on “map-makers” and the heroes of this “Logicomix” we’re trying to make. And I thought: “Sure, Frege, Russell, Whitehead were excellent map-makers, but maybe eventually they confused their reality with their maps.”
For anyone who strives to see the world clearly, I find this a crucial warning. However good your map of reality may be, it’s just that: a map. It’s a simplification of the world’s intricacies, a distilling of infinitely complex things down to broad categories and rough borders. If you become too attached to it, you can lose sight of the way things really are. You contort reality to match your map.
II. What are you looking at? Have you never seen a hypocrite before?
I write about this because, in my previous post, I made this exact mistake.
The post was about two things.
One, it was about “math people” — people who see the world through math, logic and reason — and their struggles to account for other people. They (really, we, since I’m also a math person) can focus so deeply on logic and technical problem-solving that we neglect everything else. We forget to tell other people what we’re doing, or persuade them that it’s worthwhile, or even consider their responses in our conclusions and decision-making. We’re shocked, shocked I tell you, when other people are unaware, confused, frustrated, or even outraged by our ideas and actions.
Two, the post was about the OpenAI board. When the OpenAI board-Altman debacle exploded, it fit snugly into the first topic. The board of the premier AI company — surely math people if there ever were any! — had made a decision that baffled Altman, OpenAI employees, and the general public. Clearly, I concluded, they had made the classic math people mistake of failing to take into account how other people would respond to their actions.
I’d long planned to write about math people, well before the OpenAI drama. But the OpenAI drama felt like the perfect, globally-recognizable example of a trend I already wanted to discuss. So I mashed the OpenAI story into the math people post and published it.
Comments started to come in on Reddit. Here’s one from u/whoguardsthegods:
I don't have any insight into things, so maybe this is true. But there are alternative explanations:
The board considered the reactions but misjudged how extreme it would be. Perhaps they expected 20% of the company to revolt but not 95%.
The board knew the risks but pulled the trigger anyway.
The board knew people would want an explanation but didn't think any of the messages they proclaimed publicly would help their cause.
This interpretation of events here: https://old.reddit.com/r/MachineLearning/comments/1812w04/openai_we_have_reached_an_agreement_in_principle/kabk73s/.
Maybe the board was exactly as obtuse as you think but maybe not.
Other comments had similar underlying messages, like this one from u/gwern:
You don't understand what [the OpenAI board] had available, what options were cut off by 'the game', what they knew when, what they wanted, what they were willing to do, or what parts came as a surprise to everyone - but you're willing to confidently evaluate them anyway.
The obvious hypocrisy hit me. I don’t know anything special about the OpenAI board. I had no secret insider knowledge of their motives or decision-making. I didn’t even know if they were “math people”. But I confidently, offhandedly felt I could diagnose the entire situation.
My post advised math people to account for other people’s perspectives, motives, and reasoning, so naturally I… reached some sweeping conclusions despite having minimal insight into the OpenAI board’s perspectives, motives and reasoning. Whoops.
Scott Alexander has a rule that goes something like, “Make sure that you are not committing the exact mistake that you are warning others about, in the most hypocritical way possible.”
I knew this rule. I even thought to myself before, during and after writing the post: am I about to step onto a hypocrisy landmine? And yet I blindly and confidently strode forward thinking: Nope! This giant cultural event just happens to perfectly match what I wanted to write about anyway! And stepped squarely on a landmine.
It was kind of surreal to re-read the piece, which I wrote literally days before, and cringe. I was so young and foolish two days ago, I thought. How could I miss the hypocrisy?
III. Watch out for the grooves
I have a pretty good idea how it happened.
Although, since I’m trying to avoid making the same mistake twice, please caveat all conclusions in this post with “I mean, maybe, maybe not, I could later discover I’m being a total hypocrite, please keep my writing safe from logical fallacies, o Bertrand Russell.”
I still stand by my previous post’s description of, and advice for, math people. (Someday, I might excise that portion into a standalone piece.) We prefer formal logic and technical problem-solving, and that sometimes results in treating other people’s thoughts, preferences, and reactions as an afterthought. This pattern is part of my little map of reality.
When I saw the OpenAI board make a decision that baffled the public, it fell neatly into place. They were surely math people who had failed to account for other people’s reactions, just like I had seen happen a million times. I wanted what happened at OpenAI to match the pattern I already believed in. It felt right; it made sense of the situation; it matched my model of the world.
In short, confirmation bias. It feels satisfying to slot things into place on our maps. It makes the world feel legible, and makes us feel smart for being able to read it.
But this bias gently nudges us to force things into places they don’t quite fit. We end up miscategorizing and misunderstanding events, causes, and other people. We rearrange reality to conform to our map so that we can feel good about our mapmaking skills.
I’m not sure if, as Logicomix suggests, “confusing your map for reality” is a particular problem for logicians/math people. Or, for that matter, that doing so leads to asylum-level insanity. Logicomix cites the madness of several of logic’s founders as evidence. Other commentators counter that, statistically, logicians weren’t especially likely to go mad. Maybe, in yet another bit of irony, the writers of Logicomix itself were so enamored with their ‘logic and madness’ theme that they contorted the world to fit into it. Hard to say!
Nonetheless, I think we all, math people or otherwise, can benefit from recognizing the limits of our mental models of the world.2 Consciously or unconsciously, we identify patterns, rules and categories within the world and engrave them into our minds. If we’re not careful, we chisel these engravings so deeply that everything starts to fall into their grooves. I did it here, and I’m sure I’ll eventually do it again.
This is a small blog, so the stakes for this error are so, so low. Still, it feels important to acknowledge that I, someone who prides myself on being MAXIMALLY REASONABLE, happily contorted reality to match my map. It’s a little example of what not to do, one that hopefully other people and future me can learn from.
(I mean, maybe, maybe not, I could later discover I’m being a total hypocrite, please keep my writing safe from logical fallacies, o Bertrand Russell.)
Itself a reference to Gian-Carlo Rota, likely this line from “Indiscrete Thoughts”: “It cannot be a complete coincidence that several outstanding logicians of the twentieth century found shelter in asylums at some point in their lives: Cantor, Zermelo, Gödel and Post are some.”
Also, I wholeheartedly recommend Logicomix regardless. Even the other commentators who quibble over its conclusions about logic ⇒ madness do too.