When I first heard the news of Sam Altman’s oh-so-brief ouster from OpenAI, my first thought was the same as everyone else’s: is this Sam Bankman-Fried all over again? Did Altman commit fraud? Or was this something new, something worse? Theories ran wild. ChatGPT was on the precipice of becoming sentient and he covered it up! ChatGPT already was sentient! ChatGPT was sentient, capable of feeling human love, and had entered a romantic relationship with Altman which required he step down!
Over the following days, it became clear that the ouster was a more standard power struggle, albeit one over the future of AI. Zvi Mowshowitz’s The Battle of the Board is the best rundown; I’ll quickly summarize.
There were two camps within OpenAI. One, led by Altman, wanted OpenAI “to mostly function as an ordinary Big Tech company in partnership with Microsoft,” as Zvi puts it. The other camp wanted OpenAI to move cautiously, prioritize safety, and avoid spurring an AI race. This simmering disagreement, further heated by Altman’s moves to push out an opposing board member, boiled over. The board decided his leadership was a sufficient threat to their vision of OpenAI and therefore to the development of safe AI. So they voted him out.
In announcing this, the OpenAI board left their rationale cryptically vague. They said only that he “was not consistently candid in his communications with the board” and that they “no longer [had] confidence in his ability to continue leading OpenAI.” They gave no further details: no examples of the communications in which he had been less than candid, no explanation of why they had lost confidence.
Predictably, in an information vacuum, wild speculation rushed in.
The board was left listing everything Altman hadn’t done wrong. No, Altman didn’t commit fraud, or have a sex scandal, or cover up AGI. They said that Altman had been “so deft they couldn't even give a specific example” of misconduct, according to the Wall Street Journal. Every denial made the board’s case against Altman look weaker.
The question became: what was the board thinking? How did they think Altman, OpenAI, Microsoft, and the general public would react to this seemingly unwarranted upheaval at one of the world’s most discussed companies?
While the world looked on quizzically, I felt a pang of recognition. The board’s behavior reminded me of a tendency I’ve seen before, one I’ve been guilty of. The tendency to focus so deeply on internal logic that you fail to account for other people. The tendency of math people.
I, myself, am a math person.1 Math people instinctively think of the world in the language of math, logic, and reason. They are good with, and love, numbers and computers and brainteasers and anything coldly, rigorously logical. They feel an itch to problem-solve, to create a structured model of the world. They are often mathematicians and computer scientists and engineers and physicists, although they can be anything.
Math people are, in many ways, wonderful. If there’s some technical problem you need solved, math people will, without provocation, spend hours tinkering toward the optimal solution. (They make easy prey for nerd-sniping.) They strive to see the world clearly; they often see problems in a light others cannot and find intricate solutions others might miss.
They have their foibles, sure. Sometimes they become a tad too convinced of their own model of the world. When math people want to convince you of something, they often argue with the certainty that might accompany a mathematical proof. They are confused when people disagree with them, because they have a mathematical proof that they are right, didn’t you see?
Still, many math people are open to new ideas, provided they are presented in the language of logic. Because that is the ultimate arbiter: logic.
A related story. In my first post-college job, my director wanted to merge two similar-but-distinct teams. Many people on the teams, myself included, thought doing so would be a mistake. I didn’t fully understand my director’s reasons, but I knew he thought the teams’ work overlapped and that merging them would therefore increase our efficiency. Unfortunately, the actual overlap was smaller than he thought. The merger would double everyone’s workload for little gain.
I went to him with the logical case — the mathematical proof, as I saw it — that this merger was a mistake. I laid it out step by step. He asked questions throughout, and, by the end of our conversation, said my argument made sense. Logic triumphed.
Then he went ahead and merged the teams anyway, because he wanted to. Logic failed.
(A year later, when it became clear that merging the teams had been a mistake, the director launched the “2.0” version of the merger, which featured the game-changing innovation of re-separating the merged teams.)
This is the hard truth that math people must learn: in the real world, the ultimate arbiter is not logic, but other people.
This is a tough pill to swallow for two reasons.
One, it means “playing the game”. It means identifying other people’s motives and reasons and refining your actions to account for them, even if you think they’re selfish or, worse, irrational. And, I agree, sometimes this feels like bullshit.
But if you, like me, want to draw an accurate map of reality, you can’t just scribble “here be egomaniacs and idiots” over the lands of other people’s behavior and call it a day. Other people, however selfish or irrational they are, are a huge part of reality. Maybe the biggest part. If you want to succeed, but also if you want to understand why the world is the way it is, you need to understand other people in all their messiness.
Two, focusing on other people means accepting that your logic might not be as irrefutable as it feels. Maybe other people reached a different conclusion not because they’re selfish or irrational, but because real-world logic can’t be irrefutable.
My logic usually feels pretty irrefutable to me. But pull on its threads and you’ll see it’s tangled in countless personally-influenced and hard-to-spot assumptions. There are no axioms everyone agrees on. We’re all forced to base our reasoning on assumptions that make perfect sense to us but look baseless to others. Using different assumptions, two smart people can be maximally logical and reach drastically different conclusions.
Math people also prefer to focus on logic rather than other people’s perceptions because, well, that’s our strength.
Other people’s perceptions are inherently unpredictable. The human experience is too varied, personalities too complex, and opinions too fickle. Nobody can fold them all into a single logical structure.
For a math person, logic and reason feel much sturdier in their consistency. There is something reassuring, even soothing, about the certainty that logic provides. It cannot change its mind or contradict itself. Once proven, always proven. Other people’s perceptions flip-flop on a single contextless tweet, but logic is forever.
Unfortunately, this preference for logic can lead math people to view managing perceptions, feelings, and even relationships as bullshit.
To see this tendency in its worst form, consider the other headline-grabbing tech CEO of our time, FTX’s Sam Bankman-Fried (SBF henceforth). SBF is a prototypical example of math person id: excellent at technical problem-solving and dismissive of anything involving other people. In Going Infinite, SBF constantly deems anything related to management, marketing, and the humanities “bullshit”. A sample quote:
But every time [SBF] flipped through books or articles on management or leadership, he had roughly the same reaction he’d had to English class. One expert said X, the other said the opposite of X. “It was all bullshit,” he said.
Because he deemed these to be bullshit, he was naturally a terrible manager. Don’t take it from me, take it from his at-the-time friend Nishad Singh: “Sam was a very bad manager. He was genuinely a terrible manager.”
While SBF’s distaste for interpersonal matters wasn’t the sole cause of FTX’s implosion — there was, you know, the fraud — it was part of it. Following Sam’s lead, FTX was a poorly-managed, barely-supervised breeding ground for mistakes, hubris, and theft. Moreover, SBF’s dismissal of soft management and leadership skills bled into a dismissal of basic fiduciary responsibility:
“There’s a functional religion around the CFO,” said Sam. “I’ll ask them, ‘Why do I need one?’ Some people cannot articulate a single thing the CFO is supposed to do. They’ll say ‘keep track of the money,’ or ‘make projections.’ I’m like, What the fuck do you think I do all day? You think I don’t know how much money we have?”
Later, he would claim that he did not know how much money they had.
SBF is an extreme case. He dismisses pretty much anything that doesn’t interest him as bullshit. That includes English classes, being a decent manager, and basic organizational and financial responsibilities. What interests him are probabilities and technical problem-solving, so that’s where all the focus goes. The chaos and total lack of oversight that reigned at FTX stemmed from SBF’s dislike for mundane interpersonal and organizational matters.
Math people sometimes make milder, far less harmful versions of the same mistake. They focus on logic, on technical problem-solving, on math, because it makes sense to them and because they’re good at it. In the process, they can treat other people’s thoughts, preferences, and reactions as an afterthought.
In many situations, though, other people are everything. Want to get promoted but never tell anybody what you’re doing? Get ready to hear: “You’re doing great work, but you need to be more visible.” Your job performance is ultimately only as good as your peers, your manager, or some committee thinks it is. Or maybe you’re an introverted person who can’t find a relationship because other people never see your best qualities. Outside of standardized tests and mile times, success is gated by other people.
The OpenAI board, I think, made the traditional math person mistake. Altman was reshaping the board to match his vision, his vision could increase the likelihood of an evil AGI, and therefore he needed to be voted out. They had their logic.
But their logic did not take into account how other people would respond to their actions. You can see, plain as day, in their initial nonexistent justification and subsequent sputtering that they had not considered how Altman, Microsoft, OpenAI employees, or the general public would respond. They didn’t even seem to realize that other people would expect an explanation.
Of course, of course, everyone responded with total bafflement. It just seems stupid to fire someone so prominent without giving a clear reason. The subsequent failure to explain their decision made the board look even worse, which made it even easier to rail against their decision and doubt their competency.
I don’t claim the board would’ve definitely won with better comms. Maybe, as the board claims, the events that informed their decision are subtle and hard to explain. But clearly there were enough such instances to convince them that Altman needed to go. Giving any of them, while acknowledging their complexity, would have been better than giving literally no specific reasons.
It’s also fair to point out that Microsoft and OpenAI employees have their own motives, some at odds with the board’s. Microsoft wants OpenAI to act more like a standard tech company, launching endless new GPTs and perhaps eventually merging with Microsoft itself. The board hates these ideas. Employees, meanwhile, have an interest in stability given upcoming rounds of funding.
Rather than absolving the board, though, these opposing motives make the board’s miscalculation more frustrating. By not correctly accounting for other people’s views, the board made it exceedingly easy for the opposition to get their way. If other people have reasons to stand against you, it’s all the more important to present your case in the most persuasive light.
Suppose you’re an OpenAI employee who thinks we should be cautious with AI development, but also likes stability, wants more funding, and has a vaguely positive opinion of Altman. You hear the board fired Altman, which jeopardizes stability and funding and throws out a guy who seemed okay to you. In return, the board gives you: no reason to support their decision. Where will your allegiances lie? Of course they’ll lie with Altman. And worse, by being cagey about the reasons, the board raised expectations of wrongdoing — fraud? sex scandal? secret human-love-feeling AGI? — in a way that made the actual reasons feel inconsequential. If the board had given clear, cogent reasons to replace Altman, they would’ve had a fighting chance.
Or, if the board felt they couldn’t defend the firing publicly, I would argue that they shouldn’t have pulled the trigger. Not necessarily because it was the wrong choice, but because the predictable backlash might leave them in a worse position than they started. Which it did.
I know, I know, hindsight is 20-20. Yes, it’s a bit unfair to judge their actions knowing the outcome.
But it really feels like the initial fallout was predictable. Of course everyone would want to know why the public face of AI was fired. With no reason provided, of course everyone would turn their attention to the board and wonder about their competency. Of course Altman, feeling blindsided and undermined, would be angry and try to undo the coup. What happened from there depended on harder-to-predict political maneuvering. But the board should have been worried about that scenario, and prepared accordingly. Instead, they came armed with a two-sentence explanation and no backup.
The board’s attempted coup left the public perceiving them, AI-doomers, and EA in general as secretive, out-of-touch, and maybe a bit wackadoo. Sam Altman is still the CEO of OpenAI and, if anything, has strengthened his grip on the company. The board took a 4-2 majority and, against all odds, converted it into their own abdication and Altman’s re-coronation.
This is the cost of ignoring other people in your calculus. Success often hinges on other people, their perceptions, their preferences, their motives. The only logical path is to give them the weight they deserve.
1. You arguably could use a term like “rationalist” here, as many math people become rationalists and their values are largely the same. However, I think of rationalists as active subscribers to a school of thought, while being a math person is a natural disposition that happens to generate a lot of rationalists.