How to (not) Kill a King… of AI
And General Lessons Regarding Political Tact
Sam Altman has been fired from OpenAI! Oh wait, I’m a bit late to that. In fact, it’s already been covered countless times.
Sam Altman to join Microsoft! Oh, I guess I’m a little late for that too.
Source: Appropriately generated with OpenAI’s DALL·E 3 model using the prompt ‘Sam Altman as the King of AI in photo form fending off an assassination’
I’ve always believed that real life can be way more entertaining than TV. Just think about it: the wonder-kid Stanford drop-out CEO of the fastest-growing and arguably most important startup gets fired out of nowhere. Then he makes a prodigal return within a week. You literally can’t make this stuff up, even though fiction can come close. The episode has even been described as “Sam Altman returning to OpenAI after a day is like Steve Jobs returning to Apple after 12 years, but for the TikTok generation”.
However, there are a few key differences between the circumstances of Altman’s firing and Jobs’. In fact, “key” is probably the best word for it, as the reasons why his firing “failed” so quickly and spectacularly can be summarized in this great video.
Now, I’m not here to figure out or judge the reasoning behind the board’s decision. I believe thoughtful points can be made for either side. A leading theory is that the board felt Sam was moving too fast with the company, skewing away from the organization’s original non-profit purpose, and accelerating the development of a technology that did not yet have the requisite safeguards in place. A more far-fetched theory is that artificial general intelligence (AGI) was being achieved and the board felt that Altman was not the right person to shepherd it. I won’t discuss in detail how much I agree with the board’s actions. I will instead focus on the efficacy of said actions in achieving their goal. In purely that regard, oh boy did they fumble the bag.
The video in question, “The Rules for Rulers”, documents the way in which people of influence accumulate and, most importantly, keep their power. The technique is depressingly simple. First, find the people and groups most crucial to governing your kingdom and collecting treasure. The video refers to them as your “keys” to power, while traditional business lingo would probably just call them stakeholders. Second, reward them handsomely. This serves the dual purpose of incentivizing keys to keep your kingdom running well and dissuading them from backing someone else.
If you do fall out of favor with a critical mass of these key supporters, then it’s already too late. Any power you have is now symbolic, and even that will likely not last long. Now the tricky part is figuring out who you actually need to help you govern. For a country rich in natural resources, it is probably just a few expert engineers, managers, and military staff. After all, pulling something from the ground isn’t that “hard” once you’ve learned how to do it once. If you’re a political candidate, it might be a little trickier. A country is fractured, so you likely have to get the support of a few different, sometimes mutually exclusive groups. It could be old / young people, college-educated / non-college-educated, minorities / dominant races, upper / lower class, farmers / city-folk, the hedge fund billionaire funding a campaign / the other hedge fund billionaire funding the other campaign. For a young, knowledge-based tech company like, say, OpenAI, the list is actually pretty short: employees, investors, and customers.
In most businesses, the customer is king. This is true for OpenAI as well. However, the company’s customer base is diverse and large enough that customers don’t really have any coordinated influence over the company. After all, it’s pretty hard to get some high school student cheating on their essay to coordinate with a software developer using it to help write code, much less with Khan Academy, who are using it to build a virtual tutor. Therefore, the two keys that really matter here are the employees and investors. The board not only failed to sway either of these parties, but even deliberately elected not to communicate with them.
It became very quickly obvious how much power the employees had, and how loyal they were to Sam, as demonstrated by a wall of repeated tweets and heart emojis. Employee power is obvious given that AI development is a highly technical and new field; building up the requisite skillset can take years of dedicated education and experience. But why were they so loyal to Sam, and particularly not to the board? There could be a variety of reasons, but a likely candidate is that Sam had built the company, whose employees were shareholders, to an eye-watering and rapidly growing $80 billion valuation, while the board’s message was essentially that the company was moving too fast and needed to be brought back to its nonprofit purpose. Again, as mentioned, there are merits to these concerns. However, when the choice comes down to their own financial well-being or some philosophical platitudes, most people will choose – well, it shouldn’t be too hard to figure out.
A similar logic can be used to evaluate the decision making of the investors, Microsoft being the largest. Honestly, I don’t think I have to carefully explain why venture capitalists and large multinational corporations are generally in favor of seeing a big return on their capital and opposed to major management upheavals conducted without their knowledge.
I alluded earlier to how this situation differed from when Steve Jobs left Apple. The difference is precisely in the support the current CEO had from his key supporters. Sure, the lore for Jobs is that there was a heated argument between him and his board about the direction of the company, and in this fiery passion, Jobs was let go. Although technically true, that argument was simply the match that ignited a barrel of already-existing issues at Apple. Similar issues were not present at OpenAI. Steve Jobs was an asshole; Sam Altman was beloved by his employees. Apple, at the time, was not doing well financially, failing to live up to sales expectations for the Lisa and Macintosh. OpenAI had literally just held its first developer conference highlighting the company’s successes and was even turning customers away due to too much demand for its product. I mean, come on, could you really have timed a firing any worse?
Of course, there was still a path that could have potentially forced Altman out. Over a decade ago, another technology executive, Mark Hurd, “the man credited with saving Hewlett-Packard”, was “successfully” forced out. However, this was “after a sexual harassment inquiry unearthed false expenses claims designed to cover up a ‘close personal relationship’ with a former contractor.” I am certainly not advocating for some type of character assassination plot, nor am I dwelling on the circumstances of that harassment. Additionally, sexual harassment of any kind should be taken and dealt with seriously. However, for better or worse, something of this magnitude or greater would have been required. Not this “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” weak sauce. To take down someone who is literally at the peak of his power, you’re going to need a lot more gravy.
Generously, we can assume that the board considered these ideas but concluded that there would be no way to convince their employees or their investors. Perhaps the move was just a “Hail Mary” on their part to do something they believed in. I can be sympathetic to this belief and still criticize its naiveté. If this indeed were the case, perhaps the board should have realized that the timing was wrong, stayed patient, and re-evaluated their strategy moving forward.
Harshly, I would characterize the board as way out of their depth. This lack of competence does not stem from their beliefs about the dangers and potential power of AI, but from their lack of tact in dealing with the incentives and power structures of their fellow humans. Perhaps the board thought that since they had the legal and, in their minds, ethical authority to conduct the firing, there would be no questions asked. Obviously, they were swiftly shown how gravely mistaken that belief was. Like a great character on TV once said, “You really think a crown gives you power?”
The irony is that if the goal was to rein in Sam Altman’s power and bring OpenAI back to its non-profit roots, this fiasco achieved the opposite. OpenAI now has a brand new board, one that includes a seat for Microsoft and is definitely more aligned with what Sam wants to do. Meanwhile, the old members have had to recognize their tactical mistakes way too late, embarrassingly walk back their choices, and become the butt of memes.
It is also worth considering how and why they got themselves here in the first place. Of course, seeing the results, it is easy to be critical in hindsight. However, as I discussed, even without hindsight there were clear indications that this power move would likely not work, especially at the time it was attempted. Was this a case of too much experience in coding and ivory-tower waxing about the theoretical concerns of machines, and not enough of the Machiavellian political acumen actually required to manage powerful stakeholders?
Silicon Valley has a reputation for producing savants with the “ability to change the world”. A criticism is that these same savants develop enormous egos and complexes, believing that their thoughts and actions are always right. In many instances they are indeed right; after all, to succeed in the Valley, you need high conviction. However, sometimes, just sometimes, it would benefit them and everyone else if they took a moment of introspection and considered how their actions would be received by others, especially if those others are the employees and investors they claim to care so much about. Or, if it really is that important to convince people of a certain view or to take a certain action, maybe it is best to first learn some basic strategy and persuasion techniques instead of just assuming “if you espouse it, they will come”.