I am inhaling every published word about the OpenAI debacle. Unlike most Silicon Valley stories, which seem so “inside baseball,” this one definitely isn’t. As far as I was concerned, there were three big stories last weekend: Gaza, Rosalynn Carter, and OpenAI.
I first opened my eyes to OpenAI a couple of years ago while working with a company that was using its ChatGPT precursor, GPT-3, to synthesize and synopsize tens of thousands of customer comments in natural language. Articles in the NY Times and elsewhere had already appeared about the unbelievable capabilities of GPT-3, and as I came to learn more about it, I realized that what I was seeing and experiencing was the tip of the iceberg. What I didn’t realize at the time was that AI would revolutionize everything.
When ChatGPT launched in November 2022, I – like everyone I knew – started playing with the free version. I asked it to write poems about my dogs, to solve impossible problems, and, when DALL·E launched, to generate my superhero avatar. In no time, artificial intelligence was everywhere.
By mid-2023, only a few months later, OpenAI co-founder Sam Altman, understated and almost professorial, headed to Congress to explain AI to our lawmakers and to clarify its potential and its risks. Explaining this stuff to Congress was important precisely to avoid the situation we now find ourselves in with the last tech phenom they never got a handle on – social media. His performance seemed rational and honest. Sam Altman’s focus on containing the fallout of a potentially all-powerful technology BEFORE it took hold seemed like a very smart idea. Making it all the more surprising (unlikely, even) that he’d screw things up in some way.
Furthermore, Altman seemed motivated by something bigger. Wasn’t OpenAI a nonprofit? Wasn’t the idea to make this technology available to all of us? According to a post by Julia Angwin, the noted business and technology writer, he was onto something. “Making cloud computing into a public resource that anyone could use for a modest fee, like public libraries, could spur huge innovation,” she suggested. This model – Altman’s original model – would have kept the company independent, unaligned with a mega for-profit company like Microsoft. OpenAI would not have been the only prominent nonprofit in the AI space, she added, and would not have been forced into Microsoft’s arms. “It has the making of the kind of coup I could support: one where the public took back the power of computation.”
So, the OpenAI board’s accusation that Sam Altman was less than transparent raises new questions. While out there raising billions, was he carried away like Elizabeth Holmes of Theranos? Was the lure of big money (not for himself but for computational power) enough to push him into the arms of partners like Microsoft? Kara Swisher is asking herself the same question – I heard her podcast interview with Satya Nadella today. Why did the board act so hastily, without consulting any of OpenAI’s partners?
We don’t really know yet. In Satya Nadella’s mind, nothing had really changed. On Friday, he said, he was working with OpenAI and Sam Altman, and yesterday, he was working with OpenAI and Sam Altman. And that’s all that counts. Now, with Sam Altman back in place and a new, more supportive board, will anything change? The question this whole imbroglio raises about the stakes – whose AI is it, anyway? – is bigger than any one organization. Maybe the OpenAI board was acting – albeit clumsily – in the interest of humanity.