Recent developments at OpenAI have sparked significant discussion. Most notably, Sam Altman’s departure as CEO, which some speculate is linked to the platform’s performance issues, has been described as ‘not passing the sniff test.’
Users have reported a noticeable decline in response speed and accuracy, impacting the usability of OpenAI’s products, particularly ChatGPT. These changes, accumulating over just a few months, have been readily evident even without rigorous testing.
This decline underscores the need for potential regulation in the AI industry. Customers should arguably be assured of consistent service quality, at least comparable to what they experienced at the time of their initial subscription. There’s a growing concern about “shrinkflation” in AI services, where the quality of service diminishes over time without a corresponding decrease in cost.
Specifically, issues like longer response times, higher error rates, and increasing system unavailability have raised questions about the company’s capacity to manage demand surges. These issues have been exacerbated by new user interface elements many perceive as nothing more than eye candy, existing only to distract users from noticeable performance drops.
Sam Altman’s exit raises questions about whether it is connected to these performance issues. The narrative that his departure is unrelated to these challenges seems implausible; the speculation is that the operational difficulties, and management’s response to them, could be factors.
There’s also skepticism about the official reasons given for Altman’s departure, such as him being “not consistently candid with the board.” Many find this explanation unsatisfactory, given his consistently open public stance and previous communications, the sum of which has led many to consider him an extremely honest and candid person, one not engaged in an endless quest for further billions.
This is supposedly the person pushing for faster development?
The swift hiring of Altman by Microsoft post-departure has further fueled speculation about the internal dynamics at OpenAI.
The above represents the most even-handed reporting of the situation I could gather in a few words. Below the margin break is this author’s take on it.
Going into this, we already knew OpenAI as an organization which operated a bit of a cloak-and-dagger office over there in San Francisco. We already (kind of) knew it as an organization which had some of the smartest of the smart.
But we also knew (if we were smart, that is) that nothing stops a smart person faster than thinking they’re smarter than they are. This is surely a case in point. How am I sure of this?
Because it hasn’t been four days and there’s a hell of a lot of walking back being attempted. Which doesn’t happen unless someone is saying, “oops.”
This is how we can know with near certainty that this was not a “deliberative review process.” We wouldn’t even need the corroborating fact that Microsoft, a 49% OpenAI stakeholder, knew nothing about Altman’s dismissal before it happened. We wouldn’t need to know how angry Nadella was.
The fact that Brockman wasn’t even at the table when it happened smacks more of a “thieves in the night pilfering the castle” narrative.
And all of this together serves nothing so well as to renew calls for transparency from OpenAI’s management, and for transparency in AI development AND management.
Haven’t users been demanding explanations for the decline in service quality? Shouldn’t there be a broader discussion on the need for regulatory frameworks to ensure service consistency in AI platforms?
Is it somehow nonobvious that any users who manage to find themselves preferred by OpenAI VIPs can and will enjoy improved system performance, and that contributing money through financial back channels is the simplest and least traceable way for them to find themselves so preferred?
There are only a few hundred different ways that could happen, of course. But then, this is that extra-special rare “for-profit nonprofit” which… well, I’m not clear: does that mean we should trust its management more, less, or about the same as a classic for-profit?
None of the multimillionaires seated on the board would ever go so far as to abuse their position to line their pockets, of course. They might go so far as to sack their spokesperson in the middle of the night and piss off their 49% shareholder, but that’s nothing the other 51% should concern themselves with, right?
#snicker