I don’t have any particular special insight or visibility into what happened over the weekend with OpenAI, but I wanted to comment on this Futurism piece, because I think their overall suggestion is a good one, whether or not it’s the true root cause…
You’ll notice a risky throughline between those side projects as well: they’d both be swimming in the same financial waters as OpenAI, with the chipmaker potentially selling its hardware and the Jony Ive one likely using its API.
I won’t quote the whole thing, but they link out to the OpenAI letter from the board. Some interesting bits:
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.
While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
I’ve written about their Charter before, because my observation of the for-profit company’s behavior has been that they seem to be moving away from core principles of it, at least in my eyes.
For one, the organization’s Charter mission commits it to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.”
From the Futurism piece:
Per Bloomberg, Altman’s side hustle, dubbed “Tigris,” appears quite ambitious. Nvidia has a chokehold on the semiconductor marketplace, as its popular GPU chips remain the favorite among AI startups for their computing power; Altman, according to Bloomberg, wants to take some of that market share away from Nvidia by introducing his own lower-cost Tensor Processing Units, or TPUs, to the industry. This would not only stand to displace the market incumbent but would also give OpenAI more control over its production, likely making its products cheaper in the long run.
I guess on the one hand there is a positive argument to be made for bringing new entrants into the marketplace, to make it more competitive. But I have no illusions: it would likely just become an oligopoly instead of a near-monopoly.
And to my eyes, this whole thing, while it may be market efficient for the org to control its hardware production, does seem very strongly to “unduly concentrate power.” There’s obviously a weird mish-mash at play here between the non-profit’s mission and the for-profit’s interests, making me again skeptical we’ll get a truly “safe” AI out of a profit-driven underlying development model.
I happen to side strongly in this case with the “benefits all humanity” camp regarding AI, and in favor of its open development. I am in general not into what I’ve seen regarding Effective Altruism. I am a here-and-now-ist and a practicing Practicalian. While I am a sci-fi writer, I find the EA fictions to be the wrong ones to focus on in the development of AI. I think we should be turning away from the exclusively STEM-based insider club of AI development that seems to be emerging and find concrete, specific, practical ways to integrate artists, writers, and all kinds of people. Citizens’ assemblies. The League of Earth Libraries putting out their own free open source queryable AI based on all collective human knowledge.
I think if we’re going to deal in fictions about the futures that we want, let’s first acknowledge they are fictions, in order to be able to better understand our own and one another’s particular attachments or commitments to any of them.
It seems unpopular as a hot take on this situation (a tepid take?), but if what the public board statement says is true on its face, and the cause of action really was misalignment with the mission of benefiting humanity without unduly concentrating power… well, let’s just say I would be cool with that. If that’s the case, they could do a substantially better job communicating it, and then institute their own safeguards and perhaps stronger incentive mechanisms to correct for future occurrences of the same.
Here’s hoping the plan, as it emerges, is to put the truly “open” back in OpenAI. I’m not sure a $20/mo subscription is what gets us there. But then, my ideals often seem out of step with conditions on the ground. Which I guess is what makes them ideals, and not “reals” – because they guide you from the real now to the could-be-real soon or one day, as you work toward their actualization. I don’t think it’s a crazy or stupid goal for this technology to genuinely benefit all humanity.
And for the love of god – if nothing else – bring back 4-up image results for DALL-E 3!