Not to pick on OpenAI, but I think their position requires that they be able to withstand legitimate criticism. I keep coming back to their charter, which gives shape to what the organization is supposed to aim for. This part just echoes in my brain:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Unduly concentrating power.

To quote a ZDNet article on the release of GPT-4:

In the GPT-4 technical report published Tuesday, alongside the blog post by OpenAI, the firm states that it is refraining from offering technical details because of competitive and safety considerations.

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4,” it writes, “this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

Let’s put aside the safety question for a minute, because I personally don’t believe that any closed-source, single-owner AI system can ever truly be called “safe” in a meaningful way.

Instead, let’s just focus on that mention of “competitive landscape.”

Closing off not just your source code, but even technical explanations of your model, because of competition is literally an example of unduly concentrating power.

If their goal is moving toward AGI that benefits all of humanity, they’re already on the wrong path. Can they get back on the right one, one in line with their own charter?