Via a Substack I follow:

In my opinion, at least the large, all-encompassing “statistically stochastic knowledge synthesizer libraries,” the so called “foundational models”, could be operated by the public sector to ensure safety, ethical production and prevention of abuse. Running foundational models by the public would also ensure data transparency and work against the “black boxing” of this tech. I’m not sure or convinced that this approach would be practical or feasible to do, but I think it would provide the most stability and transparency.

Anthropic’s Jack Clark has proposed something similar, and I’ve already integrated it into my AI TOS proposal.

While I think it’s a good idea, I don’t think we should automatically assume that a publicly run option would “ensure safety, ethical production and prevention of abuse.” It’s equally possible that we end up with an unsafe, unethical, and abusive system that is simply “run by the public” – whatever exactly that means.

That said, I still think that future would at least put all of this out into the open, outside the exclusive control of closed, for-profit enterprise. At least it would be an *attempt* at accountability, transparency, and distributed control. If we realized it wasn’t living up to our expectations, we would at least have the theoretical power to modify it… which is more than we have now (apart from open-source models, obvs).