I tend to like Jack Clark of Anthropic's thinking (and his sci-fi) – which is why I still hold out some hope for Claude – in particular this bit about the need for a "public option" for AI:
Governments have a very limited period of time in which they can develop their own regulatory capacity to give them leverage with regard to the private sector developers and deployers of AI systems. Right now, I think the default state of affairs is that private sector companies are going to ‘wirehead’ governments and do mass regulatory capture while building ever more advanced systems, therefore carrying out a quiet and dull transfer of power over the governance of potentially the world’s most important class of technology.
I don’t think we want this to happen. I, perhaps naively, believe in the possibility of a ‘public option’ for superintelligence. By public option I don’t mean state-run-AI (as this has its own drawbacks), but I do mean a version of AI deployment which involves more input and leverage from the public, academia, and governments, and I mean something different to today where most decisions about AI are being made via a narrow set of actors (companies) in isolation of broader considerations and equities. It seems worth trying to do this because I suspect a public option has less longterm societal risks than what we’re doing today and may lead to better social and economic outcomes for everyone.
“You have a right to a super-intelligence, if you cannot afford one, a public super-intelligence will be appointed to represent you…”
I'm imagining this as something like old-school PBS crossed with a library, but… AI? Some fun things to think through here.