There are two items in particular in Meta’s announced Acceptable Use Policy (AUP) for their LLM that make me crack up. (I still loathe calling them “Meta” but that’s an issue apart.)

Item 3. a.:

Generating, promoting, or furthering fraud or the creation or promotion of disinformation

Let’s be real: LLMs are basically machines that generate false information. There’s a principle in cybernetics, coined by Stafford Beer: the purpose of a system is what it does. Wikipedia attributes to Beer the quote:

there is “no point in claiming that the purpose of a system is to do what it constantly fails to do.”

And a longer one:

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.

I wholeheartedly agree with this. It’s absurd to take as a starting point for analyzing a system some theoretical idea about what it *might* be capable of, and then wish and hope that people won’t use it for the more obvious functions it clearly serves: in this case, inventing wrong information. I have no idea why this idea is so damn ignored today, because it’s more relevant than ever. I guess the reason is that it would require really being honest with ourselves about the true limitations of the things we build.

Lastly, I wanted to mock, er, I mean comment on item 4 in the AUP section:

Fail to appropriately disclose to end users any known dangers of your AI system

From what I can tell by skimming this document, Meta themselves are explicitly violating this provision of their own AUP: nowhere do they disclose any known dangers of the AI system they are releasing into the wild. So why should end users be held to a standard apart from the developers themselves?

Instead, dressing up this document as an “Acceptable Use Policy” is a backhanded way of passing the buck on what are clearly and obviously known dangers, er, I mean functions of the AI system, and asking “pretty please” that people not use the system to do the things it does naturally.

Screw the honor system. My car needs me.

Meta, like every other company that uses this sleight-of-hand policy trick, is shunting responsibility for the dangers and potential harms of a system they created off to users, rather than taking the time and extra effort to design the unwanted functions out of the system.

All that said, I think it is probably right and even possibly “good” to release these models open source into the wild. Call me crazy, though, for not being willing to welcome Meta (or any company, for that matter) as my AI savior; I haven’t forgotten all the shitty things they’ve built their empire on, nor am I for a minute fooled into thinking they’ve put all that behind them and turned over a new leaf. The purpose of a system is still what it obviously does. And that applies triply to corporations.