I still like Anil Dash, especially these observations about the many, many ways generative AI fails, at a deep and fundamental level, to adhere to the most basic design & engineering principles:

Amongst engineers, coders, technical architects, and product designers, one of the most important traits that a system can have is that one can reason about that system in a consistent and predictable way…

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging…

Now we have billions of dollars being invested into technologies where it is impossible to make falsifiable assertions. A system that you cannot debug through a logical, Socratic process is a vulnerability that exploitative tech tycoons will use to do what they always do, undermine the vulnerable.

So, what can we do? A simple thing for technologists, or those who work with them, to do is to make a simple demand: we need systems we can reason about. A system where we can provide the same input multiple times, and the response will change in minor or major ways, for unknown and unknowable reasons, and yet we’re expected to rebuild entire other industries or ecosystems around it, is merely a tool for manipulation.
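The reproducibility point is easy to see in miniature. The toy function below is a stand-in for a generative model (the names and candidate replies are mine, purely illustrative): when it samples without a fixed seed, the same input can yield different outputs on every call, which is exactly what breaks the "reproduce the bug first" workflow; pinning the seed, or running greedily, restores the property that lets you reason about the system.

```python
import random

def sample_reply(prompt: str, temperature: float, seed=None) -> str:
    """Toy stand-in for a generative model: picks among canned replies.

    With temperature > 0 and no fixed seed, the same prompt can return
    a different reply on each call -- the unreproducibility the post
    objects to. Fixing the seed (or sampling greedily) makes the same
    input give the same output, so deviations become debuggable.
    """
    candidates = ["yes", "no", "maybe"]
    if temperature == 0:
        return candidates[0]  # greedy decoding: always the same answer
    rng = random.Random(seed)  # seed=None -> fresh entropy every call
    return rng.choice(candidates)

# Greedy decoding is reproducible: same input, same output.
assert sample_reply("is the build green?", temperature=0) == \
       sample_reply("is the build green?", temperature=0)

# Seeded sampling is also reproducible across calls.
assert sample_reply("is the build green?", 1.0, seed=42) == \
       sample_reply("is the build green?", 1.0, seed=42)
```

Real deployed models add further sources of drift (hardware nondeterminism, silent model updates) that no client-side seed can pin down, which is why "systems we can reason about" is a demand on the vendor, not just a parameter setting.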

The whole post is worth a read.