Thought this quote by Harry Law, via the Montreal AI Ethics Institute, was worth capturing:
A third way might be to expand the sociotechnical assembly concept to include the people who use the systems––not just those who help build them. Too often, labs demark their position in sterile terms about governance, terms of service, and user policies that obscure the very human perspectives of the millions of people who now use today’s AI technologies daily. Yet while criticism has so far rightly focused on a failure to acknowledge the role of people (in this instance, artists) who have provided the data from which today’s AI draws its predictive power, there is little said by researchers about the person on the other side of the computer screen.
That end users are either actively ignored or treated in openly hostile terms by companies is an epidemic problem across all of technology, but it feels especially intense and important as AI ramps up.