This is interesting:
LLMs model their output on the texts they have been trained on, which is more or less the writing of the entire Internet, including all the biases – the prejudices, racisms, and sexisms – that constitute much of it. Countering this means either censoring the output, as is done (to a degree) with ChatGPT, thus rendering it potentially unusable, or, as is also practiced, filtering the data set for its undesirable components – and thus feeding the model with a better world. This is an eminently political decision. Detoxifying AI necessarily involves formulating a social vision.
There’s a lot to tease out in this article, but the idea described above strikes me as a problem related to dimensionality reduction.
I’ve been having free-ranging discussions with ChatGPT on some related problems around the design of my latent space navigator concept, and recently it offered this simple explanation of dimensionality reduction:
Dimensionality reduction is a technique used to reduce the number of variables or dimensions in a dataset while preserving the relationships and structures within the data.
So there’s something to having a large dataset with many dimensions, and having to reduce it to lower dimensionality for some specific intended use…
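To make that concrete, here is a minimal sketch (in Python, using only NumPy) of one common dimensionality reduction technique, principal component analysis. The dataset, its size, and the choice of keeping three dimensions are all made up for illustration, not drawn from anything above.

```python
import numpy as np

# A minimal sketch of dimensionality reduction via principal component
# analysis (PCA). The data here is purely illustrative.

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 samples, each described by 50 variables.
X = rng.normal(size=(500, 50))

# Center the data so each variable has zero mean.
X_centered = X - X.mean(axis=0)

# Singular value decomposition gives the principal directions of variation.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Keep only the top k directions: 50 dimensions become 3, while
# preserving as much of the variance (structure) in the data as possible.
k = 3
X_reduced = X_centered @ Vt[:k].T

print(X.shape)          # (500, 50)
print(X_reduced.shape)  # (500, 3)
```

The interesting part is the trade-off baked into that last step: deciding which three directions to keep is deciding which structure in the data counts as worth preserving, which is the same kind of judgment the quoted passage calls political.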