I’ve read and watched a ton on generative AI and what it’s all about, and I’d place Kevin Kelly’s article, “Dreams are the default for intelligence,” at or near the top of the pile. It’s the most exciting and adventurous hot take on these topics I’ve seen.

Some choice excerpts:

I have a proto-theory: That our brains tend to produce dreams at all times, and that during waking hours, our brains tame the dream machine into perception and truthiness. At night, we let it run free to keep the brain areas occupied. The foundational mode of the intelligence is therefore dreaming.

And:

At any length, the AI stuff feels like dreams.

My conjecture is that they feel like dreams because our heads are using the same methods, the same algorithms, so to speak. Our minds, of course, are using wet neurons, in much greater numbers and connections than a GPU cluster, but algorithmically, they will be doing similar things.

It is possible that this whole apparatus of generation is actually required for perception itself. The “prompt” in ordinary sight may be the stream of data bits from the optic nerve in the eye balls, which go on to generate the “vision” of what we see. The same algorithms which generate the hallucinations for AI art — and for human dreams — may also be the heavy-duty mechanisms that we use to perceive (vs just “see”.) If that were so, then we’d need additional mechanisms to tamp down and tame the innate tendency for our visual system to hallucinate. That mechanism might be the constant source of data from our senses, which keeps correcting the dream engine, like a steady stream of prompts.

And:

During waking moments with the full river of data from all our senses, plus the oversight of our conscious attention, the tendency of the generative engine to hallucinate is kept in check. But during the night, when the prompting from the senses diminishes, the dreams take over with a different kind of prompt, which may simply be the points where our subconscious is paying attention.

Last one:

I suggest that the default state of this engine is to dream, and that it is managed during the day to not hallucinate. To dream, then, is not a higher order function, but the most primeval one, which is only refined by more sophisticated functions that align it with reality.

He goes into some stuff about eyeballs that I think is safe to exclude from the conversation, but the whole piece is fascinating regardless.

One possibility he does not seem to entertain here, though: that dreaming itself is aligned with reality, in that reality fundamentally has dreamlike qualities.

There’s also a worthwhile Aldous Huxley reference here, to the notion of the brain as a reducing valve for ‘Mind at Large.’

Each person is at each moment capable of remembering all that has ever happened to him and of perceiving everything that is happening everywhere in the universe. The function of the brain and nervous system is to protect us from being overwhelmed and confused by this mass of largely useless and irrelevant knowledge, by shutting out most of what we should otherwise perceive or remember at any moment, and leaving only that very small and special selection which is likely to be practically useful. According to such a theory, each one of us is potentially Mind at Large.

The Doors of Perception (Huxley)

I don’t see any reason it can’t be both: a filtering mechanism from the universal consciousness down to this specific one I’m inhabiting, and a generative engine that creates experiential states from sensory, perceptual, emotional, cognitive, and other inputs.
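
To make that “both” a little more concrete for myself, here’s a toy sketch in Python. This is entirely my own illustration, not anything Kelly or Huxley propose, and every name in it (the step function, the sensory_gain and drift parameters) is invented for the example: the engine always generates a next state on its own, sensory evidence acts as a correcting filter when present, and with the gain turned down the same engine free-runs, dream-like.

import numpy as np

rng = np.random.default_rng(0)

def step(belief, sense, sensory_gain, drift=0.05):
    """One update of a toy 'dream engine' (illustrative only).

    belief       : the engine's current internal state (its generated scene)
    sense        : incoming sensory evidence, or None when 'asleep'
    sensory_gain : how strongly evidence corrects the generated state
                   (close to 1.0 = waking perception, 0.0 = free-running dream)
    """
    # Generative side: the engine always proposes the next state on its own.
    proposal = belief + drift * rng.standard_normal(belief.shape)

    if sense is None or sensory_gain == 0.0:
        # No correction: the proposal runs free -- the dream case.
        return proposal

    # Filtering side: evidence pulls the proposal back toward reality,
    # like the senses acting as a steady stream of prompts.
    return proposal + sensory_gain * (sense - proposal)

# Waking: a (noisy) constant 'world' keeps the belief tethered near it.
world = np.array([1.0, -2.0, 0.5])
belief = np.zeros(3)
for _ in range(200):
    belief = step(belief, world + 0.1 * rng.standard_normal(3), sensory_gain=0.7)
print("awake, belief stays near world:", np.round(belief, 2))

# Asleep: same engine, no sensory prompts -- the belief wanders on its own.
for _ in range(200):
    belief = step(belief, None, sensory_gain=0.0)
print("asleep, belief drifts freely:  ", np.round(belief, 2))

Run it and the waking belief settles near the “world” values while the sleeping belief drifts wherever the generator takes it, which is roughly the picture I have in my head of one engine playing both roles.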

I have more to say on this, but wanted to first set this post as an anchor on its own to refer back to.