
Andrew Downing

I think you're misjudging the significance of language. You tried to patch the sophistication limits of language by invoking fractal structure, so that the simplicity of language could echo down through the layers of self-similarity. But I would suggest backing away from language for a moment and considering that language is a sequential representation of knowledge, one that forms as attention is navigated through a far more sophisticated structure, a structure that does not require self-similarity but may use it as needed.

I more commonly find myself explaining these thoughts from an information technology perspective, but in your case I'm going to try it in reverse. Let's start from a deeply existential perspective and build up from there.

A foundational characteristic of life is that, in some sense, it models its environment. It doesn't matter whether the model is a true representation, or even whether you view the world as out there or take a more analytic idealist perspective. The life form models its environment because doing so provides the potential to ratchet up the ladder of existence and sophistication, and so at least temporarily beat the otherwise inevitable entropic decay. It's evolution's conceptual ratchet.

Even a bacterium does this. In an embodied sense, it knows when there is moisture, it knows when there are other bacteria nearby (chemical signalling), it knows when there are many of its own type of bacteria nearby (different chemical signalling), and it knows when there is food to absorb. It doesn't know these things in an abstract sense as we might; it embodies this knowledge. It's wired into its biochemistry, so it eats, lies dormant or replicates at the right times. It is what it does, but evolution applies its conceptual ratchet, some bacteria are better at all these things, and so there come to be more of them. Their pattern spreads. A toy sketch of what "embodied knowledge" amounts to is below.
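One crude way to picture it in code (the thresholds and signal names here are entirely invented, purely for illustration): the bacterium's "model" is nothing more than a fixed mapping from sensed chemistry to action.

```python
# Toy illustration only: embodied knowledge as hard-wired stimulus -> action rules.
# The signal names and thresholds are made up; no real bacterium works off these numbers.

def bacterium_step(moisture: float, food: float, kin_signal: float) -> str:
    """Return the action an idealized bacterium 'chooses' given its chemical senses."""
    if moisture < 0.2:
        return "lie dormant"      # too dry to do anything else
    if food > 0.5 and kin_signal > 0.5:
        return "replicate"        # resources available and a quorum of its own kind nearby
    if food > 0.5:
        return "eat"
    return "drift"                # nothing useful sensed; keep moving

print(bacterium_step(moisture=0.8, food=0.7, kin_signal=0.6))  # -> replicate
```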

As we progress up degrees of sophistication of life, this essential pattern remains. We model our world, so that we may act in ways that allow us to thrive, and reproduce against the odds. Virtually all of our morality can be expressed in terms that are grounded in this, in one way or another.

So, to know something is to model something. Such is our nature as life, but as higher-order life with complex nervous systems and brains, we're not relying on evolution to refine knowledge over generations. Instead, we can create models as abstractions and test them against our environment, imagined or real; it doesn't matter, so long as it's consistent.

This arrangement is built into our nervous systems. We don't see with eyes like cameras streaming information back to a brain that views it all. It's more like we maintain an internal model of our environment that is contrasted against the inputs from our optical systems. Signals from the retina are compared against modelled predictions of what we believe we should be seeing. Disparities between the two are all that is passed along the optic nerve (it has insufficient bandwidth to do more), and the brain adjusts the model. Where such disparities are significant, they draw our attention, and then we're really on to something. Attention is the focus for adjusting our models of the environment.
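A minimal numerical sketch of that prediction-error loop, just to show the shape of the idea (the numbers and learning rate are arbitrary, and this is of course not a model of the actual visual system): only the disparity between prediction and input is fed forward, and the model is nudged by it until the disparities stop being worth attending to.

```python
import numpy as np

# Sketch of a predictive loop: the internal model only ever "sees" the disparity
# (prediction error) between what it expected and what actually arrived.

rng = np.random.default_rng(0)
world = np.array([0.9, 0.1, 0.4])   # the "true" scene (unknown to the model)
model = np.zeros(3)                  # internal prediction of the scene
learning_rate = 0.3

for step in range(50):
    sensed = world + rng.normal(0, 0.02, size=3)  # noisy sensory input
    error = sensed - model                        # only disparities are passed on
    model += learning_rate * error                # the model is adjusted toward the input
    if np.abs(error).max() < 0.05:                # small disparities stop drawing attention
        break

print(step, model.round(2))
```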

Modelling our environment is tricky, though. Hierarchies won't do; they're much too simple, even when replicated fractally. There's a problem in cognitive research, known as the "hard problem of knowing", which is essentially that the range of possible relationships between everything in our environment is so nearly infinite that there is no possible way to represent the full extent of it, and so there must be a filter. As it happens, our evolutionary imperative is that filter for us, even when extended into the wildly abstract, because even maths and philosophy can turn out to assist with this mission of all life.

Even with such a filter, representations of our environment are still wildly complex and disorderly. I know of two broad representations that can cover this - they're like the inverse of each other.

One is the connectionist model, which is broadly what our brains implement: roughly 100 billion neurons connected by on the order of 100 trillion synapses. This is where I connect Category Theory. Its foundational result, the Yoneda Lemma, essentially says that "an object is completely determined by its relationships to other objects". From everything else I've read from you, you should really like this concept: it's saying that everything is interconnected, and that interconnectedness is what defines it all, in a fully self-referential way.
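To convey the flavour of that claim (this is emphatically not the Yoneda Lemma itself, just its intuition, with invented objects and relations): describe each object purely by how it relates to everything else, and the object becomes nothing over and above that relation profile.

```python
# Intuition sketch, not a formal statement: an object "known" only through its
# relationships to everything else. Objects and relations are invented examples.

relations = {
    ("dog", "mammal"): "is_a",
    ("cat", "mammal"): "is_a",
    ("dog", "bone"):   "chews",
    ("cat", "yarn"):   "chases",
}

def relation_profile(obj, universe, relations):
    """Everything we can say about `obj` is how it relates to the other objects."""
    return {other: relations.get((obj, other)) for other in universe if other != obj}

universe = {"dog", "cat", "mammal", "bone", "yarn"}
print(relation_profile("dog", universe, relations))
# Two objects with identical relation profiles would be indistinguishable in this picture.
```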

The other is like the negative of this. Imagine a crazily high-dimensional space with enough dimensions to represent every different way that things could possibly be related (not in some mystical weird way; just think of dimensions as independent variables). In such a representation, any given concept can be represented as a position in this space. The nearer two concepts are in this space, the more similar they are. Two concepts might be related by a relative offset in such a space, and then two other concepts might have a similar vector between them, and that would make those two relationships analogous. From there, you can go crazy and model every different thought structure. In transformer architectures, these high-dimensional positions are vectors known as embeddings. By themselves they are meaningless, but their positions relative to billions of other such embeddings in a transformer model are the basis of knowing.
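A tiny hand-made numerical example of that "relative offset" idea (the vectors below are invented, not taken from any real model): if the offset between two concept embeddings is nearly parallel to the offset between two others, the pairs stand in an analogous relation.

```python
import numpy as np

# Hand-invented 4-d "embeddings", purely to illustrate relative offsets as relations.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.8, 0.9, 0.2]),
    "man":   np.array([0.2, 0.1, 0.1, 0.7]),
    "woman": np.array([0.2, 0.1, 0.9, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

offset_royal = emb["queen"] - emb["king"]   # the relation "king -> queen"
offset_human = emb["woman"] - emb["man"]    # the relation "man -> woman"

# Near-parallel offsets mean the two relationships are analogous.
print(cosine(offset_royal, offset_human))   # close to 1.0 in this toy example
```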

The final trick is the application of attention. Typically, if we're engaging with the world, our attention is drawn to disparities between what our models predict and what the world shows us. These are learning opportunities.

In the simplest, most momentary of cases, say we're running to catch a ball, our attention is just constantly adjusting the model to align with visual inputs and bodily positioning until we catch the ball, and then it can detach. With practice, we get better at it, which means the models are more often correct.

But then imagine a much higher-level disparity that may not be resolved in the instant. Say you've been playing happy families and you come home early one day to find your wife is sleeping with the neighbour. This would indeed be a large disparity between what your model of your world predicted and the new reality imposed by your senses, and what's more, you would not be able to resolve it in the moment. It could take months or years ... so what mechanism applies here? What you'd need would be a subsystem of the human body that could, on the one hand, be attached to a representation of significant disparities between reality and expectation, and that would also provide an ongoing biochemical motivational force driving you forward to eventual resolution or closure of such unfortunate disparities. We have a great many variations on this theme. We call them emotions.

Along the way, of course, you're going to need to communicate your understanding and perspective on all this to your wife, neighbour, counsellor, whomever. Inconveniently, the relationships between all of the things you understand about your situation and everything you're thinking about it are enormously complex, and you have no convenient way to just transplant this representation in its entirety from your mind to someone else's, so what are you going to do?

Enter, stage left: language via attention. Focus your attention on one aspect of the problem. Label it with a noun. Navigate focussed attention through this high-dimensional space to related concepts. Apply nouns, adjectives, verbs, etc., and what you get is a sequential representation of individual aspects of your knowledge, expressed as language.
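A crude sketch of that "navigate attention and emit labels" picture (the concepts and vectors are invented for illustration): start at one concept, repeatedly move attention to the nearest concept not yet put into words, and the hops fall out as a word sequence.

```python
import numpy as np

# Crude sketch: "speaking" as a sequential walk of attention through a concept space.
# The concept names and their positions are invented for illustration only.

concepts = {
    "betrayal":  np.array([0.9, 0.1, 0.8]),
    "anger":     np.array([0.8, 0.2, 0.9]),
    "home":      np.array([0.2, 0.9, 0.3]),
    "trust":     np.array([0.3, 0.8, 0.2]),
    "neighbour": np.array([0.7, 0.3, 0.6]),
}

def nearest(current, remaining):
    """Move attention to the closest concept not yet put into words."""
    return min(remaining, key=lambda name: np.linalg.norm(concepts[name] - concepts[current]))

def speak(start, length=4):
    """Render a piece of the tangled structure as a flat word sequence."""
    said, current = [start], start
    remaining = set(concepts) - {start}
    while remaining and len(said) < length:
        current = nearest(current, remaining)
        said.append(current)
        remaining.discard(current)
    return " ... ".join(said)

print(speak("betrayal"))  # one possible sequential rendering of the structure
```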

Any would-be listener is doing this in reverse. They hear your words, and if they're paying attention, they use them to focus their own attention on the nearest concepts in their own mind, and they try to build the knowledge relationships you are describing. Along the way, they are comparing these new representations against their existing representations, and of course their attention is drawn to disparities, and so they interject and ask for clarification, or they just swear at you, depending ...
