Eusapience is what comes next, a state of cognition beyond human consciousness. This comes up in the story, but in summary: eusapience is the trait that distinguishes our minds from a true hivemind. Just as eusociality is organization and cooperation far beyond simple sociality, eusapience must be cognition and awareness as far beyond mere sapience. But what does this actually mean?
Briefly, near the climax, there comes a concise formulation: “The exscient do not truly know one another. They are self-aware, but they can never be other-aware. They can never be eusapient.”
This bold assertion suffices for prose — our narrator is not meant to be relied upon — but can it hold up philosophically? What does “self-aware” even mean, that this assertion could have any semantic meat to it?
A loose conjecture comes to me easily. Simple creatures are driven by instinct and stimuli. With greater cognition, one might even form beliefs and predictions about the world. But if an animal injures its leg and must avoid putting weight on it, or stores nuts knowing it will get hungry in the future, is that not awareness of its self?
The difference is that this awareness is a straightforward compression of the world. To be self-aware, it seems, should mean that your beliefs about yourself do not merely describe you; they prescribe you. You begin to make choices, directing your behavior based on your understanding of yourself.
This feels intuitively right (if a bit question-begging) but it’s not enough to really get anywhere.
As a teenager, I read a psychology book called Others in Mind: Social Origins of Self-Consciousness. The thesis is right there in the title. It was a profound and insightful book, full of vivid anecdotes and theories that I cannot do justice to, a hazy decade later.
But its most oversimplified idea has stuck with me. Self-awareness is almost a misnomer. I would go as far as to contend that a lone entity, no matter how intelligent, simply wouldn’t develop a sense of self, not for its own sake. What would be the point? No, really, think of it: what is “you”, and why do you care about it? What is “I”?
I think, because you are. Fundamentally, the word “I” makes sense not because “I” exist, but because there is a “you” I am speaking to. I am describing myself to someone who is not myself.
What you are aware of when you are self-aware — what your self even is — is a social construct. You act a certain way so that others’ conception of you is favorable and in line with your own conception of you.
It’s recursive theory of mind. I’m modeling what you’re thinking, and you’re modeling me, so I’m modeling you modeling me.
We can never perfectly understand who another person is. Our language (and our own understanding of both language and ourselves) just isn’t precise enough to convey it all. And even if it could, brains are so big that being able to tell someone everything you are would crowd out who they are.
But imagine you had a very, very high bandwidth of communication between you and your friends. Imagine it was almost always available. Imagine your brains had evolved from the ground up and had grown to maturity in conditions like this.
(Computers can write out the entire contents of their working memory and send it over the wire. This sort of design isn’t impossible; it’s just never occurred in nature.)
At first glance, this doesn’t really change the nature of what we’re working with. Sure, you’d be able to arrive at more accurate understandings of each other if you can innately emit and process more social information per second.
But you can’t fit a box inside a box of the same size. It has to be smaller. Your model of another person has to be smaller than the person.
But does it? Really?
Again I ask: what even are you? You have preferences, memories, and a sort of gestalt essence that emerges from that raw list of facts about what you’ve done and what you like.
You probably know what 4+5 is. You probably know the capital city of some distant country. Your mind has accrued so many definitions and categorizations, and my next question is obvious: is that data you?
Now, it’s possible to have some personal connection to certain facts that are, on the face of it, dry and irrelevant. But not all such facts. There is a whole bank of rote information in your skull that you certainly wouldn’t want to part with, but if you were defining “you”, if you had to choose between it and your episodic memory or sense of identity or desires, it’s obvious which one is of marginal importance.
You aren’t a box. You’re a box that contains stuff, and some of that stuff is more “you” than other stuff. Pare your brain down to your essential core, and what fraction of the full network remains?
Could you fit another self inside? (And I don’t mean that in a plural way.)
Imagine you met someone just like you. Not literally you, but you come from similar backgrounds, you have similar tastes, you agree on so many things. A person like this is a lot easier to model than a stranger from another country, because to a certain extent you can just go “well, what would I do?”
When you create a zip file, part of what happens is that the computer finds stretches of data that are the same across the files and replaces one with a reference to the other. You deduplicate the data and record only the differences.
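As a loose illustration (a toy sketch of the back-reference idea, nothing like real DEFLATE compression), deduplication might look like this:

```python
# Toy deduplication: split data into fixed-size chunks, and replace any
# repeated chunk with a (offset, length) reference to its first occurrence.
def dedupe(data: str, chunk: int = 4):
    seen = {}   # chunk contents -> position of first occurrence
    out = []
    i = 0
    while i < len(data):
        piece = data[i:i + chunk]
        if len(piece) == chunk and piece in seen:
            # Repeated chunk: emit a reference to the earlier copy.
            out.append(("ref", seen[piece], chunk))
        else:
            # New chunk: remember where it lives and emit it literally.
            seen.setdefault(piece, i)
            out.append(("lit", piece))
        i += chunk
    return out

print(dedupe("spamspamspam"))
# [('lit', 'spam'), ('ref', 0, 4), ('ref', 0, 4)]
```

The essay's point is the last two tokens: once "spam" has been stored once, each further copy costs only a small pointer back to it.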
So, how accurate an intersocial model do you think could arise, if you had a group with very high bandwidth communication and fundamentally similar brains?
They would, at least as far as this analysis is concerned, end up with brains of three parts: their self, their models of others, and their shared knowledge. Each model is compressed, referencing other models and common knowledge to approximate its original.
Let me sketch a toy example. Imagine we have hivelings A, B, and C. We’ll handwave and say 50% of their brains are reserved for common knowledge; we’re only concerned with the rest. The math gets easier if we ignore deduplication, and just to make things tricky, imagine 20% of A is their inalienable self. Ditto for B and C.
This means that we can’t fit all of B and all of C inside the 30% of A that’s left for modeling; each hiveling can hold only a ¾-accurate approximation of each of the others. Except, wait a minute! If B and C each have 15% reserved for A, those slices don’t have to overlap! It’s easy to imagine each knows a different side of A. Imagine if, between them, they happen to have perfect coverage of every aspect of A.
Now imagine A is asleep, and B and C are having a conversation. If they simply imagine what A would say, correcting each other where they’re unsure, then between them they can reproduce exactly what A would say.
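The arithmetic of this toy example, using the essay's illustrative percentages (nothing here is measured; it is pure accounting), works out as:

```python
# Toy brain-budget accounting for one hiveling (fractions of total brain).
COMMON = 0.50                    # shared knowledge
SELF = 0.20                      # inalienable self
MODELING = 1.0 - COMMON - SELF   # what's left for modeling others

peers = 2                        # A models B and C
per_peer = MODELING / peers      # budget for each peer's model
accuracy = per_peer / SELF       # fraction of that peer's self captured

print(round(per_peer, 2))   # 0.15
print(round(accuracy, 2))   # 0.75 -- the "three-quarters accurate" model

# B and C each hold a 15% model of A. If the slices don't overlap,
# together they span 0.30 of a brain -- more than A's 0.20 self, so
# between them they can cover every aspect of A.
combined = 2 * per_peer
print(combined >= SELF)     # True
```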
Here we have hit something novel. The model of you that exists among your hivemates’ brains is just as richly realized as the real you! This is something that human brains are incapable of.
(There are flaws in this toy model — every assumption made will be much messier in more realistic scenarios. But do you see the possibility space I’m pointing toward? After all, even humans are not perfectly self-aware in practice.)
If we add more hivelings to the collective, there’s more space to allocate to models, but also more things to model. In the limit, what matters is the ratio of a hiveling’s modeling capacity to her model complexity. If the former matches or exceeds the latter, then the hivemind can contain “hiveselves” that perfectly mirror their sources.
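Under the same toy accounting as before (uniform hivelings, no deduplication; all names and fractions here are illustrative), the limit condition can be sketched as a simple capacity check:

```python
def can_host_hiveselves(n: int, common: float, self_frac: float) -> bool:
    """Toy check: in a hive of n uniform members, can the others
    collectively mirror any one member's self perfectly?"""
    # Per-member budget left after shared knowledge and the self.
    modeling = 1.0 - common - self_frac
    # Each of the other n-1 members splits its budget across n-1 targets,
    # so the capacity devoted to any one self is just `modeling`: the
    # group size cancels out, leaving the capacity-to-complexity ratio.
    devoted_to_each = (n - 1) * (modeling / (n - 1))
    return devoted_to_each >= self_frac

print(can_host_hiveselves(3, common=0.5, self_frac=0.2))   # True: 0.30 covers 0.20
print(can_host_hiveselves(3, common=0.5, self_frac=0.4))   # False: 0.10 can't cover 0.40
```

Note that `n` cancels out entirely, which is the essay's point: in the limit, only the ratio of modeling capacity to model complexity matters.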
For humans, or at least the most popular and sociable ones, being well-known and cared for by enough people probably means that, mathematically, their admirers’ thoughts add up to enough modeling capacity to piece together a complete model of their idol. Technically.
But this is not a hiveself. Obviously, their actual models are rough and redundant. We just don’t expose enough of ourselves for others to truly know who we are. We’re cloaked in shame and secrecy and dissonance.
But it’s also the isolation. While we do transmit info about our models of others (it’s called gossip), we don’t speak richly enough to breathe emergent life into these models.
(Well, often a truly invested mob in a frenzy of speculation can bring to life their own twisted version of who they think someone is — but it’s always a shadow.)
Remember that first definition of self-awareness? To be truly conscious is to have your awareness of self guide how you behave rather than straightforwardly reflect it. Hive-awareness is the step beyond self-awareness, wherein your sense of self is guided by the vast and perfect model of you that has transcended into the collective mind, intermingled with all others.
In a way, the definition I gave at the outset is backwards: misleading, though still equivalent. For the mirror image of being “other-aware” is that the others must be you-aware.
This is what eusapience means: that you are understood in totality.