Epistemic Status: Likely, unfinished
zero →
There is this famous memetic hazard known as ‘dust theory’, infecting minds since the publication of Greg Egan’s Permutation City. It’s what happens when you take quantum suicide into the realm of Tegmark’s Level IV multiverse.
I’d like to deliver to you a line of reasoning which converges to dust theory from a different direction.
Here’s some necessary background. As those posts point out, a problem with conventional agent formalisms is that they are Cartesian, i.e. they assume the agent is crisply separated from its environment, à la Descartes’ famous mind/body dualism.
← one
While the logical scaffolding which produced this idea has long since evaporated, I once built a nice little idea that does some work towards solving the problem.
The current gold standard for learning and reasoning about environments is Solomonoff induction, which essentially works by placing a decreasing prior over the space of all Turing machines in a fixed language, ordered by the length of their specification, and then performing a Bayesian update on every observation you make from then on.
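To make the length-weighted prior and the update concrete, here is a toy sketch. All of it is my own miniature assumption: “programs” are just binary strings up to a small length, and a program “predicts” the stream made of itself repeated forever. Real Solomonoff induction runs over all Turing machines and is uncomputable.

```python
from fractions import Fraction

def programs(max_len):
    """Enumerate all binary strings of length 1..max_len (our toy 'programs')."""
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            yield format(i, f"0{n}b")

def prior(p):
    """Length penalty: a program of length L gets prior mass 2^-L."""
    return Fraction(1, 2 ** len(p))

def predicts(p, obs):
    """Toy semantics: a program 'outputs' itself repeated; it is consistent
    with obs if obs is a prefix of that stream."""
    stream = (p * (len(obs) // len(p) + 1))[: len(obs)]
    return stream == obs

def posterior(obs, max_len=4):
    """Bayesian update: discard inconsistent programs, renormalize the rest."""
    weights = {p: prior(p) for p in programs(max_len) if predicts(p, obs)}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

post = posterior("0101")
# The short program "01" ends up with most of the posterior mass,
# beating the longer consistent program "0101".
```

The point the toy preserves: among hypotheses that fit the data, the shortest dominates, which is the Occam-style behavior the prior is designed to produce.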
The way this is typically (abstractly) implemented is AIXI, an uncomputable algorithm which simply tacks on an input stream, an output stream, and a utility function over every possible input. The algorithm then performs decision-theoretic (expectimax-style) reasoning over the aforementioned Turing machines, arriving at a provably optimal strategy.
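The expectimax step can be caricatured with a finite model class and a finite horizon. Everything here is a simplifying assumption of mine, not Hutter’s formalism: environments are deterministic functions from an action to a reward plus a successor environment, and after one step I recurse within a single environment (which is only the correct conditional posterior if each environment’s first observation uniquely identifies it).

```python
def expectimax(beliefs, horizon, actions):
    """Value of the best action, averaging rewards over a weighted
    class of environments. beliefs: list of (weight, env) pairs,
    where env(action) -> (reward, next_env)."""
    if horizon == 0:
        return 0.0

    def q(a):
        # Mixture value of action a: weight each environment's reward
        # plus the value of continuing inside that environment.
        return sum(
            w * (r + expectimax([(1.0, nxt)], horizon - 1, actions))
            for w, env in beliefs
            for r, nxt in [env(a)]
        )

    return max(q(a) for a in actions)

def make_env(rewards):
    """Deterministic toy environment: a fixed reward per action, repeating forever."""
    def env(action):
        return rewards[action], env
    return env

beliefs = [
    (0.5, make_env({"a": 1.0, "b": 0.0})),
    (0.5, make_env({"a": 0.0, "b": 2.0})),
]
# Horizon 1: Q(a) = 0.5, Q(b) = 1.0, so the mixture-optimal value is 1.0.
```

The real AIXI does this over *every* computable environment with the Solomonoff prior as weights, which is exactly why it is uncomputable.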
From what I gathered from the LessWrong posts exploring phenomenological bridging, the biggest problem is that AIXI is uncomputable while its hypothesis class contains only computable models of reality. No model it simulates can therefore contain AIXI itself, so it can’t, by definition, model itself as possibly dying or self-modifying.
One workaround here is allowing states of mind to be observations, which is the way it is for humans.
The components of AIXI are: its utility function, its probability distribution over environments, and its input/output streams.
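The workaround can be sketched as a data structure. This is purely illustrative and every name in it is my own: the three components are plain fields, and “states of mind as observations” becomes a method that appends a snapshot of the agent’s own internals to its input stream.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AgentState:
    """Toy record of AIXI's three components (names are hypothetical)."""
    utility: Callable[[str], float]          # utility function over possible inputs
    model_weights: Dict[str, float]          # probability distribution over models
    input_stream: List[str] = field(default_factory=list)
    output_stream: List[str] = field(default_factory=list)

    def observe_self(self):
        """Treat a state of mind as an observation: serialize the agent's
        internals and feed the snapshot back in as input."""
        snapshot = (
            f"weights={sorted(self.model_weights.items())};"
            f"out={self.output_stream}"
        )
        self.input_stream.append(snapshot)

state = AgentState(utility=len, model_weights={"m1": 0.7, "m2": 0.3})
state.observe_self()
# state.input_stream now contains one self-snapshot.
```

Nothing here resolves the Cartesian boundary by itself; it only shows where in the formalism the self-observations would enter.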
So at any timestep, the states of each of those three pieces of AIXI are observations. ‘Death’ would be modeled as