Epistemic Status: Rhetoric
i.
Yudkowsky writes that we should think like reality: if you are ever surprised by something which actually™ happens, that surprise is an artifact of your cognition, not of the world around you.
This is, in essence, the philosophy of Bayesianism. No matter the prior distribution[1], your posterior distribution will always approach reality.
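A minimal sketch of that convergence, using conjugate beta-binomial updating on coin flips. The numbers and function names here are illustrative, not from the original; the point is only that two wildly different priors wash out under enough shared evidence:

```python
from fractions import Fraction

def posterior_heads(prior_a, prior_b, flips):
    """Beta(prior_a, prior_b) prior over P(heads), updated on a list of
    0/1 flips. Returns the exact posterior mean."""
    heads = sum(flips)
    tails = len(flips) - heads
    a, b = prior_a + heads, prior_b + tails
    return Fraction(a, a + b)

# A coin that lands heads 600 times out of 1000 flips.
flips = [1] * 600 + [0] * 400

# Two very different priors over the coin's bias...
optimist = posterior_heads(50, 1, flips)   # prior mean ~0.98
pessimist = posterior_heads(1, 50, flips)  # prior mean ~0.02

# ...both end up near the empirical frequency of 0.6.
print(float(optimist))
print(float(pessimist))
```

With more flips the gap between the two posteriors shrinks further; the prior's influence is a constant term being diluted by an ever-growing pile of evidence.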
We can go farther with this. Slowly your model becomes closer and closer to reality, and, at the limit, is indistinguishable from it. It becomes truly a part of you. More provocatively, a true Bayesian will eat reality, and gain the capacity to vomit forth fragments and approximations whenever they deem it useful.
For all its flaws, the human brain is a decent Bayesian, and it is my contention that it can, will, and does eat its reality. As the saying goes, the person can leave the culture, but the culture will never leave the person.
There are some problems with this. For one, you may notice I hedged and said ‘eat its reality’, rather than quoting the title. I did so because perception is overwhelmingly likely not to be homomorphic to reality. In English: the structure the brain imposes on sense-data needs to be evolutionarily fit and viable, but not necessarily accurate. Generally, preserving the exact structure of reality is extraneous and inefficient.
Another reason is that humans are social, cultural creatures first and foremost. Thus, the significance of these spheres is massively overrated in our model of the world.
The last reason is an implicit subset of the previous, but bears stressing. Religions, aesthetics, and cultures generally act as filters: distorting lenses that emphasize and underscore some phenomena, and erase and ignore others. There is a hypertrophy of this psychosis, and it can be observed as cherry-picking arguments, observing merely the letter of the law, and the infamous clever arguer.
Clever arguing grants you great liberty to diverge from reality. As I observed in Another Argument Against Solipsism and Where Two-Valued Logic Fears to Tread, it is always possible to construct ever more complex arguments for the same conclusion. Feed these arguments and only these arguments into a brain and it will eat something other than reality – pure fantasy.
(It can be argued that any inductive regularity in your dataset is a potential bias, in a manner similar to how anything unrandomized in a study can potentially be confounded in some weird way.)
[1] An argument could be made that perverse distributions, where certain facts are artificially given probability 1 or 0, defeat this. Not necessarily, although this rebuttal doesn’t defeat every argument that could be made with that form.
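The mechanics behind this footnote can be sketched directly from Bayes’ rule: a hypothesis assigned probability exactly 0 contributes nothing to the numerator, so no amount of evidence ever moves it, while any nonzero prior eventually converges. This is an illustrative toy, not a claim about what the objection ultimately proves:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numer = prior * p_e_given_h
    denom = numer + (1 - prior) * p_e_given_not_h
    return numer / denom if denom else prior

# Strong evidence: the observation is ~1000x likelier under H than not-H.
p = 0.0       # perverse prior: H is ruled out by fiat
q = 1e-9      # merely extreme, but nonzero, prior
for _ in range(100):
    p = bayes_update(p, 0.999, 0.001)
    q = bayes_update(q, 0.999, 0.001)

print(p)  # stays 0.0: a prior of exactly 0 never updates
print(q)  # converges toward 1: any nonzero prior recovers
```

This is the usual motivation for Cromwell’s rule: never assign probability exactly 0 or 1 to an empirical proposition, or Bayesian updating can no longer eat the relevant piece of reality.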