“Chronology is a harsh master,” opens one of Scott’s more famous posts. “You read three seemingly unrelated things at the same time and they seem obviously connected …”
From Slate Star Codex:
I know Douglas Hofstadter is very interested in building artificial intelligences that understand metaphors, thinking they are the key to human cognition. And a lot of people seem to think that even if we create some sort of very smart AI type thing, it will be less powerful than generally believed because we won’t have solved the problem of creativity.
I suspect creativity will be a relatively tractable problem. My guess is that humans, in a sense, have negative creativity. Their brains are specifically designed to make it hard to get out of a rut, because ruts represent well-worn cognitive pathways and things outside of them are probably useless and crazy.
This picture is mildly interesting because instead of immediately collapsing into one rut, your brain hangs suspended between a rabbit rut and a duck rut. We nod and call this Ambiguity. But unless you Sit Down And Think About It For Five Minutes, you’re not going to notice that it could be a hairdryer that has been split open, let alone an erotic BDSM picture of a clothespin attached to a female breast. Maybe if you caught it right out of the corner of your eye, without time to think, or if it was disguised by visual noise, you would notice one of the latter two immediately – at the cost of not being able to see the duck or rabbit.
Researchers are probably right when they expect the first AIs to have zero creativity, but zero creativity might be so much better than us negative-creativity humans that they won’t need the crutches we use, like metaphors and dreams. If they have to, maybe they can just actually generate random noise in hypothesis-space and see where it takes them.
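Scott’s “generate random noise in hypothesis-space” idea can be caricatured as plain random search: no ruts, no priors, just sample hypotheses at random and keep whatever scores best. A minimal sketch, where the hypothesis-space and objective are invented purely for illustration:

```python
import random

def random_search(score, sample, n_trials=1000, seed=0):
    """Pure 'zero creativity' search: no well-worn pathways, only noise.

    score  -- objective to maximize over hypotheses
    sample -- draws one random hypothesis from hypothesis-space
    """
    rng = random.Random(seed)
    best = sample(rng)
    for _ in range(n_trials - 1):
        h = sample(rng)
        if score(h) > score(best):
            best = h
    return best

# Toy hypothesis-space: real numbers in [-10, 10]; the (invented)
# objective peaks at 3. Random search finds the neighborhood of the
# peak without ever following a gradient.
best = random_search(score=lambda x: -(x - 3) ** 2,
                     sample=lambda rng: rng.uniform(-10, 10))
print(best)  # a value near 3
```

With enough samples this stumbles onto good hypotheses, which is the sense in which “zero creativity” plus noise might substitute for the metaphor-and-dream machinery humans use to escape their ruts.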
From Lesswrong:
If you put me in a box and feed me chess positions and get probability distributions back out, then we would have - theoretically speaking - a system that produces Yudkowsky’s guess for Kasparov’s move in any chess position. We shall suppose (though it may be unlikely) that my prediction is well-calibrated, if not overwhelmingly discriminating.
Now suppose we turn “Yudkowsky’s prediction of Kasparov’s move” into an *actual chess opponent*, by having a computer *randomly* make moves at the exact probabilities I assigned. We’ll call this system RYK, which stands for “Randomized Yudkowsky-Kasparov”, though it should really be “Random Selection from Yudkowsky’s Probability Distribution over Kasparov’s Move.”
Will RYK be as good a player as Kasparov? Of course not. Sometimes the RYK system will randomly make dreadful moves which the real-life Kasparov would never make - start the game with P-KN4. I assign such moves a low probability, but sometimes the computer makes them anyway, by sheer random chance. The real Kasparov also sometimes makes moves that I assigned a low probability, but only when the move has a better rationale than I realized - the astonishing, unanticipated queen sacrifice.
Randomized Yudkowsky-Kasparov is definitely no smarter than Yudkowsky, because RYK draws on no more chess skill than I myself possess - I build all the probability distributions myself, using only my own abilities. Actually, RYK is a far worse player than Yudkowsky. I myself would make the best move I saw with my knowledge. RYK only occasionally makes the best move I saw - I won’t be very confident that Kasparov would make exactly the same move I would.
Now suppose that I myself play a game of chess against the RYK system.
RYK has the odd property that, on each and every turn, my probabilistic prediction for RYK’s move is exactly the same prediction I would make if I were playing against world champion Garry Kasparov.
Nonetheless, I can easily beat RYK, where the real Kasparov would crush me like a bug.
The creative unpredictability of intelligence is not like the *noisy* unpredictability of a random number generator. When I play against a smarter player, I can’t predict exactly where my opponent will move against me. But I can predict the end result of my smarter opponent’s moves, which is a win for the other player. When I see the randomized opponent make a move that I assigned a tiny probability, I chuckle and rub my hands, because I think the opponent has randomly made a dreadful move and now I can win. When a superior opponent surprises me by making a move to which I assigned a tiny probability, I groan because I think the other player saw something I didn’t, and now I’m about to be swept off the board. Even though it’s exactly the same probability distribution! I can be exactly as uncertain about the actions, and yet draw very different conclusions about the eventual outcome.
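The gap between RYK and the real Kasparov can be made concrete in code: a policy that samples moves from a predictor’s distribution is weaker than the predictor itself, who would simply play the move he rates highest. A toy sketch, with an invented distribution over invented moves standing in for “Yudkowsky’s prediction of Kasparov’s move” in one position:

```python
import random

# Hypothetical prediction for one position: a probability distribution
# over the next move (moves and probabilities invented for illustration).
prediction = {"Qxf7": 0.05,   # the astonishing, unanticipated sacrifice
              "Nf3":  0.60,   # the move the predictor himself would play
              "g4":   0.05,   # the P-KN4-style blunder
              "d4":   0.30}

def ryk_move(rng):
    """RYK: randomly select a move at exactly the assigned probabilities."""
    moves, probs = zip(*prediction.items())
    return rng.choices(moves, weights=probs)[0]

def predictor_move():
    """The predictor playing for himself: always the move he rates best."""
    return max(prediction, key=prediction.get)

rng = random.Random(0)
sampled = [ryk_move(rng) for _ in range(1000)]
# RYK plays the predictor's best move only ~60% of the time, and the
# dreadful g4 about 5% of the time -- same distribution, weaker player.
print(sampled.count("Nf3") / 1000, sampled.count("g4") / 1000)
```

Both opponents are described by the identical distribution, yet `ryk_move` sometimes hands you the blunder on purpose, while the argmax player never does; the difference Yudkowsky points at lives entirely in how the surprises correlate with outcomes, not in the uncertainty itself.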
From Qualia Computing:
We can see that nootropics seem to have their main action on the dimension of clarity. What is clarity? A good guess might be that clarity refers to the signal-to-noise ratio that a mind experiences while doing mental operations. The sort of mental activity you perform does not tell you how noisy the mind is while attempting it. Some drugs may somehow diminish your ability to filter and eliminate noise; others may enhance those processes. Stimulants, for the most part, activate the inhibitory control involved in thinking about the implications of premises. Thus clarity is experienced: strong and robust symbolic manipulation of implicit ontologies and concepts. This, although generally good, may incidentally lock the mind into a state with fixed ontologies and background assumptions. The mind can thus get trapped in conceptual prisons by getting lost in the implications of its ontologies. Taking a strong cholinergic nootropic in the morning may result in a whole day of the mind fixated on a given problem. So too much clarity can be a problem, too.
[…]
A particularly interesting cross-section of the data is the interaction between Spiritual euphoria and Clarity. Why? Because, on the one hand, Spiritual euphoria comes when one gains a certain sort of awareness and imaginative capability that enables the conception of entirely new ontologies. Hence, one’s models of personhood, morality, wellbeing and even logic can break down and be reconstructed during a psychedelic experience. On the other hand, conceptual clarity of the sort shown here happens when pre-existing ontologies are navigated and used efficiently, effortlessly and robustly. Hence, it is not hard to see why states of consciousness in which both conceptual clarity and conceptual revolutions are happening are very uncommon. What is more, no known drug induces states of consciousness with those two qualities at the same time. This is made evident by the empty upper right quadrant of this space: