Scott Alexander has become obsessed with the predictive processing theory of cognition. If you start clicking on those links in order, as I have, you will become obsessed too. It will change how you think of everything because it’s a new understanding of how you think of everything.
I should write a careful review of the relevant literature. But the literature is hard and careful review takes time. That post is coming up, but not today. Today, I’ll engage in reckless speculation based on a single chart I don’t quite understand, generalize universal truths from personal anecdotes, and throw out 2×2 matrices like they’re on sale. This may be my Ribbonfarmiest post yet.
To make sense of predictive processing, you should read Scott’s post or Andy Clark’s own review of the topic. I’m going to jump quickly into the weirdness.
This chart is from Clark (a different one), Watson, and Friston’s What is Mood? A Computational Perspective. Here’s the computational perspective:
In this view, positively valenced brain states are necessarily associated with increases in the precision of predictions about the (controllable) future – or, more simply, predictable consequences of motor or autonomic behaviour. Conversely, negative emotions correspond to a loss of prior precision and a sense of helplessness and uncertainty about the consequences of action.
What feels good is being able to predict things better, both the goings on of the world at large (prediction) and the results of your own action (control). That awesome feeling of having your mind blown by a great Putanumonit post? That’s your brain acquiring a top-level model that can explain (and thus predict) a lot of previously inscrutable inputs. It also frees up your neocortex to process other things, improving your prediction accuracy globally.
Since most of what your brain does (according to PP) is predict things, it’s naturally going to predict how predictable (and thus positive-feeling) the world is going to be. That’s mood. The distribution of that prediction is summarized by two statistics: expectation (how good you’ll feel on average), and precision (how certain you are of that). On the chart, expectation is on the the vertical axis and precision is on the horizontal.
Precision is important because it constrains how responsive the prediction is to evidence. Predictive processing says that perception at every level is a combination of top-down prediction and bottom-up inputs. When you have a very high and precise prior, inputs have less room to shift it.
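Here's a minimal sketch of that mechanism (my own toy illustration, not anything from the paper): treat the top-down prediction and the bottom-up input as two Gaussian estimates and combine them weighted by precision. All the numbers and names below are made up for illustration.

```python
def combine(prior_mean, prior_precision, input_mean, input_precision):
    """Precision-weighted combination of a top-down prediction and a bottom-up input.

    Whichever signal carries more precision pulls the percept harder;
    a very precise prior leaves the input little room to shift it.
    """
    total_precision = prior_precision + input_precision
    percept = (prior_precision * prior_mean + input_precision * input_mean) / total_precision
    return percept, total_precision

# Precise prior, noisy input: the percept barely moves off the prediction.
print(combine(prior_mean=1.0, prior_precision=10.0, input_mean=-1.0, input_precision=1.0))
# Vague prior, same input: now the input dominates.
print(combine(prior_mean=1.0, prior_precision=0.1, input_mean=-1.0, input_precision=1.0))
```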
Let’s try it! Can you find a capital city name on the following line?
gukknardicbgfanangbangkokkfrtuislonyotr
That probably wasn’t very hard, but it took you a little while. Now see how long it takes you to find the word “bangkok” on the next line:
icgratolnxhobangkooktianafgrusilopugsewa
If you felt the word “pop out” in the second case, that was your brain predicting with higher precision that you will see it. You found it quicker, even though it was misspelled with an extra “o”. Predicting that you will see it changed what you actually saw, the same way that your prediction that the word “the” doesn’t come twice in a row probably caused you to miss it in the paragraph that starts with “Since most”.
According to the Clark et al. paper, we can describe long-term moods by the mean and precision of their prediction of how nice (predictable/controllable) the world is:
And on the 2×2:
Here’s what Scott wrote:
Depression is a prediction of bad outcomes with high confidence. Mania is a prediction of good outcomes with high confidence. Anxiety (or “agitated depression”) is a prediction of bad outcomes with low confidence. There’s a blank space [at the top left] where it looks like there ought to be an extra emotion; maybe God will release it later as DLC.
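Just to pin down the geometry, here is Scott’s 2×2 restated as a toy function (my own restatement, not anything from the paper; the cutoff values are arbitrary placeholders):

```python
def mood_corner(expected_niceness, precision):
    """Map a mood prediction onto the 2x2: the sign of the expectation picks
    top vs. bottom, confidence in that expectation picks right (precise) vs.
    left (vague)."""
    if expected_niceness >= 0:
        return "mania (top right)" if precision >= 1.0 else "??? (top left)"
    return "depression (bottom right)" if precision >= 1.0 else "anxiety (bottom left)"
```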
I spent a long time staring at that chart and then I realized – the top left is my corner. It’s where I’ve lived all my life.
Top left means that I’m very optimistic in general, but am quick to update to negative conclusions on the particulars.
I have a strong sense that my life will turn out marvelous, but I don’t have a single-minded mission or plan for making it so. I’m quick to jump on opportunities and quick to let go. The high prior on predictability makes me think I’ll be awesome at every skill I haven’t tried yet (like the finger drum I bought last week). The low precision and quick updating make me think that I’m quite mediocre at everything that I actually have tried: comedy, French, basketball, writing. I’m a double immigrant, and I live in New York City because my prior is that I’ll make it anywhere.
Top left is the opposite corner from depression. Depressed people have low motivation, low confidence, move slowly, and seek out dark and quiet surroundings. I’m motivated (if unfocused) and confident; I love sports and play them with no concern for physical safety; I prefer bright colors, loud music, and spicy food.
A commenter on Scott’s post wrote: “If there’s one thing I can say for sure about depression, it’s that no matter how much you expect disappointment, life has a way of always surprising you with it.” I very rarely feel disappointed. Even when I suffer setbacks like a breakup or getting fired from a job, it takes me about 3–4 days to revert to optimism and conclude that what happened was surely for the best.
Many people responded to my Antiantinatalism post by saying: “Jacob, you don’t get it – life just sucks.” That’s true, I really don’t get it. I didn’t prove that life doesn’t suck, it’s just self-evident to me in my top left mood.
In case you were wondering – the top left is a fun place to be. This could be purely due to the luck of my circumstances – being a lackadaisical optimist may have gotten one killed in the ancestral savanna or in modern-day Afghanistan. But in the developed world, life is rapidly changing but generally agreeable, the costs of failure are low, and flexibility is rewarded. The top left corner is where it’s at.
It’s interesting to see how scholars of predictive processing themselves are situated in different mood corners, and how it affects their view of the theory. Philosopher Andy Clark, the author of Surfing Uncertainty, is by all accounts a colorful, optimistic character who’s comfortable with chaos and uncertainty.
But here’s his colleague Jakob Hohwy, writing in Andy Clark and His Critics, with the view from the bottom left:
A lot of us feel that we are not very much in tune with the world, the world hits us and we don’t know what to do with the sensory input we get. We are constantly second-guessing ourselves, withdrawing, and trying to figure out what is happening.
Something that is very familiar to a lot of people, certainly myself, is social anxiety. We are trying to infer hidden causes—other people’s thoughts—from their behavior, but they are hidden inside other people’s skulls, so the inference is very hard. A lot of us are constantly wondering, Did I offend that person? Do they like me? What are they thinking? Did I understand their intentions?
I think a lot about mental illness. We forget what a high per cent of us have some mental illness or other, and they’re all characterized by the internal model losing its robustness. One per cent of us have schizophrenia, ten per cent depression, and then there is autism. The server crashes more often than we think.
The common view of the brain among people coming from evolutionary psychology is that it’s as much of a mess as any other evolved system: an agglomeration of ad-hoc modules kludged together by time and necessity, each executing its own adaptations with no regard for the overall design of the system. The computational mood paper explicitly argues against this view:
Traditional hypotheses propose that the neurobiological underpinnings of a variety of disorders arise from structural or functional abnormalities in the brain consequent on a combination of environmental stress and genetic vulnerabilities […]
The brain, like other biological systems, seeks to maintain its physiological (and psychological) state in the face of a constantly changing internal and external environment and must therefore minimise entropy over external states (where entropy is a mathematical measure of uncertainty or expected surprise). Directly computing surprise is intractable, but by appealing to variational principles, we can calculate an upper boundary on surprise, namely free energy, which systems will (or will appear to) minimise (Friston et al. 2006).
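For readers who want the single formula hiding behind that paragraph, here is the standard variational-inference identity it relies on (my notation, not the paper’s): for an observation $o$, hidden states $s$, and any approximate posterior $q(s)$,

$$
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(s, o)\big] = -\ln p(o) + D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;\geq\; -\ln p(o),
$$

so minimising free energy $F$ squeezes down an upper bound on surprise $-\ln p(o)$, because the KL divergence is never negative.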
You know the classic stereotype of a physicist encountering a new field? This is Karl Friston (physics undergrad) explaining neurobiology by using the word “entropy” a lot and condensing everything into a single model (free energy).
You know who else studied physics in undergrad? Me! So let’s see how many disparate phenomena I can explain by saying “prediction” a lot and condensing everything into a single model of mood corner.
Here’s a classic top-left-mood story: after I got my BSc in physics I felt great about it and asked my dad, a professor of physics, where I should apply for physics grad school. My dad told me that I’m not that good at physics and should look to do something else with my life.
Here’s how I think different quadrants would have responded:
I immediately agreed with my dad, applied for jobs in finance (which I knew nothing about), and started studying for the GMAT.
Let’s see, where does each quadrant invest their money?
The bottom right knows that any investment is doomed to failure, and keeps their cash hidden. Bottom left leaves a small window for optimism but doesn’t trust the market too much and invests in bonds. Top right is confident they’ll be millionaires next year through their own ingenuity. Top left buys a little bit of everything and hopes to get rich slowly.
What sort of politics appeals to each quadrant?
How’s that for a political compass? The top left is classical liberal: humans will work things out if left to their own devices, so let free markets and a free society take us there. The bottom left is more skeptical and wants redistribution, safety nets, and a centralized government to mitigate the worst-case scenarios. The bottom right thinks that the worst-case scenario is the likely one, and that you should stock up on food and ammo.
The top right believes in a glorious future for humanity and has a clear ideology plus 27-step plan/manifesto to get us there. Anarcho-capitalism, neocameralism, accelerationist singularitarianism, any other “ism” with at least ten letters in front of it and an associated colored pill.
Let’s take it a step further up. Instead of asking how predictable and controllable the world is to you personally, ask how much sense the world makes in principle. Is there objective truth? Is there meaning to anything? Is goodness real?
The bottom right answers the above questions with a “Hell no, are you kidding?” The top right has a single, positive answer for all three: truth, meaning, and goodness are X, where X is “our lord and savior Jesus Christ”, “the proletarian revolution”, or “whatever my cult leader says”. David Chapman, whom I would place at the far left and middle of the chart, calls the top right view eternalism, in the sense of a single ideology claiming to give an answer that’s eternally true (across time and context).
Postmodernism arose in the last century as a reaction to the eternalist ideologies that dominated pre-modern times. It sees people gravitating towards identitarian tribes, constructing self-serving narratives and ideologies to gain power over others. In this view, anyone talking about objective truth or goodness is simply bullshitting.
The top left is rationality. Rationalists believe that truth exists, but that our biased monkey brains can only make baby steps in approaching it. Rationalists care about goodness, but freely admit that our moral intuitions are a mishmash of evolutionary adaptations, and our moral stories are often a cover for the selfish elephants in our brains. Truth and goodness exist, but we’re not very confident we know what they are.
A lot of Eliezer’s early writing in the Sequences posed rationality as opposed to religion and other dogmas. A lot of Scott’s recent writing on SlateStarCodex pits it against postmodernism and identitarianism.
I think that when a rationalist loses confidence in the tools of the craft, they see the rest of LW as naive optimists and gravitate towards post-rationality (on the Chapman left edge). When they get too confident in their methods and forget their skepticism, they end up in the Peterson zone up top.
You may have noticed a coincidence: I live the top left mood, and I also happen to think that the top left stance on each issue is reasonable and correct. Strange, isn’t it?
There are two possible explanations for it. The first one is that I shoehorned whatever I happen to believe into the top left corner so that I could claim it for myself. Perhaps career opportunism, free markets, trying out sports, rationality, and stock index funds don’t really have a lot in common, and this is all clickbait.
But the second explanation is a lot more interesting.
According to Clark et al., my prediction (expectation and precision) about the predictability of the world is the cornerstone of how my brain works, encoded in the very structure of my neurons. Everything I feel and believe is downstream from that.
I wasn’t born believing in index funds and overcoming bias. I learned those from my perceived experience, but both learning and perception are shaped by my fundamental mood. If I had been born confident in my pessimism, I would see stock funds as a crash waiting to happen, and the pursuit of rationality as a naive delusion.
So where does that leave me? Deep down, I still believe in index funds, and in rationality, and all the rest. But I also can’t dismiss the anxiety that all my cherished beliefs are contingent on mood affiliation and the chance shape of my brain. That’s a very top-left place to end up in.
Very interesting. Top left feels related to a mood that people sometimes try to cultivate through meditation or relational practices, and use words like “surrender” to describe. It’s very curious that there doesn’t seem to be a short English word for it.
In the SSC comments, people tried to describe top left as everything from “hope” to “contentment” to “cautious optimism”. I don’t think any of those fit, and I don’t think any other word will fit either. So I just called it “top left” and tried to describe what it looks and feels like.
It’s hard for me to articulate how completely I identify myself as possessing the traits you classify with the top-left, but I am also extremely suspicious that this is convenient for me and confirms a lot of pleasant priors. In fact, there’s a contradiction so deep there that I also even did a couple of mental loops: “Well, this skepticism just confirms that the model is probably accurate to at least some degree. But no, that skepticism has no distinguishing power to determine the truth of the model, since it can’t identify the difference between a world where the model is true and one where I just want it to be true.” Etc.
I also match the traits in the top left very strongly. I’m very skeptical that this isn’t all a coincidence to some degree, though. I’m under the impression that there is a sizable number of depressed individuals in the rationalist community, which, being the opposite corner from rationalism, should at least somewhat debunk the theory.
According to the surveys, about 50% of rationalists are depressed.
I just wanted to chime in and say that I don’t feel this model describes me. Since everyone seems to be validating this model, I wanted to provide a dissenting opinion. I identify with multiple different categories. I wasn’t in the bottom right, but if you gave more examples I probably would end up there eventually. For instance, I talk to people a suboptimal amount because (maybe) I don’t feel the potential for positive outcomes. I wasn’t sure how to say that; it’s definitely not social anxiety, I just don’t feel motivated (which is why I put it in the bottom right category).
If we think about this setup as each person having an innate starting position in the 2×2 and then later having their position moved by mental illness or large bad things happening to them during life, then this model would match up with what I’ve seen of rationalists: very many of them live in the top left while also being pulled in the other directions from time to time.
Especially: high confidence that things are knowable even if no one understands them yet, high confidence that unachieved things are achievable, high confidence that some things that could be permanently illegible are still worth doing and can give large benefits (ex: meditation), optimism about Elon Musk, belief that AI Safety is worth pursuing despite any potential pessimism.
The top left also seems to correlate with openness to experience (as it should). Low confidence that things will definitely go well, but high expectation that the effort is worth it or shouldn’t particularly be avoided.
Just some random commenter saying that he is also in the top-left quadrant and that it is a pretty good place to be in our modern world. Totally agree with most of your points.
I have to apologise for being so late to the party, but I’m having a hard time understanding how
“humans will work things out if left to their own devices, so let free markets and a free society take us there.”
isn’t an anarcho-capitalist sentiment. Perhaps you object to intermediating markets and society with violence primarily because you believe it is inefficient, while an-caps primarily object to the same on grounds that it is immoral?