The People of the Book
A long long time ago (by internet standards), in a faraway land (I dunno, probably California), a bearded man from a Jewish family sat down to write a book. And in that book he set out to teach people how to think well, so that humanity may at last achieve wisdom and salvation. This is apparently something that bearded men from Jewish families are prone to try every few centuries or so. And the art of thinking well he wrote about was known as Rationality.
And around that book, which was then known as “The Rationality Sequences”, gathered wise women and men who ~~accepted everything unquestioningly~~ nitpicked every single sentence and equation and even the entire goal of the book. And yet basically everyone who read The Sequences agreed that they are an excellent guide to reasoning well, that everything in them is so simple and true that it all seems completely obvious in hindsight. Of course, this is exactly what the book warned them would happen. And this group of people who read The Sequences came to be known as the Rationalist Community. Although, being proper rationalists, the group kept arguing for years over whether that was a good name or not.
And lo, other people saw them reading The Sequences and having a good time. And the others spake thus to the rationalists: “LOL, you’re a bunch of nerds in a dumbass cult.” And the rationalists patiently explained that no, the entire art was about thinking independently. And that as he was writing The Sequences, Eliezer anticipated that they would be so fun to read that people would forget to be skeptical, and dedicated an entire huge section of the book to avoiding groupthink and cultiness. And that even though every two rationalists agree on 95% of the book’s conclusions, they spend all their time arguing over the 5% they disagree on, lest anyone accuse them of not being skeptical enough.
On the other hand, the rationalists confirmed that yes, they were a bunch of nerds.
And the others didn’t relent and spake thus to the rationalists: “So what are you nerds doing with your fancy rationality other than squabbling about it on an internet forum?” And the rationalists didn’t answer because they and their friends were too busy helping the needy, and spreading the art, and launching a bunch of start-ups, and advancing science, and saving humanity from extinction.
And yet the others persisted and spake thus to the rationalists: “OMG you guys, that book is like so 2007. Get with the program, the hip place to be now is post-rationality.” And the rationalists asked what errors there were in the book that it should be discarded in favor of something new. But the answer is that there isn’t anything wrong with The Sequences: they successfully anticipated, nine years ago, basically every challenge thrown at them since, and all of you should go and read them right now. But these very smug “postrationalists” did contribute to an annoying aura of unfashionability that formed around LessWrong and keeps new people from benefitting from it.
This is a good place to stop reading this post and start reading The Sequences – they’re pretty long (vita brevis ars longa and all that) and are also better written. In case you haven’t noticed yet, all the links in orange are to LessWrong and the rationality sequences, to give you a taste of the massive breadth of ideas they cover. If you don’t like clicking on links for some reason, I’ll give a short overview of how I see rationality and vent a bit about “postrationalists”.
From Huitzilopochtli to Rationality
Humanity went from thinking that the sun was a hummingbird-shaped warrior god requiring human sacrifice to using solar radiation pressure to power interplanetary spacecraft. We credit most impressive achievements like that to science, and some to Al Gore. Science started working when it noticed a couple of things:
- The hummingbird god sounds super cool, but coolness is a bad indicator of whether something is true. Instead, we can find out what is true by looking at it. Early scientists believed that “looking at it” meant “looking at the sun directly”; that didn’t work out well. Later scientists expanded that idea a bit to mean learning about reality from observing evidence.
- The best way to organize and express scientifically what we know about reality seems to be by putting numbers on it. Reality is very complex and our information is very limited, so we need to use numbers that represent incomplete knowledge. The use of numbers to talk about incomplete knowledge is described by probability theory.
It also turns out that if you ask probability theory how you should learn about reality from observing evidence, it will tell you that while the actual implementation may differ wildly from case to case, at the core of it you should be doing something that looks like Bayes’ theorem. Since in popular culture the label “rational” is usually applied to utterly irrational strawman characters, the term “Bayesian” is sometimes used in the rationality community instead.
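At its core, the Bayes-theorem bookkeeping is a one-line calculation: weigh how likely the evidence is under your hypothesis against how likely it is overall. Here is a minimal sketch in Python, using the illustrative numbers from the classic mammography puzzle (1% base rate, 80% test sensitivity, 9.6% false-positive rate); the function name is my own, not anyone's official API:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    joint_h = p_evidence_given_h * prior
    joint_not_h = p_evidence_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# 1% of patients have cancer; the test catches 80% of real cases
# but also flags 9.6% of healthy patients.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_h=0.80,
                         p_evidence_given_not_h=0.096)
print(f"P(cancer | positive test) = {posterior:.1%}")  # ~7.8%, not 80%
```

The punchline is the base-rate neglect the next section complains about: most people (doctors included) guess something near 80%, because the 1% prior gets ignored.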
Hey, look! Someone wrote a great book called Probability Theory: The Logic of Science.
“Figuring out what reality is like” is something that scientists get paid for, but non-scientists occasionally find uncovering the truth useful as well. Perhaps you want to know how long a project will take you to finish, how likely a roulette wheel is to come up red, whether you have breast cancer or not. That seems simple enough, but all the links you didn’t click on in the previous sentence demonstrate that humans systematically suck at answering these simple questions, and many many many more.
Why is it hard for our brains to simply reject things that are false and believe things that are true? It unfortunately turns out that instead of pristine engines of perfect reasoning inside our heads we are stuck with kludgy, squishy, meat-computer monkey brains. And monkey brains will believe an idea for many reasons:
- The idea seems nice and pleasant to believe in.
- It is politically expedient to believe in the idea.
- The idea props up our self esteem.
- People around us say the idea out loud and repeat it.
- The idea is something we try to convince others of, and we lie better when we believe in the lie ourselves.
- The idea fits in with other wrong ideas we lug around in our brains.
- An authoritative-looking person told us it’s true and we never bothered to check.
- On very rare occasions, we believe an idea because we have seen evidence that shows it to be true.
You may have heard of another great book, this one about the ways our brains are predictably wrong about stuff all the time: it’s called Thinking, Fast and Slow.
The bad news is that trying to teach your kludgy monkey brain to overcome the fact that it’s a kludgy monkey brain is confusing, difficult and unpleasant. Intelligence, expertise and general awareness of biases don’t help much in this pursuit, and may even actively derail you. Not only is becoming fully rational quite impossible, even starting to make progress is a challenge: it requires overcoming the huge bias blindspot that prevents you from even accepting that you may be irrational on occasion. Your brain will continue to insist to you that it’s perfectly reasonable, even as it’s holding wrong, harmful and even contradictory beliefs.
The good news is that you aren’t alone: there are rationalists in the Bronx and in Bermuda, transgender mathematician rationalists and religious lawyer rationalists, polyamorous communists and conservative asexual rationalists. There are meetups on 5 continents and in three different cities in the Bay Area. Most importantly, The Sequences gave the community a common language to talk about rationality. Just as the tribe with no words for counting numbers can’t tell quantities apart, learning rationality would be almost impossible without this vocabulary.
I don’t know how I could explain why donating to a basic income charity shouldn’t be evaluated in the framework of buying happiness if I hadn’t heard of fuzzies and utilons. In fact, I probably wouldn’t have even understood it myself. On the other hand, I spent extra time looking for possible negative consequences of basic income because I felt that I was in danger of falling into a happy death spiral around a cool idea. I found myself supporting basic income more after reading arguments that BIG reduces employment, because all these arguments are stupid. I had to remind myself that reversed stupidity isn’t intelligence: a bad argument against BIG doesn’t make BIG a better policy. I read a sophisticated argument with many steps explaining how BIG would reduce taxes in the US if it were implemented, and dismissed it as well. It seemed like a clear case of writing the bottom line first to support an apparently one-sided policy, and the multitude of necessary steps was clearly vulnerable to the conjunction fallacy.
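The conjunction-fallacy worry about many-step arguments is just arithmetic: a chain of claims can be no more probable than the conjunction of its steps, and the probabilities of roughly independent steps multiply. A toy sketch (the step count and the 85% per-step figure are made up purely for illustration):

```python
def conjunction_probability(step_probs):
    """Probability that every step in a chain holds,
    assuming the steps are independent."""
    total = 1.0
    for p in step_probs:
        total *= p
    return total

# Ten steps, each individually quite plausible on its own...
p_all = conjunction_probability([0.85] * 10)
print(f"P(every step holds) = {p_all:.1%}")  # ...yet the whole chain is ~20%
```

This is why a long, sophisticated-sounding argument should make you more suspicious, not less: each extra necessary step is another factor shrinking the probability of the conclusion.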
I don’t even remember how it was possible to think about complicated things like economic policy without rationality training. I probably wasn’t thinking much at all, just falling in step with the correct blue/green position. If you asked me now whether I think BIG will increase the quality of life for Americans compared to the current welfare system I would say “75% yes, subject to appropriate updates after the research results are in”. Can you imagine a policy maker giving an answer like that? And yet any answer on such a complex topic that’s not in the form of a probability between 0 and 1 strikes me now as utter insanity. Browsing through my Facebook history, I’m embarrassed by 90% of the “political” views I held before discovering rationality. Not because they’re all wrong, but because I held and proclaimed them for embarrassing reasons.
I hope that all the examples I have given so far seem like simple common sense. Why do we need to go to all this trouble of learning about Bayesian probability and decision heuristics and the rest? I’ll let Scott explain:
I think Bayesianism is a genuine epistemology and that the only reason this isn’t obvious is that it’s a really good epistemology, so good that it’s hard to remember that other people don’t have it.
Probability theory in general, and Bayesianism in particular, provide a coherent philosophical foundation for not being an idiot.
Now in general, people don’t need coherent philosophical foundations for anything they do. They don’t need grammar to speak a language, they don’t need classical physics to hit a baseball, and they don’t need probability theory to make good decisions. This is why I find all the “But probability theory isn’t that useful in everyday life!” complaining so vacuous.
“Everyday life” means “inside your comfort zone”. You don’t need theory inside your comfort zone, because you already navigate it effortlessly. But sometimes you find that the inside of your comfort zone isn’t so comfortable after all (my go-to grammatical example is answering the phone “Scott? Yes, this is him.”) Other times you want to leave your comfort zone, by for example speaking a foreign language or creating a conlang.
When David says that [inferring the existence/nonexistence of God from evidence] doesn’t count because it’s an edge case, I respond that it’s exactly the sort of thing that should count because it’s people trying to actually think about an issue outside their comfort zone which they can’t handle on intuition alone. It turns out when most people try this they fail miserably. If you are the sort of person who likes to deal with complicated philosophical problems outside the comfortable area where you can rely on instinct – and politics, religion, philosophy, and charity all fall in that area – then it’s really nice to have an epistemology that doesn’t suck.
I’ll go even further than that: people make dumb, costly mistakes inside their apparent comfort zone all the time. I see people stuck in jobs they hate because their brain is too lazy to snap out of a false dilemma. In these jobs they work on projects that fall to planning fallacies and sunk cost bias, if they manage at all to overcome procrastination and akrasia and do anything. They then spend the money they earned on purchases that make them unhappy. They get into brainless arguments, fail to explain or understand ideas, repeat empty words that mean nothing as if they were deep wisdom, and find solace in ignorance.
If you’re cool with all of that then you probably shouldn’t waste your time with this book.
Post-Rationality: Almost as Good as Rationality
Ok, so on the plus side, Rationality is an epistemology and a community dedicated to thinking better and achieving goals strategically. On the minus side, it’s an aspiration and not a state that’s actually attainable. There’s a reason why the community hub is lesswrong.com (currently being rebooted to fit the evolving community) and not perfectwisdominfoureasysteps.com (besides the fact that shorter domain names are better).
It makes sense that some people will embrace Rationality and study it. It makes sense that most people will say “Nah, I’m cool” and stick with their old philosophies – that’s the human default behavior. What I’m confused by is people who are part of the broader rationality community who say “Been there, done that. I figured out this rationality thing and moved on to something better now.” Let’s see what they don’t like about Rationality.
In a post called “Postrationality, a Table of Contents”, Yearly Cider writes:
Rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.
For instance, rationalists really like Kahneman’s System 1/System 2 model of the mind. In this model, System 1 is basically intuition, and System 2 is basically analytical reasoning. Furthermore, System 1 is fast, while System 2 is slow. I’ll describe this model in more detail in the next post, but basically, rationalists tend to see System 1 as a necessary evil: it’s inaccurate and biased, but it’s fast, and if you want to get all your reasoning done in time, you’ll just have to use the fast but crappy system. But for really important decisions, you should always use System 2. Actually, you should try to write out your probabilities explicitly and use those in your calculations; that is the best strategy for decision-making.
YC doesn’t cite specific examples of rationalists doing this, so we’ll look to the common base of Rationality, The Sequences, for an answer. Fortunately, The Sequences dispelled the above criticism seven years before it was written:
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren’t always true, and perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality”. Both systems can serve the goal of truth, or defeat it, according to how they are used.
RibbonFarm seems to have been labeled “postrationalist” based on this guest post by Sarah Perry. The only criticism of Rationality in it seems to be that Rationality rejects the value of ritual. For what it’s worth, there are both made-up rituals in The Sequences and actual rituals for the community.
Warg Franklin seems to argue that Rationality is near enough to impossible as to be a waste of time, and that common sense and tradition are better guidelines:
Some rationalists have a reductionistic and mechanistic theory of mind. They see the mind made up of a patchwork of domain-specific biased heuristic algorithms which can be individually outsmarted and hacked for “debiasing”. While the mind is ultimately a reducible machine, it is complex, poorly understood, very clever, and designed to work as a purposeful whole. You generally can’t outsmart your mind. It is therefore better to treat the mind as a holistic and teleological black box system, and deal with it on its own terms; experience, intuitively understandable evidence, good ideas and arguments, and actual incentives. The mind is already well-tuned by evolution, and can only become wiser with lots of specific knowledge and experience, rather than more rational with a few high-impact cognitive hacks.
We can’t really replace common sense and intuition as the basis of reasoning. Attempts to virtualize more “correct” principles of reasoning from math and cognitive science in explicit deliberative reasoning are unrealistic folly. We can learn useful metaphors from theory, and use mathematical tools, but theory cannot be the ultimate foundation of our cognition; practical reasoning is either based on reasonable common sense, or bogus.
There are good points here, but not nearly enough to condemn the pursuit of Rationality as useless. Yes, we know that Rationality is very hard, but there’s a guideline to doing impossible things as well. We know that the brain has been finely tuned by ages of evolution, but evolution is neither maximally-efficient nor aligned with the things we care about as humans.
Finally, rationality aims to extend common sense and not contradict it, except in the face of some problems against which common sense and intuition are powerless. While writing The Sequences Eliezer was (and still is) trying to develop mathematical frameworks for a superintelligent AI that will also fulfill human values. That’s pretty hard, since human values are a minuscule region in the universe of possible goals an AI may pursue. As a species, we may only have one shot to solve this problem, and without extreme rationality it’s completely intractable.
Eliezer may not be sure that rationality is learnable by people who don’t dedicate their lives to a world-saving mission, but it doesn’t feel unattainable to me. I think that rationality in the service of making better soap buying decisions is still better than stupidity. With that said, I’ll probably donate more money to MIRI this year than I’ll spend on soap; rationality does have a tendency to light a world-saving spark in people.
And then there’s David Chapman, deriding “pop Bayesianism”:
In pop Bayesianism, the Rule is evidently not arithmetic; it is the sacred symbol of Rationality… Occasions in which you can actually apply the formula are rare. Instead, it’s a sort of holy metaphor, or religious talisman. You bow down to it to show your respect for Rationality and membership in the Bayesian religion.
Maybe Bayesianism is like acupuncture. It has little practical value, and its elaborate theoretical framework is nonsense; but it’s mostly harmless, and it makes people feel better about themselves, so it’s good on balance.
This seems to be the case for Bayesianism also. Leaders pepper their writing with allusions to the obscure metaphysics and math, which are only vaguely related to their actual conduct of reasoning.
It is widely noted that Bayesianism operates as a quasi-religious cult. This is not just my personal hobby-horse.
At this point Chapman notices this quote by Eliezer:
[Eliezer]: Let’s get it out of our systems: Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes Bayes… The sacred syllable is meaningless, except insofar as it tells someone to apply math.
And apparently fails at reading comprehension:
[Chapman]: Right. So why doesn’t he get it out his system? Here he’s the one calling it a “sacred syllable.” Apparently he’s aware of the quasi-religious nature of what he’s doing. What’s up with that?
Who are these misguided Bayesian zealots? Did the people who accuse rationalists of being a quasi-religious cult talk to a single person who has read The Sequences? Does Chapman really think that when Eliezer says “don’t be a cult, just do math” he really means “be a cult”? We’ll never know, because when Scott rebutted Chapman’s straw version of Bayesianism, Chapman suddenly turned and wrote:
It’s because I find so much right with LessWrong, and that I admire its aims so much, that I’m so frustrated with its limitations and (seeming) errors. I’m afraid my careless expressions of frustration may sometimes offend. They may also baffle, because I haven’t actually offered a substantive critique (or even decided whether to do so). I apologize for both.
It’s nice to get an apology, but it would be much nicer if Chapman deleted these quotes from his blog. The very best rationalists are people like Scott, Kaj Sotala and Vaniver who replied on Chapman’s blog with thoughtful, polite discussions of math and epistemology. The only fault I can find with that is that they should have said “Hey David, how about you stop calling Bayesians a religious cult and then we can start talking politely about math and epistemology?”
And no matter how much David Chapman will protest that of course he didn’t mean that Scott, Kaj and Vaniver are cultists, the harm is done: “cult” is the first suggestion when you google “LessWrong”. People mock LessWrong for being obsessed with obscure nerd topics like AI safety and cryonics. Now a super-popular mainstream blogger writes thousands of words about Rationality enlightenment, AI safety and cryonics, while fastidiously avoiding a single mention of or link to LessWrong. Journalists who never read a page of The Sequences write pieces making fun of the community; one of these articles is how I discovered the site myself!
Posts like David’s disparage and smear the entire community, and having actual familiarity with the people and The Sequences he should know better. The “Bayesians are a cult” meme contributed to most LessWrongers moving away from the site into their own outlets on the “Rationalist Diaspora”, or dropping out of the discourse altogether. This robs everyone of the common base of insights and language that The Sequences provide and that allow us to share ideas and learn from each other.
Most damagingly, this slander pushes new people and casual readers away from LessWrong, preventing them from discovering a life-changing resource. It’s the reason why I have to spend 1,500 words discussing “postrationalists”, to keep curious readers from googling “LessWrong” and getting a hideously distorted impression.
Rationality helped me find an amazing girlfriend (more on that story later). Rationality gave me the intuition, the analysis skills and the confidence not to take even scientists at their word. Rationality lets me keep my cool and think of bell curves when I’m caught in a culture war. Rationality gave me the wisdom to change the things I can and accept the things I can’t. Rationality inspired me to write the only poem I’ve ever written.
And now you can write bad poems too. Welcome to the Rationality society, may you be less wrong tomorrow than you were today.