The following is an inexact transcript of a conversation that happened exactly like this. The scene: a wine bar in Manhattan, on my second (and final) date with a Jewish girl. We’ll call her “Jewish Girl on Date”, or J-GoD for short.
J-GoD: You’ve changed your OKCupid religion status from “Jewish” to “atheist” since last week. What happened this weekend that proved to you that God doesn’t exist?
Jacob’s inner voice: Actually, I switched it to optimize my dating profile and avoid Jewish girls that give me grief about not being as Jewish as their moms expect me to be.
Jacob’s mouth: I don’t think that anything can really prove that God doesn’t exist. That’s partly because the definition of God will usually shift to accommodate any evidence.
J-GoD: So why do you call yourself an atheist if you can’t prove that God doesn’t exist?
Jacob: I give the existence of any specific god a low enough probability that I functionally behave as if I was sure no god existed.
Jacob: I give about a 1 in 10 chance for the existence of any popularly conceived supernatural beings, including humanity’s descendants simulating our reality. For some specific religion’s god, like the Old Testament Jewish God (we’ll call him J-God for short), something like 1 in 1,000,000.
J-GoD: How can you put a number on the existence of J-God?
Jacob: Umm, I have this blog about how you can put a number on almost anything… Anyway, probability numbers are how I represent how confident I am that something is true or not.
J-GoD: How the hell can you be exactly one in a million confident that God exists?
Jacob: I wish I could say that I calculated the prior of the Kolmogorov complexity implied by the description of J-God and updated on all available evidence. In reality, I just picked a really low number that matches how confident I allow myself to be on complex metaphysical questions.
J-GoD: So you’re just making up a number to say that you think that God doesn’t exist?
Jacob: No, no, the exact number is important. For example, if I were walking down the street and suddenly saw a bush burst into flames, and the bush burned but wasn’t consumed, and I heard a voice from the sky saying: “I am the God of your father, the God of Abraham, the God of Isaac, and the God of Jacob“, I would definitely update my belief.
It’s possible that I could see a divine bush in a godless world as the result of hallucinogenic drugs or a convoluted prank involving VR, but I’m much more likely to see it in a universe in which J-God exists. In J-God’s universe pranksters and drugs still exist, but so does a divinity that is known for using burning bushes to impress people. Let’s say that a burning bush is one hundred times more likely in a J-God universe. So, I would update my belief in J-God by a factor of one hundred, from 1 in 1,000,000 to 1 in 10,000. That’s a high enough probability of J-God watching over me that I would at least make sure to never again boil a goat in its mother’s milk.
A second miracle would bring my posterior belief in J-God from 1/10,000 to 1/100, far above any other single supernatural being and high enough to give some real bite to Pascal’s wager. At three independently observed miracles, I will switch to living a life of humble devotion to J-God.
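For readers who want to check the arithmetic, here is a minimal sketch (my own illustration, not anything from the post itself) of the odds-form updating described above: start from the 1-in-1,000,000 prior and apply a Bayes factor of 100 per independent miracle.

```python
def update_odds(prior_prob, bayes_factor, n_updates):
    """Convert a probability to odds, apply the Bayes factor
    n_updates times, and convert back to a probability."""
    odds = prior_prob / (1 - prior_prob)
    odds *= bayes_factor ** n_updates
    return odds / (1 + odds)

prior = 1 / 1_000_000  # prior belief in J-God
for miracles in range(4):
    p = update_odds(prior, 100, miracles)
    print(f"{miracles} miracle(s): P(J-God) = {p:.6f}")
```

Done strictly in odds form, the second miracle lands at roughly 1/101 rather than exactly 1/100, and the third at roughly 50%; the round numbers in the dialogue are approximations.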
J-GoD: You think that people should only believe in a God after they see him perform exactly three miracles? That’s a perverse notion of belief! Belief in God has nothing to do with seeing miracles!
Jacob: Actually, the great medieval rationalist rabbi Moses Maimonides discusses in great detail the question of miracle-based belief in God. In Guide for the Perplexed, chapter LXIII he says:
You know how widespread were in those days the opinions of the Sabeans: all men, except a few individuals, were idolaters, that is to say, they believed in spirits, in man’s power to direct the influences of the heavenly bodies, and in the effect of talismans. Any one who in those days laid claim to authority, based it either [on reasoning and proof] or that some spiritual power was conferred upon him by a star, by an angel, or by a similar agency.
He basically says that for people who see magic in every charlatan and miracles every other Tuesday, a miracle should not constitute strong evidence. This is sound Bayesian reasoning. However, we are no longer “in those days”. As an educated rationalist in 2016, I don’t believe that supernatural wonders are common at all. Seeing a true miracle with my own eyes would provide solid grounds for changing my belief.
In Mishne Torah, Maimonides agrees that the performance of miracles should at least make you consider that you’re dealing with a genuine, Twitter-verified, message from the divine, i.e. a prophet:
Just as we are commanded to render a [legal] judgment based on the testimony of two witnesses, even though we do not know if they are testifying truthfully or falsely, similarly, it is a mitzvah to listen to this prophet even though we do not know whether the wonder is true or performed by magic or sorcery.
By “magic and sorcery” Maimonides means illusions and tricks, as opposed to true divine intervention. For example, hallucinogenic drugs and VR count as “magic and sorcery”. Now of course, Maimonides knows that 0 and 1 aren’t probabilities, so Bayesian updating on evidence cannot bring a man to absolute and total belief. As long as drugs or VR are a possibility, they cannot be completely discounted as the source of the observed miracle.
From Mishne Torah again:
The Jews did not believe in Moses, our teacher, because of the wonders that he performed. Whenever anyone’s belief is based on wonders, the commitment of his heart has shortcomings, because it is possible to perform a wonder through magic or sorcery.
Here’s a great (atheist) Jew explaining how a great (deeply religious) Jew proved that two smart Jews shouldn’t disagree on their picture of reality. Maimonides and I don’t have the shared knowledge required to reach consensus, but we are in complete agreement regarding the proper epistemology of miracle-based belief in J-God.
We differ in our moral value judgment on less-than-absolute belief: I believe that it is a virtue, Maimonides that it is a shortcoming. However, I am a moral anti-realist: I believe that moral value judgments are a fact about (my and Maimonides’) minds, not about external reality. Thus, our moral disagreement isn’t cause for concern for me that I am irrational on the subject.
J-GoD: What kind of atheist are you that you analyze in minute detail the biblical commentary of medieval rabbis?
Jacob: What kind of Jew would I be if I didn’t?
39 thoughts on “A Conversation with GoD”
Every day, we modern humans have woken up with this figment called “god” implanted in our brains as a concept, yet on no day, for any person ever alive, has there been verifiable evidence that even hints at the existence of that concept.
The only figure that works is 0%.
a) 0% is the only figure that would never work even if “no evidence” was true, and b) by the Bayesian definition of evidence as explained above, even the mere existence of Christianity is slight evidence for Christianity being true; Christianity is slightly more likely to exist in a universe where Jesus Christ is Lord than in a godless universe.
That’s patent nonsense. All of us have to operate in a social reality with some levels of doubt about any and all conditions, but we must put aside unsubstantiated doubt and act according to our basic knowledge.
The “mere existence” of fabricated, unreal beliefs is not in any way evidence for them being even “slightly” true.
Who put such absurd notions in your brain?
notabilia, what evidence would you expect to see if it was true that 2,000 years ago in Judea people saw Jesus perform miracles and rise from the dead? What is likelier to happen in that world than in a world in which Jesus didn’t exist/do anything miraculous?
It’s not really controversial to say that from a pure Bayesian standpoint, books about Jesus performing miracles and apostles promoting those books are, however slightly, more likely to exist in a world in which Jesus performed miracles. That’s what we mean by “evidence”.
You’ve got to leave the Bayesian stuff alone – there is no evidence, no “evidence,” and no point whatsoever in parsing out the “slightly” aspects of this irrationalism – no miracles, no divinity, no god or gods, just give up the nonsense.
If you feel that you cannot, then that is simply your choice, but there is nothing, not Bayesian, not Pascal, not any form of warmed over religious self-justification that will work to assert the incredible, the not-true, and the discarded maxims of theism, mono- or otherwise.
Umm, can’t do that.
notabilia, do you understand that Rowan and I don’t disagree with you about religion, we disagree about proper epistemology. We don’t disagree on whether God exists or not, we disagree about what sorts of beliefs people should hold and how to arrive at them.
It seems like your logic is: [Christianity is false] -> therefore -> [the Bible isn’t evidence for Christianity]. To me, that is logically incorrect, regardless of the content.
Loved this post.
I might suggest that the right response to discovering that J-God is real is to organize opposition to their policies. But that’s an even worse date.
Really? You want to be leading the opposition to an angry, jealous, omnipotent 10-year-old known for casually killing people over what we perceive as minor annoyances?
I’d erect a statue in honor of your bravery, but I don’t fancy getting stoned for idolatry.
I am venturing a guess that Chebky has spent many years studying the stories of the Old Testament, and JRM probably hasn’t :)
I’m not really in the mental state to do the numbers properly, but it feels weird that the updating factor is constant and not diminishing with each subsequent miracle. The conditional probability P(G|M) includes division by the prior probability of observing another miracle for whatever reason. This prior probability goes up with each miracle, for example, as I realize that my VR-wielding prankster has a thing for making my life surreal.
Also, if the factor is constant, for many miracles the GOdds very quickly go to infinity, regardless of your prior. I’d expect them to reach a sort of saturation state, with a probability P that there is a god intent on making me believe and 1-P that there are miracles happening on a daily basis, sans God. (P depends on my prior and my prayer).
That is unless you define “miracle” as “something which my updated belief assigns an infinitesimal probability in the absence of God and a non-negligible one by His Grace”, but then things get complicated.
Thank you for making the smart nitpick of the math and not the obvious nitpick (that 1:9,999 * 100:1 != 1:99). I didn’t want to get into mathematical details, so I sort of handwaved “miracle” to mean “something that would make me update with a Bayesian factor of 100:1”. Seeing the same burning bush vision on the same street a week later is much weaker evidence than the first time, 100:1 would take something new and independent. You’re right, it’s hard for me to imagine right now what would give me that update factor at a point where I’ve already seen a bunch of J-God specific miracles, am leading a devout life, and my posterior is something like 80% J-God, 19% simulation, 1% dedicated prankster.
This leads to a more interesting question, which I want to write about soon: how close to 0 or 1 should you allow your beliefs to get? What’s your P(43 is a prime)? You can clearly imagine being convinced by enough evidence that 43 isn’t a prime, so what prior do you start from? This number is very important, as it would act as a confidence bound “outside” the model for your entire model of everything you think you know about the universe.
Hmm, now that I think about it, I used to have an experimental procedure which gave probabilistic results. Usually I’d give my result as the input X which gives a 50% chance of something happening, but sometimes I’d report “I can say with 95% confidence that there is a higher than 99% probability that an input of X or higher will cause the event”. And I’d go around lecturing people about the distinction between the latter (which is just a calculation. I can use 99.99 and give a different X) and the former, which is based on the model, the experiment, the sample size etc.
I didn’t realize at the time that A) There’s another level of confidence, which is how much the model the procedure’s based on even fits reality (although I didn’t neglect that completely) and B) There’s a deeper philosophical lesson to it about probabilistic models.
If I understand the definitions and the algebra correctly, then 43 has a 100% chance of being prime. The chance that 43 is not prime is the chance that I misunderstand these things sufficiently to make 43 not prime. I could try to explicitly calculate this in this context, or I could try to figure out how often I get an “obvious fact” wrong compared to how often I get an “obvious fact” right.
Until yesterday, I had long thought that “the commons” (of tragedy-of-the-commons fame, i.e. the common land in England) were historically not formally regulated. This turns out to be false: when this common land started appearing in England in the Middle Ages, some lord would usually either put limits on individuals (say, this particular person cannot have more than 40 sheep on this particular commons), or put a tax on each unit of activity (for example, such-and-such per sheep), or impose some other regulation. In fact, the man who popularized the concept of the tragedy of the commons as it is known today came to regret the name he gave it, and wished he had called it the “tragedy of the unregulated commons”. The point is that I thought I knew an “obvious” fact, but it turned out that I was wrong.
I would say that about 18% of the time, when I look up something that I think I know, my previous knowledge was cleanly mistaken in some way based on what I looked up (that is, I thought something that was cleanly wrong, not merely that there was more to know; there is always more to know). I would also say that, when I look up something that I think I know, my previous knowledge was significantly mistaken about 2% of the time (that is, what I looked up leads me to believe that the opposite of what I thought was true is more correct than what I previously thought). I have also taken quite a few college courses in mathematics, leading me to think I am 70% more likely to be correct about something in mathematics than a reasonably intelligent person with no such background. This would mean that my statement that “43 is prime” has a 5.4% chance of being somehow wrong in some strict technical sense and a 0.6% chance of being completely off base. These answers seem realistic to me.
Now, what are the chances that 42 is prime?
42 isn’t prime it’s the answer to the ultimate question of life, the universe and everything :)
I really like your analysis. Some of the numbers (like 18%) seem overly specific and made up, but that’s absolutely in keeping with the spirit of the blog! I still think that 99% confidence in “43 is a prime” is way too low. To me, the difference is between an independent bit of trivia and a fundamental fact. Whether the commons in England were regulated or not has little bearing on anything else, so you could conceivably get that wrong without the rest of your understanding of the universe being severely hampered.
“43 is prime” is entangled with a lot of other very important knowledge. Maybe I haven’t noticed that 43 = 6*7, but then a multiple of an even number doesn’t always end in an even digit, and that’s a big problem. Maybe 6*7 sometimes equals 42 and sometimes 43. That’s an even bigger problem. Or maybe they’re the same problem: all math is entangled. If 43 isn’t prime, then I can use that to disprove everything I think is true about number theory and the integers, maybe all of arithmetic as well. This means that every time I get some arithmetic correct, it’s evidence in favor of “43 is a prime”, so I would hope I’m no more than 0.00001% likely to be wrong there, not 0.6%.
(First, notice how 18% is 1/6 rounded to the nearest percent. I thought 15% was too low and 20% too high, but 1/6 sounded correct. Your point about not confusing precision with accuracy is well taken. The odds I gave are basically guesses.)
There is the set of things that “I am absolutely sure in.” Since I have been wrong in the past about things that “I am absolutely sure in,” I can conclude that I am absolutely sure to be wrong about something that I am absolutely sure in. Not everything in the set of things that “I am absolutely sure in” is an independent statement. My knowledge that 42 is composite is dependent on my knowledge that 43 is prime and vice versa. My knowledge that 43 is prime is a proxy for numbers working the way I think they do, so, informally, my statement that I give a 0.6% chance that 43 isn’t prime is close to saying that I give a 0.6% chance that numbers work differently than I think they do on a fundamental level. This is opposed to thinking that numbers work differently than I think they do in a strict technical sense. I am absolutely sure that numbers work differently than I think they do in some strict technical sense, but 94.6% sure this is in some way that doesn’t affect my knowledge that 43 is prime.
The set I am interested in is some subset of things that “I am absolutely sure in” whose members are independent statements. Since these are independent statements, if I can assign a probability p to what it means to be absolutely sure of something, I can figure out how large this set needs to be before something in the set is likely wrong. Let the cardinality of the set be n. My first idea was to see how large the set would have to be for it to be likely that some statement is wrong, i.e. for the chance of all statements in the set being true to drop below 50%: p^n < 0.5, or n > log(0.5)/log(p), which for p = 0.994 gives n > 115.2 and for p = 0.9999999 gives n > 6,931,471.5. This means that with the confidence I require to be absolutely sure of something, I only need to be absolutely sure of 115 things for it to be likely that something in that set is wrong, while you would need almost 7 million (independent) things.
But wait. I didn’t say it was likely that something I am absolutely sure in is wrong; I am absolutely sure that something I am absolutely sure in is wrong. This means I need to satisfy p^n < 1-p, or n > log(1-p)/log(p), which for p = 0.994 gives n > 850.1 and for p = 0.9999999 gives n > 161,180,948.5. If there are 851 things that I have full confidence in, I have full confidence in at least one of those things being wrong, while if the number representing full confidence is 99.99999%, then you need to have full confidence in more than 161 million things in order to have full confidence that one of them is wrong. I am suspicious of anyone saying that they have full confidence in 160 million independent statements.
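The commenter’s two thresholds (p^n below 0.5, and p^n below 1-p) are easy to verify; this is a sketch of the calculation as I read it, using the same p values:

```python
import math

def min_statements(p, threshold):
    """Smallest (real-valued) n with p**n < threshold,
    i.e. n > log(threshold) / log(p)."""
    return math.log(threshold) / math.log(p)

for p in (0.994, 0.9999999):
    likely_wrong = min_statements(p, 0.5)    # conjunction more likely false than true
    surely_wrong = min_statements(p, 1 - p)  # "absolutely sure" something is wrong
    print(f"p = {p}: n > {likely_wrong:,.1f} (likely), n > {surely_wrong:,.1f} (sure)")
```

Running it reproduces the figures in the comment: 115.2 and 850.1 for p = 0.994, versus roughly 6.9 million and 161 million for p = 0.9999999.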
Now, I have full confidence in an infinite number of statements (“every even integer of magnitude greater than 2 is composite” entails an infinite number of statements), but I do not have full confidence in an infinite number of independent statements (I have limited knowledge, intelligence, and wisdom). For the reasons above, I think someone saying that they are 99.4% sure that 43 is prime is making a more realistic statement than someone saying that they are 99.99999% sure that 43 is prime. Maybe there is simply a communication difference between us about what it means to have p confidence that 43 is prime, but I stand by my statement.
Now, my analysis above is complicated by the fact that I don’t assign the same probability to every statement that I claim to have full confidence in, but it provides a useful demonstration. A question I have touched upon, and whose answer I have used even though I am not confident in it, is: are the chances that 43 is prime the same as the chances that 42 is not prime? I don’t know how to answer this question (with confidence).
Benjamin, I’ll bet you $10,000 to $10 that 43 is a prime. Are you taking that bet? If you aren’t, you should be at least 99.9% sure that 43 is indeed a prime.
I really like your analysis, but I feel that it’s missing a couple of nuances. First, “absolutely sure” is too broad a category. I’m absolutely sure that 43 is a prime and I’m absolutely sure that Madrid is the capital of Spain. Yet if I found out that the Spanish government sits in Sevilla I’d think “huh, how come I missed that?” and if I found out that 6*7=43 I’d think “Ahh! Everything I know is broken, I must have gone utterly insane!”.
Again, this has to do with entanglement. “43 is a prime” is entangled with basically all the arithmetic I know, and is thus equivalent to them. 6*7=43 and 9*5=44 aren’t independent statements, they’re almost the same one. In fact, I think that all of the knowledge of mathematics that I am “absolutely sure of” boils down to maybe 10 independent facts (including the logical rules of inference that allow me to create new facts). For example, I have no confidence that I remember the formula to the solutions of a quadratic equation correctly, but I know I can derive it from my base knowledge of calculus.
You are absolutely correct to doubt that someone can be confident in 160 million statements, but the limiting factor is time and memory, not accuracy of belief. So if I say that I believe that P(43 is prime) = .9999999 I mean something like: “I could make 100 million statements like that if I had the time and memory to study them. Because I have only studied math for two decades and my brain has limited storage, I have only accumulated 10 independent things that I’m sure of”.
What do you think? And would you take the bet?
How would this bet work? How would we settle whether 43 is prime or not? Would we look for an authoritative source that says that 43 is prime? The probability I give for finding such a thing is 1-p, where p is the probability that we somehow become incapacitated from finding and communicating such a thing. In this case, I would lose either way: I would either lose $10, or I and/or you would become sufficiently incapacitated. I consider either a loss (in that I have full confidence that I will be worse off in either situation). Do you need to convince me that there is a 99.99999% chance that 43 is prime for me to give you $10, and would I need to convince you that there is a 99.99999% chance that 43 isn’t prime for you to give me $10,000? Going by the strict wording of the proposed bet, we would need to find a 100% chance that 43 is prime for me to give you $10, and a 100% chance that 43 isn’t prime for you to give me $10,000. While adhering to a philosophy that 100% is not a valid probability (at least for a human claim to knowledge), either situation is impossible and the bet is unenforceable.
Of course it is the case that either there is a 100% chance that 43 is prime or there is a 100% chance that 43 isn’t prime (this is an inclusive or as, in a strict technical sense, by my current understanding of mathematics, 43 is both a prime and not a prime). The property of being prime is independent of human knowledge, but all I have to work with is human knowledge. I can conceive that after 500 years of further mathematical development, the mathematicians of 2516 would consider it absurd that anyone would claim that 43 is prime. (Remember that 500 years ago, negative numbers were considered either nonexistent or illegitimate by several prominent mathematicians.) I don’t know how this would happen or what the consequences would be, but that is included in my 0.6% chance that 43 isn’t prime. It could be that 43 isn’t prime while 6*7~=43 and 9*5=45 still hold, or it could be that I am also mistaken about these.
It is true that my analysis leads to the same probability for 6*7=42 as for 43 being prime, and, based on how mathematics is constructed, I should have higher confidence that 6*7=42 than that 43 is prime. My analysis is flawed, but it works well enough for my purposes. As long as I am human, I will not have perfect methods of analysis. I am still very suspicious that you have the capability (granting you immortality with the intellectual abilities you have now) of coming up with 100,000,000 independent and true statements without a single error on the first try (which is necessary to have 99.99999% confidence in anything). I am also suspicious of my much weaker claim that I am likely to be able to come up with 115 such statements, though I hold it in the realm of possibility. I still hold that 99.4% is a much more realistic probability than 99.99999% for any human claim of knowledge.
It appears that you would have an emotional reaction to being convinced that 43 is not prime. This is understandable. I would not. If I read in a (reputable) paper tomorrow that a survey of 100,000 mathematicians (full professors at research universities) found that 100% of them claimed that in no way could 43 be considered prime, I would be very intrigued and, yes, very confused. I would read why they said this and, in this case (assuming everything is above board), likely come to hold that 43 is not prime in any sense. I would go on with my life still holding that 6*7=42 and 9*5=45 and doing everything the same. I very rarely make direct use of the fact that 43 is prime, and I have no emotional attachment to it. There are other facts that I am attached to, which I would have an emotional reaction to discovering false. For example, if I discovered that my grandfather had cheated on my grandmother more than once, I would have an emotional reaction, and I consider that more likely than the possibility that 43 cannot in any way be considered prime.
43 being prime doesn’t care if you are insane. Reality does not change based on whether or not the way you use numbers is fundamentally flawed (and I would posit that, in some way mathematicians have yet to fully adopt, explain to non-mathematicians, or even discover, it almost certainly is). I have talked with people who have demonstrated fundamental flaws in how they deal with numbers and yet thought they knew numbers well. I’m sure I have some fundamental flaw, and yet I think I know numbers well (and I’m pretty sure I can demonstrate an above-average understanding among adult humans). I still find implausible the claim that you could, given enough time, make 100 million independent statements in a row without a single error. I think I view someone claiming 99.99999% confidence in something the same way you view someone claiming 100% confidence in something.
I can understand why you would want 99.99999% confidence that 43 is prime. What I am arguing is that if you follow dispassionate and reasonable analysis to find your confidence in the statement “43 is prime,” you will come up with a confidence smaller than this. I claim that you will not get what you want in this instance.
Treating miracles like measurements with certain probability distributions attached makes them like any other experience. A miracle is defined as “an extraordinary and welcome event that is not explicable by natural or scientific laws and is therefore attributed to a divine agency”. Leaving aside the sad fact that most people have only a vague knowledge of these laws, it is worth focusing here on “welcome event”. If one adds to it a very personal element (say, Paul hearing his own name on the road from Damascus), then a single miracle can be a 100% conversion. One may argue that humans are always subject to doubt and that the probabilities you juggle account for that inner doubt, but I would not bother with that as long as one cannot distinguish the difference from outside, within the error bars and noise level of our normal erratic behaviour.
Every night I hallucinate vividly for several hours, every morning I wake up and decide that whatever I dreamt of was a figment of my imagination and shouldn’t lead me to doubt my picture of how the universe works. It’s very easy (drugs, hypnosis etc.) to fool my brain into seeing and hearing things for a limited time, but not long term. My picture of reality is built upon years and years of experiences, decades of not ever seeing a single law of physics contradicted a single time. That’s why I give my picture of reality more weight than I would a single episode, no matter how convincing it is in the moment. Even if upon seeing a miracle I feel converted, the next day I will probably decide that no single occurrence should dominate the evidence of a lifetime and go back to doubt.
Basically, I have “error bars” around my accumulated map of physical reality, but I have much larger error bars on the proposition that “what I am perceiving is a true event” in a given moment. Comparing tiny error probabilities isn’t something that human intuition does easily, that’s why I spell out the math. I’m happy to note that proper scientific epistemology has progressed somewhat since the days of Paul the Apostle.
Interesting, but I’m not sure you’re doing the Bayesian calculation right here. You say you are willing to admit a factor of 1 in 10 for a god of some sort, but only 1 in a million for the Jewish God. I am assuming based on the context (perhaps this is wrong, in which case what I am saying may be off-base) that your reason for assigning an additional factor of 100,000 is the large amount of specific, complicated doctrine needed to describe one particular God, rather than any specific prejudice against Judaism. That is, roughly speaking, there are about 100,000 equally plausible ways for God to be, and Judaism is just one of them. (Of course, traditional monotheistic metaphysics says God is a necessary being, and thus if we fully understood him we would realize there was only one way for him to be; but given human uncertainty about metaphysics, this is compatible with saying there are many possible deities from an epistemological point of view.) This also raises questions about how “finely specified” the description of God should be to count as the Jewish God, which is somewhat ambiguous but I think shouldn’t matter much if you do everything properly.
Anyway, I claim the mere existence of a series of prophets who claim to have received revelation from the Jewish God is actually a HUGE amount of evidence for Judaism, which mostly cancels out this factor of 100,000. That is because, these prophets’ writings contain an amount of information sufficient to specify (more or less) which version of God is being discussed. To give an analogy, I claim that the license plate of my car is M36 EUY (this is actually true). The prior odds of getting that license plate randomly are about 1 in 45 million, but I still have an order unity probability of telling you the truth, because my odds of *lying* about having that particular plate also involve the same tiny number. Hence, factors due to informational specificity end up more or less cancelling in the calculation, leaving an order unity factor. (Of course, if I relay enough bits of info to you, I might be expected to make a few errors, but my odds of being 99% right or whatever should still converge to something reasonable.)
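The license-plate point can be made numerical. The sketch below uses my own illustrative assumptions (a 99% honesty rate, and liars picking plates uniformly at random), not numbers from the comment itself:

```python
N_PLATES = 45_000_000   # roughly how many equally likely plates exist
P_HONEST = 0.99         # assumed chance the claimant reports truthfully
P_LIE = 1 - P_HONEST    # assumed chance the claimant makes a plate up

prior_odds = 1 / N_PLATES            # prior odds of owning that exact plate
# A truthful owner reports this plate with probability ~P_HONEST;
# a liar would name this specific plate with probability ~P_LIE / N_PLATES.
bayes_factor = P_HONEST / (P_LIE / N_PLATES)
posterior_odds = prior_odds * bayes_factor   # the N_PLATES factors cancel
print(posterior_odds)   # order unity: about 99-to-1 in favor of the claim
```

The informational-specificity factor appears in both the prior and the likelihood ratio, so it cancels, leaving odds of roughly P_HONEST to P_LIE, exactly the “order unity” result the comment describes.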
Of course, there are multiple religious systems out there besides Judaism, so the share of probability must be somehow divided between them. But I really don’t think there are 100,000 different belief systems out there with equally credible claims to be based on divine revelation. The number of world religions which an intellectual person can take seriously is maybe order 10 or so? So this suggests your probability of Judaism should actually be around 1%, prior to any updates based on more overt evidence such as dramatic miracles. So maybe it is already time to start laying off the bacon?
Also, one quick question about miracles. Why is it necessary for you personally to have witnessed the “miracle”? If there is sufficiently credible documentation of other people seeing miracles, shouldn’t that also lead you to update?
Just on miracles – in order to reason effectively, you need to give yourself a fairly substantial amount of credence, otherwise you can’t trust your own conclusions and the system falls apart. Not that you can’t fight biases, but in order to fight them effectively you need a lot of evidence about the nature and effect of the bias. You can much more freely doubt other people, because the reliability of their senses and logical skill is not a fundamental assumption of your thought process. This will mostly show up in low credence situations, like miracles, counterintuitive science results, and other strange events.
I don’t really disagree with that, but I’m not saying that you should give exactly the same amount of credence to yourself witnessing a miracle and someone else doing so. I merely claim that there can be significant evidence coming from other people. It seems fairly clear that there are situations in everyday life, nothing to do with miracles, where we get large Bayes factors from the testimony of other people. Obviously it depends on the details of the situations, for example a lot of people claiming to see the same thing with little motivation to lie is very different than one well-known faker reporting something.
Aron, great questions! You really made me think, let’s see if I can offer some thoughts.
Re: miracles, gazeboist makes a good point but you should also trust yourself more than others if you think they’re as reliable as you are, just because there are more of them. There’s actually some cool math here.
Let’s say that every person has a 1% chance of hallucinating a miracle whether or not miracles exist, and that in a world in which miracles actually exist every person also has a further 1% chance of witnessing a true miracle. In miracle-world your chance of believing you saw a miracle is twice as high (2% vs. 1%), so seeing a miracle yourself would be strong evidence. Now, let’s look at the proposition that at least some of the people you know saw a miracle.
Let’s say you know 300 people that you trust, including your friends and people of authority.
P(someone “saw” a miracle | no miracles) = 1 – 0.99^300 ≈ 95%
P(someone “saw” a miracle | miracles exist) = 1 – 0.98^300 ≈ 99.8%.
So in this case, “someone saw a miracle” is much weaker evidence, because we would almost certainly expect it either way. Add to that the fact that you can’t really trust others as much as yourself, by simple conjunction (they could be crazy × they could be lying × they could mean something else × …), and a personal miracle is definitely much stronger evidence.
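Jacob’s arithmetic above can be reproduced directly (the 1% rates and the 300-person circle are his stated assumptions):

```python
# "Did at least one of my 300 trusted people see a miracle?" in both worlds.
p_report_plain = 0.01    # chance a person hallucinates a miracle (either world)
p_report_miracle = 0.02  # hallucination + real miracle, in miracle-world
n = 300

p_no_miracles = 1 - (1 - p_report_plain) ** n   # ~0.951
p_miracles = 1 - (1 - p_report_miracle) ** n    # ~0.998

# The Bayes factor from "at least one report" is barely above 1,
# which is the point: the observation is expected either way.
print(p_no_miracles, p_miracles, p_miracles / p_no_miracles)
```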
Your model is interesting, but I don’t think it proves quite the point you think it does, for several reasons:
The model doesn’t actually discriminate between “self” and “other”. That is, suppose instead that instead of polling 300 friends, you poll 299 friends but then include yourself in the survey as the 300th person. Then I claim that (given the ground rules of your model) you would calculate the odds that somebody saw a miracle to be exactly the same as before. And if N people (but not you) report miracles, that provides the same amount of evidence as if N-1 people plus you report miracles. So the moral of your model is NOT that you should trust yourself more than others. Rather, it is that in any large enough group of people (whether or not it includes you) the chances of at least one false alarm go up with the size of the group.
(In real life there is at least one valid reason to trust yourself more than others—it is harder to tell if other people are lying! But your model doesn’t include lying as an option, so that isn’t relevant within the world of your model.)
You only asked yourself the probability that at least one person saw a miracle. But in your model, you actually have more data than that: you would know how many people reported miracles. If miracles are real, then on average you would expect twice as many people to report them than if not. This actually flips the situation around, and there is a theorem of probability (called “monotonicity of relative entropy”) which implies that as you increase the number of people in your sample, your expected degree of confirmation of the truth can only go up, not down. (I say expected because you could get unlucky and get misleading evidence sometimes.)
Let’s do the calculation. Suppose that miracles exist, and we sample a single friend to see whether they have seen an apparent miracle. There is a .02 chance that they say yes, in which case you get a Bayes factor of 2 in favor of miracles. There is a .98 chance that they say no, in which case you get a Bayes factor of 98/99 (slight disconfirmation of miracles). Since Bayes factors are multiplicative, it is most natural to consider the log averaged expected Bayes factor, which will be additive for independent measurements. Thus the expected Bayes factor for one friend is
e^[(.02 ln(2) + .98 ln(98/99) )] = 1.00392
This is a lot worse than a factor of 2 for you seeing a miracle, but only because we stipulated by fiat that you actually do see the apparent miracle, whereas we didn’t make the same stipulation about your friend. If we include all 300 friends we get
e^[300(.02 ln(2) + .98 ln(98/99) )] = 3.235
which is actually better, in this particular case, than the factor of 2 you would have gotten from a single isolated miracle report. This number actually increases monotonically as a function of your number of friends.
(And for any number of friends, if we stipulate that “one more friend than expected, even given miracles, saw a miracle”, I think this should always give you at least a factor of 2.)
BTW, the quantity in the square brackets (the expected log Bayes factor) is also known as the “relative entropy” S(rho|sigma), where rho is the true probability distribution (in this case miracles) and sigma is the background distribution (in this case no miracles). It is a theorem that, even without assuming independence, this cannot decrease when you increase the amount of data. That makes sense because if you could more reliably distinguish rho from sigma by looking only at a subset of the data, then you could have done that even with access to the whole data set.
Finally, while I know your example was just intended to illustrate the abstract mathematical point, I can’t help but notice that the definition of a miracle seems to have shifted. You originally said that you would consider an event to be a miracle if it provided a Bayes factor of 100, whereas now you have lowered it to 2. If you redid your original calculation with assumptions conforming to your original assumption (for example, .01 odds of seeing a true miracle and .0001 odds of hallucinating a fake one) then of course you would have gotten very different numbers.
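Aron’s numbers can be reproduced in a few lines (using the same 2% and 1% report rates as above):

```python
import math

# Expected log Bayes factor per friend, taking the miracle-world
# distribution as true: this is the relative entropy S(rho|sigma)
# for rho = (0.02, 0.98) and sigma = (0.01, 0.99).
p_yes_true, p_yes_null = 0.02, 0.01
per_friend = (p_yes_true * math.log(p_yes_true / p_yes_null)
              + (1 - p_yes_true) * math.log((1 - p_yes_true) / (1 - p_yes_null)))

print(math.exp(per_friend))        # ~1.00392 for a single friend
print(math.exp(300 * per_friend))  # ~3.235 for all 300 friends
```

Since relative entropy is always non-negative, the expected (log-averaged) Bayes factor per friend is at least 1, so adding friends to the sample never hurts in expectation.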
Re: possible gods.
10 possible Gods is a very low number, even based on existing religions. Mormon Jesus ain’t the same as Catholic Jesus. There are even more gods when you look at the past. Do you count Zeus or Odin? Do you count the Canaanite or Edomite deities? We beat the Canaanites in battle, but this doesn’t mean our god is the only option. At the very least, all the gods that are incompatible with each other count separately.
And there’s really no reason to believe that currently existing religious believers have exhausted the full space of possible conceptions of a deity. Most religions’ gods seem both memetically efficient and remarkably well suited to promote the interests of some societal group in the civilization that invented them. There’s no reason a real god has to satisfy these constraints. To put it another way: existing conceptions of god clearly haven’t heard of the orthogonality thesis :)
The writer of the Old Testament made it seem like all the prophets spoke to the same God, but then the Quran came and said that all of them really support Muhammad and Allah. Even if the prophecies were all factual and faithfully recorded, I would hardly say they provide unanimous evidence for anything. The number of people believing something is evidence of how convincing / palatable / politically expedient that idea is. In realms where real evidence is scarce and bullshit is plentiful, popular belief is weak support for an idea being true.
To your license plate example: should I trust you if you tell me there are exactly 4,920,743 hairs on your head? Even though it’s a matter of fact how many there are, I know you probably didn’t count them. Maybe you just picked it because it’s a prime number. I wouldn’t be willing to bet at 10:1 that the number is 4,920,743 and not 4,877,502 based on your testimony alone. Would you?
Even if god exists as a matter of fact, it seems that understanding god is harder than counting one’s hairs. Someone claiming that they have a perfect description of god is evidence, but not overwhelming proof.
In the case of my license plate example, our shared background knowledge states that I should have the power to determine what my license plate number is. On the other hand, I am very unlikely to be able to accurately count the number of hairs on my head. The case of God is intermediate, because people disagree about whether God has revealed his nature to particular human beings (e.g. prophets). It seems to be begging the question to assume that God’s nature is more like the hair example than like the license plate example.
Suppose that God exists and that he has the property “wants to reveal his existence to at least some significant sized group of human beings”. Then conditional on this being the case, we would expect there would exist people who are in a position to know the nature and attributes of God. It is not necessarily required for those people to have a “perfect description” of God, just good enough to go on.
I think there are several non-arbitrary reasons for thinking the Jewish concept of God is more plausible than that of other ancient peoples. First of all, they were almost the only ancient people to firmly adhere to Monotheism as a revealed truth. Most ancient people were Polytheists, and on a philosophical level I think Monotheism is a lot more plausible, for example because it seems more compatible with a scientific worldview. (Note that the difference between Monotheism and Polytheism is NOT primarily the number of deities but rather the nature of deity; see https://lastedenblog.wordpress.com/2016/05/07/which-god-exists/ for some discussion of this issue.)
Secondly, the degree of claimed miracles and prophetic revelation seems greater in the case of Israel than of other peoples. We have numerous texts claiming to report the direct words of God to Hebrew prophets, but I am unaware of any historical parallel to e.g. the book of Isaiah, in which Aphrodite or Apollo narrates at length about their relationship to the Greek people. (I’m not counting texts like Homer or the Greek playwrights, that were understood to be fictional by their original audiences.) I also don’t know any real parallel to the dramatic scale of the Exodus. I’m not saying we can prove the Exodus really happened, only that it seems unique among possibly-historically-documented ancient miracle claims.
Third, those other religions are mostly extinct. So I don’t count Zeus or Odin or the Canaanite or Edomite deities, since as far as I know almost no intelligent person today believes in them. This is based partly on theological reasons (any God that wants to reveal himself to worshippers would presumably at least arrange not to have his religion go utterly extinct) and partly for epistemic reasons (since at least some people are responsive to evidence, if there were good evidence for a deity performing specific signs you would expect at least some people to believe in it—this is not entirely unrelated to Aumann’s Agreement Theorem, the point being that you should update based on the beliefs of other people. If essentially 0 smart people believe something, then that is often extremely good evidence it is false.)
Whereas about half the population of the world now believes in some version of the God of Israel. And yes we should certainly count Muslims in that number, since Mohammad (whether or not he was a true prophet) was certainly claiming to receive revelation from the same God that had revealed himself earlier to Moses and the other Israelite prophets. The attributes of Allah in the Quran are almost identical to the attributes of God in the Hebrew Scriptures, aside from the historical detail of whether he revealed himself to Mohammad or not. (Obviously, whether he did so makes a huge practical difference to one’s religious obligations, but it does not make an enormous difference to the abstract theological conception of God.)
“In realms where real evidence is scarce and bullshit is plentiful, popular belief is weak support for an idea being true.”
I strongly disagree with this conclusion. I think it is based on a cognitive bias whereby something seems like weak evidence for X if it is very unlikely that, taken by itself, it could convince you that X is true. For example, suppose a woman is murdered, and the cops learn that a certain man was her boyfriend. Intuitively, this seems like extremely weak evidence that that guy did it, because taken by itself this evidence could NEVER be used to convict the boyfriend in court or even raise the probability of his guilt above 10%. (Only 3.7% of murders are of the perp’s girlfriend, https://top5ofanything.com/list/8a1bf3d1/Murders-by-Relationship-to-the-Victim-in-the-United-States ) But considered as a Bayes factor it is actually a huge amount of evidence. In a city of 10 million people, the boyfriend is at least 100,000 times more likely to have done it than a randomly selected stranger. Of course, this issue of priors is particularly important once you have the possibility of additional evidence that might put you over the top (DNA tests, or some specific miracle claim, or what have you).
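The boyfriend arithmetic can be sketched as follows (the 3.7% figure and the 10-million-person city are from the comment; spreading the remaining probability evenly over the city is a simplifying assumption):

```python
# How much more likely is the boyfriend than a random stranger to be the murderer?
p_boyfriend = 0.037           # share of murders committed by the victim's boyfriend
city_population = 10_000_000

# Simplifying assumption: spread the remaining probability evenly over the city.
p_per_stranger = (1 - p_boyfriend) / city_population

print(p_boyfriend / p_per_stranger)  # ~384,000 -- comfortably "at least 100,000"
```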
In the same way, even if there were 10 times as many false prophets as true prophets, even so the existence of any given prophet would still provide a quite substantially large Bayes factor to the specific religious views advocated by that prophet. I never said that the prophets provide “unanimous evidence for anything” (although many religious traditions essentially originate from a single prophet, or none, whereas there are dozens of prophets in the Hebrew Bible.). I merely claim that your 1/10 probability for God ought to be divided unequally between the various religions based on various plausibility factors, and that ceteris paribus, the more adherents a religion has, the more probable it is. If we cut off the number of adherents at 1 million or more, then according to http://www.adherents.com/Religions_By_Adherents.html there would be fewer than 20 religions that make the cut (although “African Traditional & Diasporic” should probably be counted as multiple religions, and if you separate out things like e.g. Mormonism from Christianity you may get a higher number). But I also don’t think that all of these 20 religions are equally credible. Why should the principle of indifference hold here?
Incidentally, I don’t believe in the “orthogonality thesis” when it comes to God, largely because I believe in moral realism. (Incidentally, when you claim that the Aumann Agreement Theorem doesn’t apply to your disagreement with Maimonides because you don’t believe in moral realism–doesn’t that just push the issue back a step? The question is now whether moral realism is true, and that IS a factual question about which you and Maimonides disagree.)