Edit: To clarify, when I say "accept Pascal's Wager" I mean accepting the idea that the way to do the most (expected) good is to prevent as many people as possible from going to hell, and cause as many as possible to go to heaven, regardless of how likely it is that heaven/hell exists (as long as it's non-zero).
I am a utilitarian and I struggle to see why I shouldn't accept Pascal's Wager. I'm honestly surprised there isn't much discussion about it in this community considering it theoretically presents the most effective way to be altruistic.
I have heard the argument that there could be a god that reverses the positions of heaven and hell and therefore the probabilities cancel out, but this doesn't convince me. It seems quite clear that the probability of a god that matches the god of existing religions is far more likely than a god that is the opposite, therefore they don't cancel out because the expected utilities aren't equal.
I've also heard the argument that we should reject all infinite utilities – for now it seems to me that Pascal's Wager is the only example where the probabilities don't cancel out, so I don't have any paradoxes or inconsistencies, but this is probably quite a fragile position that could be changed. I also don't know how to go about rejecting infinite utilities if it turns out I have to.
I would obviously love to hear any other arguments.
The binary choice nature of the wager always seemed bizarre to me. The real-life choice isn’t “Christian God: yes or no,” it’s “try to pick the right option among these very many religious choices.” It also seems to me that some possibly right choices have rankings that go: my god > no religion > wrong religion. None of this necessarily means that you shouldn’t take the wager, but given the above it definitely isn’t obviously right to me.
The way I see it, the wager IS binary, but the choice is "act as though heaven/hell exists: yes or no". If you answer "yes", then of course there are multiple ways to proceed from that point, but that doesn't mean the wager itself isn't binary.
If I decide to accept the wager, the next step will be a WHOLE other thing and definitely not binary.
But as I understand it, the whole point of the wager is the heavenly payoff. In that case, you can't just say "I pick heaven" and defer the part where you pick a religion, as that influences whether or not you get the payoff. So I think this is less like a binary decision and more like picking the right card out of a deck.
It does seem to me, if you think the general reasoning of the wager is sound, that the most rational thing to do is to pick one of the cards and hope for the best, as opposed to not picking any of them.
You could for example pick Christianity or Islam, but also regularly pray to the “one true god” whoever he may be, and respectfully ask for forgiveness if your faith is misplaced. This might be a way of minimising the chances of going to hell, although there could be even better ways on further reflection.
Having said all that I’m atheist and never pray. But I’m not necessarily sure that’s the best way to be…
While I still disagree that the decision is non-binary, you do bring up a possibility I hadn't thought of which is that NO ACTION could be the BEST ACTION if you think practicing the wrong religion makes you more likely to go to hell and less likely to go to heaven.
Although now that I think about it, that wouldn't imply no action; rather, it implies you should encourage atheism, encourage behaviour generally agreed upon across religions, and possibly convert people from one religion to a more likely one.
1. All options maximize expected utility (EU), since the expected utility will be undefined (or infinite) regardless. There's always a nonzero chance you will end up choosing the right religion and be rewarded infinitely and a nonzero chance you will end up choosing badly and be punished infinitely, so the EU is +infinity + (-infinity) = undefined. (I got this from a paper, but I forget which one; maybe something by Alan Hájek.)

In response to 1, you might say that we should maximize the probability of +infinity and minimize the probability of -infinity before considering finite values. This could be justified by applying plausible rationality axioms directly, in particular the independence axiom. This could reduce to EU maximization with some prior steps where we ignore equiprobable parts of distributions with the same values. However, infinite and unbounded values violate the continuity axiom. Furthermore, if we're allowing infinities, especially as limits of aggregates of finite values like an eternity of positive welfare, then it would be suspicious not to allow unbounded finite values at least in principle. Unbounded finite values can lead to violations of the sure-thing principle, as well as vulnerability to Dutch books and money pumps (e.g. see here, here and my reply, and here). If the bases for allowing and treating infinities this way require the violation of some plausible requirements of rationality, or require ad hoc and suspicious beliefs about what kinds of values are possible (infinity is possible but finite values must be bounded), then it's at least not obvious that we're normatively required to treat infinities this way. Some other decision theory might be preferable, or we can allow decision-theoretic uncertainty.

2. There are plausible alternative decision theories that don't require (but may permit) choosing extremely low probability bets with extremely high payoffs, like EU maximization with bounded utility functions, and stochastic dominance. Under decision-theoretic uncertainty that assigns some credence to unbounded EU maximization with infinities, low probabilities of Heaven or Hell might still not dominate.

3. Under impartial views, conditional on a given god (or gods), you won't change the aggregate: it's already undefined, +infinity or -infinity, and adding one more person to Heaven or Hell won't make a difference to that value. Some possible responses:
- More complex approaches to aggregation (e.g. ignoring unaffected individuals, the Pareto principle), so that getting one more person into Heaven or keeping one more person out of Hell is still infinitely better.
- Maybe you can decrease the probability that Heaven will be empty, or increase the probability that Hell will be empty.

4. There might be more promising infinities we can pursue in practice, potentially:
- Creating or preventing infinite universes or infinitely many universes.
- The universe is already very plausibly infinite in spatial (or temporal) extent, and so already contains an infinite amount of value in expectation even on highly plausible physics; and under evidential decision theory, we already acausally affect an infinite amount of value, because infinitely many agents make decisions correlated with our own.
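The undefined sum in point 1 can be seen directly in floating-point arithmetic, which follows the same convention as the underlying math (a toy sketch, not from the original comment; the probabilities are placeholders):

```python
import math

# A nonzero credence in an infinite reward plus a nonzero credence in an
# infinite punishment gives an undefined expectation: inf + (-inf) has no
# value, which IEEE-754 arithmetic represents as NaN ("not a number").
p_heaven, p_hell = 0.01, 0.01  # any nonzero values behave the same

ev = p_heaven * float("inf") + p_hell * float("-inf")
print(math.isnan(ev))  # True: the expected utility is undefined
```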
OMG this is EXACTLY the kind of reply I was looking for, and more. Thank you so much!
Since I'm pretty new to philosophy, I believe what you say although I don't understand it. However, you have given me a ton of invaluable starting points from which I can now begin learning how to answer these kinds of questions myself.
You can be fairly confident that your comment will end up triggering a major (and probably inevitable) turning point in my philosophical journey and therefore my life since it sounds like utilitarianism in the form I have always followed is flawed and will need to be revised or even scrapped entirely.
I included the wager below for reference since it doesn't seem to be in the original question.
I think one problem is that belief in the existence of God is probably not sufficient for an infinite payoff (and it's not 100% clear to everyone what is sufficient). My understanding is that most major religions are meant to teach something more complex than that. Usually something to do with helping others and attaining peace by letting go of selfish desires in favor of loving and kind ones.
But honestly, I think the reason people reject the wager is because they don't like it. Maybe because infinity is already incomprehensible and uncertainty around infinities just makes it even more difficult to deal with. We generally like certainty or at least ways to be somewhat certain about how uncertain we are and how to become more certain.
So, it's often easier to just avoid something that doesn't clearly guarantee a payoff. And switching to being either for or against God's existence doesn't seem to have a clear payoff for most people. Like many things, you can just forget about the question and then it won't seem to have much impact on your life, in the same way that most people, most of the time, forget about the meaning of life or the possibility of nuclear war and other complex topics that don't seem to have clear solutions.
"Either God exists or God does not exist, and you can either wager for God or wager against God. The utilities of the relevant possible outcomes are as follows, where f1, f2, and f3 are numbers whose values are not specified beyond the requirement that they be finite:

                      God exists    God does not exist
Wager for God         ∞             f1
Wager against God     f2            f3
Rationality requires the probability that you assign to God existing to be positive, and not infinitesimal.
Rationality requires you to perform the act of maximum expected utility (when there is one).
Conclusion 1. Rationality requires you to wager for God.
Let's entertain as an axiom the claim that, in the absence of evidence, promises of utility/disutility become less likely the more is promised.
If I promise you $1 to drop off a letter at the post office for me, you'd believe me. If I promise you $1,000,000, you'd think I was joking.
More specifically, let's make our axiom the claim that, if we integrate the likelihood of a payoff over the range of utilities promised, that integral is convergent.
No matter how much utility is promised, the amount of utility received in expectation is finite.
In other words, there is no infinite expected utility.
If this is accepted, then expected utility (always finite) is controlled largely by mechanistic plausibility and empirical evidence, not just the sheer amount promised.
For example, if I have a history of making and keeping extravagant promises, you know I have billions of dollars in the bank, and you can see a reason it would be worth it to me to pay $1,000,000 to have you take my letter to the post office, you might think it's pretty likely I'll pay you as I promise. These sorts of considerations become extremely important as the amount of utility promised increases.
You don't have to accept the axiom, but if you do, then I think you end up at the common-sense position that you should reject Pascal's Wager, be open to the possibility of small utility gains on limited evidence, and require larger amounts of evidence the more utility is promised. This principle comes in handy when avoiding scams.
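A toy numeric sketch of the axiom (the exponential decay of credence is my own illustrative assumption; any credence whose tail shrinks fast enough behaves the same way):

```python
import math

def credence(u):
    """Assumed credence that a promise of utility u will be honoured."""
    return math.exp(-u)

# Expected utility of a single promise of size u is u * credence(u):
# once credence decays fast enough, bigger promises are worth *less*.
for u in [1, 10, 100]:
    print(u, u * credence(u))

# Riemann-sum check that total expected utility over all possible promise
# sizes converges (here to about 1) instead of diverging to infinity.
du = 0.001
total = sum(i * du * credence(i * du) * du for i in range(1, 20000))
print(round(total, 2))  # ≈ 1.0
```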
Why didn't Pascal, a brilliant mathematician, come up with this argument on his own? I can think of a few possibilities:
1. I might be mistaken in my reasoning.
2. Pascal was born 20 years before Newton, and died several years before Newton's seminal publication on calculus. He may therefore not have had access to what is now college-level knowledge on divergent and convergent integrals.
3. He might have been rationalizing a decision that was fundamentally emotionally driven.
This is an interesting response, but doesn't it run into a problem where you could have large amounts of evidence that Action X provides infinite payoff but have to ignore it?
Imagine really credible scientists/theologians discover there's a 90% chance that X gives you an infinite payoff and a 90% chance that Y gives you $5, but you feel obligated to grab the $5 just because you're an infinity skeptic?
I also think this isn't consistent with how people decide things in general: we didn't need more evidence that COVID vaccines worked than that flu vaccines worked, even though the expected utility from COVID vaccines was much higher.
It's common sense that our prior for whether or not a technology will work for a given purpose depends on empiricism. This accounts for why we'd reject the million dollar post office run - we have abundant empirical and mechanistic evidence that offers of ~free money are typically lies or scams. Utility can be an inverse proxy for mechanistic plausibility, but only because of efficient market hypothesis-like considerations. If there was a $20 on the sidewalk, somebody would have already picked it up.
Right, the distinction between expected value from technology and expected utility from offers made by people makes sense. But I think your axiom still doesn't provide enough reason to reject Pascal's Wager.
I'm not sure if we can say we have good grounds to apply this discounting to God or the divine in general. Can we put that in the same bucket as human offers? I guess you could say yes by arguing that God is just a human invention but isn't that like assuming the conclusion or something?
I don't think probability declines as fast as promised value rises- a guy on the street offering me $1 Billion versus $100 million is about equally likely to be a scam, but the $$$ is different.
Because of how infinity works, wouldn't I have to think there is a 100% chance that your axiom holds? Otherwise, I would think even if there's only a 1% chance X God is real and a 1% chance that the expected value is infinite it still dominates everything.
I’m not sure about #1 or #3. I do think that #2 is false, again on mechanistic grounds. It’s harder to get a billion dollars than a million dollars, and that continues to apply as the sums of money offered grow larger.
Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?
"Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?"
Thanks for the example. Yes, I think you've convinced me on this point. I think I want to say something like "when we have a good sense of the distribution of events, we know the bigger the departure from typical events, the less likely it is."
The word "produce" is causal language. It seems to me that even if our actions are correlated with other people, there's no reason to think that we in particular are the ones controlling that correlated action. Do you think we can be said to "produce" utility if we're not causally in control of that production?
I suspect that the answer to some of these questions lies at an intersection between psychology and mathematics.
Our understanding of physics is empirical. Before making observations of the universe, we'd have no reason to entertain the hypothesis that "light exists." There would be infinite possibilities, each infinitely unlikely.
Yet somehow, based on our observations, we find it wise to believe that our current understanding of how physics works is true. How did we go from a particular physics model being infinitely unlikely to it being considered almost certainly true, based on finite amounts of evidence?
It seems that we have a sort of mental "truth sensor," which gets activated based on what we observe. A mathematician's credence in the correctness of a proof is ultimately sourced from their "truth sensor" getting activated based on their observation of the consistency of the relationships within the proof.
So we might ultimately have to reframe this question as "why do/don't arguments for Pascal's Wager activate our 'truth sensor'?"
This is an easier question to answer, at least for me. I see no compelling way to attack the problem, nobody else seems to either, I see the claims of world religions about how to achieve utility as being about as informative as taking advice from monkeys on typewriters, and accepting Pascal's Wager seems deeply contrary to common sense. These are unfortunately only reasons not to spend time thinking more deeply about the problem, and don't contribute in any productive way to moving toward a resolution :/
If we use the normal decision-making rules that many people, especially consequentialists, use, we find that Pascal's wager is a pretty strong argument. There are many weak objections to it and some more promising ones. But unless we're certain of these objections, it seems difficult to escape the weight of infinity.
If we look to other, more informal ways to make decisions, favoring ideas that are popular, beneficial, and intuitive, then major religions that claim to offer a route to infinity are pretty popular, arguably beneficial, and theism in general seems more intuitive to most people than atheism.
Given that we have no strong reason to reject Pascal's wager, I would suggest that people in general do "due diligence" by investigating the claims and evidence for at least the major religions. If someone says, "hey, I've spent 500 hours investigating Christianity and 500 hours investigating Islam, and glanced at these other things, and they all seem implausible"... that's one thing. But I think it's hard (probably impossible) to justify not taking Pascal's wager without substantially investigating religious claims.
If, for instance, you end up thinking there's a 0.5% chance that Jesus was God or Mohammed was the messenger of God, that's pretty substantial.
How many hours do you think a reasonable person is obligated to spend investigating religions before rejecting the wager?
"How many hours do you think a reasonable person is obligated to spend investigating religions before rejecting the wager?"
Let me offer the idea of "universal common sense."
"Common sense" is "the way most people look at things." The way people commonly use this phrase today is what we might call "local common sense." It is the common sense of the people who are currently alive and part of our culture.
Local common sense is useful for local questions. Universal common sense is useful for universal questions.
Since religion, as well as science, claims to address universal questions, we ought to rely on universal common sense. The galactic wisdom of crowds, if you will.
Of course, we can't talk to people in the past or future. But even when we rely on local common sense, we are in some sense making a prediction about what our peers would say if we asked them the question we have in mind.
We can still make a prediction about what, say, a stone age person, or a person living 10,000 years in the future, would say if we asked them about whether Catholicism was real. The stone age person wouldn't know what you're talking about. The person 10,000 years in the future, I suspect, wouldn't know either, as Catholicism might have largely vanished into history.
However, I expect that science will still be going strong 10,000 years in the future, if humanity lives to that point. And I expect that by then, vastly more people will believe (or have believed) in a form of scientific materialism than will believe in any particular religion. Hence, I predict that "universal common sense" is that we ought not spend much time at all investigating the truth of any particular religion.
I think imagining that current view X is justified because one imagines that future generations will also believe X is really unconvincing.
I think most people think their views will be more popular in the future. Liberal democrats and Communists have both argued that their view would dominate the world. I don't think it adds anything other than illustrating the speaker is very confident of the merits of their worldview.
If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation so there are no future humans, would you change your mind?
"If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation so there are no future humans, would you change your mind?"
I've had a little more chance to flesh out this idea of "universal common sense." I'm now thinking of it as "the wisdom of the best parts of the past, present, and future."
Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view.
In the AI doom scenario, I think we should reject the common sense of the denizens of that future on matters pertaining to AI doom, as they weren't wise enough to avoid doom.
In the Mormon scenario, I think that if the future is Mormon, then that suggests Mormonism would probably be a good thing. I generally trust people to steer toward good outcomes over time. Hence, if I believed this, then that would make me take Mormonism much more seriously.
I have a wide confidence interval for this notion of "universal common sense" being useful. Since you seem to be confidently against it, do you have further objections to it? I appreciate the chance to explore it with a critical lens.
I'm not against it- I think it's an okay way of framing something real. Your phrasing here is pretty sensible to me.
"Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view."
But I have concerns about the future perspective, in theory and practice.
I think people will just assert future people will agree with them. You think future people will agree with you, I think future people will agree with me. There's no way to settle that dispute conclusively (maybe expert predictions or a prediction market can point to some answer), so I think imagining the future perspective is basically worthless.
In contrast, we can look at people today or in the past (contingent on historical records). The widespread belief in the divine is, I think, at least another piece of (weak?) evidence that points to taking the wager. This could be weakened if secular societies or institutions were much more successful than their contemporaries.
"My view makes perfect sense, contemporary culture is crazy, and history will bear me out when my perspective becomes a durable new form of common sense" is a statement that, while it scans as arrogant, could easily be true, and has been many times in the past. It at least explains why a person who subscribes to "social intelligence" as a guide might still hold many counterintuitive opinions. I agree with you, though, that it's not useful for settling disputes when people disagree in their predictions about "universal common sense."
If you believe that current and past common sense is a better guide, then doesn't that work against Pascal's Wager? I mean, how many people now, or in the past, would agree with you that Pascal's Wager is a good idea? I think it has stuck around in part because it's so counterintuitive. We don't exactly see a ton of deathbed conversions, much less for game-theoretic reasons.
I would say if we use other people's judgment as a guide for our own, it's an argument for the belief in the divine/God/the supernatural and it becomes hard to say Christianity and Islam have negligible probability. So rules that are like "ignore tiny probability" don't work. Your idea of discounting probability as utility rises still works but we've talked about why I don't think that's compelling enough.
I don't have good survey evidence on Pascal's Wager, but I think a lot of religious believers would agree with the general concept- don't risk your soul, life is short and eternity is long, and other phrases like that seem to reference the basic idea.
Ahh yes your last paragraph is a good point that I hadn't considered. It doesn't convince me that I should reject the wager, but it does mean that I shouldn't take extreme actions that go against most people's moral beliefs in pursuit of these types of wagers.
My perspective on the issue is that by accepting the wager, you are likely to become far less effective at achieving your terminal goals (since even if you can discount higher-probability wagers, there will eventually be a lower-probability one that you won't be able to think your way out of and thus will have to entertain on principle), and to become vulnerable to adversarial attacks, leading to actions which in the vast majority of possible universes are losing moves.
If your epistemics require that you spend all your money on projects that will, for all intents and purposes do nothing (and which if universally followed would lead to a clearly dystopian world where only muggers get money), then I’d wager that the epistemics are the problem. Rationalists, and EAs, should play to win, and not fall prey to obvious basilisks of our own making.
This argument is one that makes intuitive sense, and of course I am no exception to that intuition. However, intuition is not the path to truth; logic is. Unless you can provide a logic-founded reason why an almost certain loss with a minuscule chance of a huge win is worse than an unlikely loss with a probable win, I can't accept the argument.
Although I don't think Yitz's comment is persuasive, I don't think your response is either. What's the "logic-founded" reason for accepting the wager? You might say expected value theory, but then, it's possible to ask what the reason for that is, etc. It's intuition all the way down.
That's true, but I think we need to make the smallest number of intuition-based assumptions possible. Yitz's suggestion adds an extra assumption ON TOP of expected value theory, so I would need a reason to add that assumption.
Oops, I got mixed up, and that response related to a totally different comment. See my reply below for my actual response.
1. Expected value theory recommends sometimes taking bets that we expect to lose.
3. We should not adopt decision theories that recommend sometimes taking bets that we expect to lose.
You reject 3.
Yitz rejects 1.
This is not a matter of making more or fewer assumptions. Instead, it's a matter of weighing which of the propositions one finds least plausible. There may be further arguments to be made for or against any of these points, but it will eventually bottom out at intuitions.
Oh wait sorry I got confused with totally different comment that did add an extra assumption. My bad...
As for the actual comment this thread is about, expected value theory can be derived from the axioms of VNM-rationality (which I know nothing about btw), whereas proposition 3 is not really based on anything as far as I'm aware; it's just a kind of vague axiom unto itself. I feel we should refrain from using intuitions as much as possible except when forced to at the most fundamental level of logic, like how we don't just assume 1+1=2; we reduce it to a more fundamental level of assumptions: the ZFC axioms.
In summary, propositions 1 and 3 are mutually exclusive, and I think 1 should be accepted more readily due to it being founded in a more fundamental level of assumptions.
Then it becomes a choice of accepting the VNM axioms or proposition 3 above.
Like I said, I agree that we should reject 3, but the reason for rejecting 3 is not because it is based on intuition (or based on a non-fundamental intuition). The reason is because it's a less plausible intuition relative to others. For example, one of the VNM axioms is transitivity: if A is preferable to B, and B is preferable to C, then A is preferable to C.
That's just much more plausible than Yitz's suggestion that we shouldn't be "vulnerable to adversarial attacks" or whatever.
It's also worth noting that your justification for accepting expected value theory is not based on the VNM axioms, since you know nothing about them! Your justification is based on a) your own intuition that it seems correct and b) the testimony of the smart people you've encountered who say it's a good decision theory.
Well, the only existing evidence for the nature of a God, given it exists, is the beliefs billions of people have held over thousands of years. This evidence suggests (no matter how weakly) that God is as they think it is. In the absence of any other evidence, this means it is more likely that God is as they think than anything else.
(Especially so when you think about how many people have believed these things and over how much time; surely it's reasonable to consider the possibility that they are right. [I think I might be talking about "epistemic humility" but I'm not familiar with the terminology])
But the evidence for a reasonable god seems at least as plausible to me.
Although more people throughout history may have believed in an arbitrary unreasonable god… those people seem a lot less knowledgeable and logical than those who believe a benevolent creator is possible.
The only point I was making was that not all versions of God are equally likely, so the possible utilities of heaven and hell don't cancel. I don't know what the most likely form of God is, but it sounds like we both agree that not all of them are equally likely.
You can get out of the infinite (+/-) payoffs by exponentially discounting future well-being. This assumes that, while in heaven or hell, you experience a finite amount of well-being at every point in time (that doesn't grow exponentially without bound), but you live for an infinite amount of time.
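For what it's worth, the finite total here drops out of a simple geometric series. A sketch (the per-period well-being and discount factor are placeholder numbers):

```python
# Per-period well-being w is finite; you live forever, but each period is
# discounted by a factor d in (0, 1). The discounted total is then the
# geometric series w * (1 + d + d^2 + ...) = w / (1 - d), which is finite.
w = 1.0   # well-being per period (finite, non-growing, by assumption)
d = 0.99  # exponential discount factor per period (illustrative)

partial = sum(w * d**t for t in range(10_000))  # truncated eternity
closed_form = w / (1 - d)
print(partial, closed_form)  # both ≈ 100: an eternity worth a finite amount
```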
I suppose I could see reason to make this assumption, given that you could get used to the luxuries of heaven and it would start to be less pleasurable. However this doesn't really eliminate the problem because there's still the possibility that this assumption is incorrect, meaning the probability of infinite payoff is still non-zero and therefore the wager still stands.
This might take care of the wager's implications for what we should try to believe ourselves, but it would probably have weird implications of its own (for fanatical EAs, at least). It might suggest that critical thinking courses have a much higher expected utility than bed nets.
Ha, yes, I think other comments (e.g. Zach's) are better at getting at the deeper issues here. It's hard to explain why, but it sure does seem crazy to allow the tiniest chances of infinite value to swamp all else.
Huh, that's an interesting position that I wish I could agree with, but I just can't see why beliefs billions of people have had for thousands of years would be less likely to be true than a God who does in fact exist but is totally different from what everyone thought and instead rewards... reason?
Do you think you could elaborate on why this Evidentialist God seems more likely to you?
On Amanda Askell's site, linked in another comment by ColdButtonIssues, she gives a reason to think an evidentialist god could be more likely: "divine hiddenness" plus God making us capable of evidentialism. Roughly, the idea is to ask the question, "Why would a god want us to irrationally believe in it?"
It's also plausible that people's beliefs in a supernatural punishing/rewarding god can be explained by evolutionary/cultural factors that wouldn't reliably track the truth.
I'm pretty sure that religion and an Evidentialist God often don't contradict each other. This article has many examples from Christianity, though I'm certain there are many more examples in other religions:
"Yet most religious traditions allow and even encourage some kind of rational examination of their beliefs."
Which also says "from the earliest of times, Christians held to a significant degree of compatibility between faith and reason." and Aquinas had a rule that ''an interpretation of Scripture should be revised when it confronts properly scientific knowledge.’’
Obviously pieces of the Bible can be used to justify any viewpoint, but I think it's at least worth mentioning this one verse that points directly against the Christian God being evidentialist:
John 20:29 Jesus said to him, "Because you have seen me, you have believed. Blessed are those who have not seen, and have believed."
I see this as saying that doubting your faith by needing evidence is less noble than having full trust in your faith by not requiring evidence. In other words, true faith doesn't need evidence.
I found this quote when someone pointed it out displayed at the front of a church, and regardless of its relevance to this conversation, I think it's a fascinating verse, especially since it was considered important enough for this church to place in large writing at its entrance.
I think one takeaway is that, given the stakes of the question, people should actually assess the arguments offered for each religion's truth. It's probably not correct to just assume a thought-experiment God (the Evidentialist God) is as plausible as gods for which there is at least purported evidence that many find convincing.
But if Evidentialist God is the most likely, we should dedicate ourselves to spreading Bayesian statistics or something like that.
I think it makes sense to spend a substantial amount of time researching religions. If you're terminally ill, you should convert now.
Also, how you weigh suffering/joy probably matters. If Mormonism is true, it's super-hard to go to Hell/become a son of perdition. So if you want to minimize odds of eternal punishment, joining the LDS Church may be less attractive. But they do have essentially tiers of heaven so if you're more joy-motivated, research them!
Interesting. That passage could be interpreted very differently, though, even in favor of an evidentialist God (e.g. seeing is effortless, while believing is harder and includes mulling over evidence).
I'm pretty sure that passage is in the context of doubting Thomas, though that dude was in a very different situation. Instead of gods walking among us, we have many mutually exclusive religions vying for our attention. To have blind faith in one seems like a good way to end up in the wrong ideology.
As that article demonstrates, many experts in Christianity concluded reason is an essential guide to the correct ideology. And I'm sure they saw the passage you're referring to. So I'm inclined to believe them over some church you passed. Not to mention the strong evidentialist streak in other religions too.
The content of the beliefs matters to their credibility, far more than sheer numbers. I give ~zero weight to "what everyone thought", if I don't see any reason to expect their beliefs about the matter to be true. And the idea that an omnibenevolent God would punish people for being epistemically rational strikes me as outright incoherent, and so warrants ~zero credence.
Perhaps I should have said "...than a Pascalian God who rewards belief regardless of whether the belief is epistemically justified." (Obviously Pascal took this to be an accurate characterization of Christianity, but it doesn't really matter for my purposes. If a world religion doesn't match this description, then it won't be supported by the reasoning of Pascal's Wager.)
There isn't one. To reject Pascal's Wager, you just have to conclude that you don't care about infinity. Taking Pascal's Wager is the correct utilitarian response. You probably need to weight religions both by how likely they are to be true and how likely you can "win" conditional upon them being true.
Amanda Askell has a good rundown on why most objections to Pascal's Wager are bad.
Askell's first response is a non sequitur. The person deciding to take Pascal's wager does so under uncertainty about which of the n gods will get them into heaven. The response is assuming you're already in the afterlife and will definitely get into heaven if you choose door A.
However, the n-god Pascal's wager suggests that believing in any one of the possible gods (indeterminate EU) is better than believing in no god (-infinite EU). Believing in all of them is even better (+infinite EU). There's nothing in the problem statement saying that each god will send you to hell for believing in any other god (although it can be inferred from the Ten Commandments that Yahweh will do so).
I'm not sure I buy her last argument. Pascal's Wager does seem like a reductio ad absurdum of expected utility theory. If you accepted it, then, by equivalent logic, you would have to act on every other belief, no matter how improbable, as long as it had an infinite payoff. For example, somebody could tell me that if I stepped on a crack, the universe would end. Since there's a non-zero chance that they're correct, I couldn't step on any cracks ever again. As long as these potentially infinite-payoff outcomes aren't mutually exclusive, you would have to accept them, and there's no bound on the number of them. Imagine being OCD in this world! Since this is clearly insane, there must be a fundamental flaw with how expected utility theory deals with infinities. Yet another reason to embrace virtue ethics :)
In theory, you could be stuck doing bizarre things like that. But I don't think you would in this world. Taking infinity seriously most plausibly involves converting to Christianity, or failing that Islam, or failing that some other established religion.
Major religions normally condemn occult practices and superstitions from outside the faith. If someone comes up to you and claims to be a demon that will inflict suffering, someone who has already bet on the Christian God or Allah, for instance, can just say "go away": I'm already maximizing my chance of infinite reward and minimizing my chance of infinite punishment.
My thoughts are pretty similar to those already expressed by ryancbriggs and MichaelStJules, and some others.
What does it mean to accept Pascal's Wager?
I understand Pascal's Wager to argue that it's more rational to believe in God than not because you end up in Heaven if you believe in God and not if you don't, and heaven is infinitely more valuable than any alternative. So even if your rational credence in God existing is very very low, it's still more rational to believe in God than otherwise.
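The arithmetic behind that argument can be sketched directly. This is a toy illustration under my own assumptions (a tiny non-zero credence `p`, a finite cost of belief, and `math.inf` standing in for the infinite payoff of heaven), not anything from Pascal himself:

```python
import math

# Toy expected-value comparison behind the wager (illustrative only).
p = 1e-9            # assumed tiny but non-zero credence that God/heaven exist

HEAVEN = math.inf   # infinite payoff for belief, if God exists
FINITE_COST = -1.0  # finite cost of belief (worship, constraints), if not

ev_believe = p * HEAVEN + (1 - p) * FINITE_COST  # infinite for any p > 0
ev_disbelieve = 0.0                              # at best some finite payoff

# Any non-zero p makes ev_believe infinite, swamping any finite alternative.
```

This is why the argument is insensitive to how small `p` is: multiplying infinity by any positive probability still yields infinity.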
I take the correct core of the argument to be that sometimes it can be prudential to believe in something you think is likely false (such as the existence of God and Heaven), or pursue some outcome you think is very unlikely (getting into heaven), because the value of the reward is high enough.
The general form of this argument is generally accepted and acted upon. People do similar sorts of things all the time when they pursue success in competitive or challenging environments with high rewards to the biggest winners, like high level professional sports, startups, tenure track academia, etc etc.
I don't know if Pascal's Wager has any implications beyond this?
In real life, there is no binary choice between 'Believe in God and have a chance of getting into heaven' and 'Don't believe in God and have no chance of getting into heaven'. There are several different major world religions, some very different sects inside the same religions, your own individual interpretation, the possibility of infinite utility from technological sources, etc. And this is without getting into how one actually achieves going to heaven in religions - sometimes it's not 'believe in our god and you'll get into heaven', but rather something involving structuring your life around the religion. And of course, many of these directly or indirectly oppose each other even apart from the opportunity cost.
Perhaps Pascal's Wager could be an argument to set one's life up to deliberately pursue even one source of infinite utility rather than none. I don't think this is the worst argument in the world and it could work as part of a cluster of arguments, but it's a pretty weak one in a vacuum since it's completely silent about crucial matters such as how to choose between any of these possible infinite utilities or how to pursue them.
I agree that there are difficulties here. However I do think there is a degree of flexibility in choosing beliefs, at both the conscious and subconscious levels, and people often end up believing things that are helpful to believe in some way even if not necessarily completely true. Intentionally trying to believe something you think is likely false and you have no other reason to believe in is probably going to be very hard, but you may easily end up believing weak arguments if other incentives line up in favor of the belief.
I clarified in my edit at the top of my post what I mean by "accept Pascal's Wager". To repeat, I see it as accepting the idea that the way to do the most (expected) good is to prevent as many people as possible from going to hell, and cause as many as possible to go to heaven, regardless of how likely it is that heaven/hell exists (as long as it's non-zero).
As for what this entails I have no idea. For now I'm just trying to decide whether to pursue this aim or not. The way I would actually do that comes later, if I choose to accept.
There are many good critiques of the details of Pascal's wager. For example:
He assumed that reason couldn't help you figure out if God existed, so presumably it was just a leap of faith. 
He assumed God wouldn't mind someone brainwashing themselves into a religion they disagree with, out of pure self interest.
He gives little to no reason to follow one religion over another, since almost all of them claim the afterlife can be very positive or very painful
but I have looked and not found any good reason to dismiss what I think is the heart of the argument: that we should take the possibility of going to Hell or Heaven super seriously, more than any Earthly matter.
Pascal's wager offers little insight on what to do with this information, but I think a good next step is trying to find out for sure if Heaven and Hell are fiction or not or if any/which religion proclaiming them is plausible.
P.S. Please correct me if I'm wrong about anything here or if I missed anything important.
Very interesting question, though I wouldn't mind if you clarified it. What exactly do you mean by accepting Pascal's wager? Practicing any major religion until you start believing in it?
Perhaps more to the point: Pascal’s wager assumes "that God would not grant eternal life to a non-believer and that sincerity in one’s belief in God is not a requirement for salvation." which may not be obviously true to many EAers and religious adherents.
Oh, in that case, I haven't seen any good reasons not to take hell and heaven seriously (e.g. at least try to find out for sure if they're fiction or not) from effective altruists or others, but please let me know if you come across any.
I've been mulling over what this would entail, and plan on sharing my ideas on the EA forum in the next few days. I'd love to hear your thoughts on it. Thanks for asking this really cool question.