Posts

Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. 2021-05-16T13:47:08.223Z
Incompatibility of moral realism and time discounting 2020-12-12T18:47:10.308Z
Oxford college choice from EA perspective? 2020-11-23T15:38:51.599Z

Comments

Comment by wuschel on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-18T08:26:28.518Z · EA · GW

Very interesting point; I had not thought of this.

I do think, however, that SIA, utilitarianism, SSA, and average utilitarianism all kind of break down once we have an infinite number of people. People like Bostrom have thought about infinite ethics, but I have not read anything on that topic.

Comment by wuschel on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-18T08:16:02.012Z · EA · GW

I think you are correct that there are RC-like problems that AU faces (like the ones you describe), but the original RC (for any population leading happy lives, there is a bigger population leading lives barely worth living whose existence would be better) can be refuted.

Comment by wuschel on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-18T08:07:22.951Z · EA · GW

1. Elaborating on why I think Tarsney implicitly assumes SSA:

You are right that Tarsney does not take any anthropic evidence into account. Therefore it might be more accurate to say that he forgot about anthropics, or does not think it is important. However, it just so happens that assuming the Self-Sampling Assumption would not change his credence in solipsism at all: if you are a random person drawn from all actual persons, you cannot take your existence as evidence of how many people exist. So by not taking anthropic reasoning into account, he gets the same result as if he had assumed the Self-Sampling Assumption.
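To make this concrete, here is a toy sketch (my own illustration; all numbers are made up) contrasting how SSA and SIA update a prior over solipsism:

```python
# A toy contrast of SSA and SIA updates (all numbers are made up for illustration).
prior = {"solipsism": 0.5, "big_world": 0.5}        # hypothetical prior credences
observers = {"solipsism": 1, "big_world": 10**9}    # how many people exist under each

# SSA: you are a random sample from the people who actually exist, so your
# existence is no evidence about how many there are; the posterior is the prior.
ssa_posterior = dict(prior)

# SIA: weight each hypothesis by its number of observers, then renormalise.
weights = {h: prior[h] * observers[h] for h in prior}
total = sum(weights.values())
sia_posterior = {h: w / total for h, w in weights.items()}

print(ssa_posterior)   # {'solipsism': 0.5, 'big_world': 0.5}
print(sia_posterior)   # solipsism ~1e-9, big_world ~1.0
```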


2. Doesn't the Self-Indication Assumption say that the universe is almost surely infinite?

Yes, that is the great weakness of the SIA. You are also completely correct that we need some kind of more sophisticated mathematics if we want to take the possibility of infinitely many people into account. But even if we just consider the possibility of very many people existing, the SIA yields weird results. See for example Nick Bostrom's thought experiment of the presumptuous philosopher (text copy-pasted from here):

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1 (whereupon the philosopher runs the God’s Coin Toss thought experiment and explains Model 3)!"
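The "about a trillion times more likely" is just SIA's observer weighting applied to the numbers in the quote; a minimal sketch of the arithmetic (my illustration, not Bostrom's):

```python
# SIA arithmetic behind the presumptuous philosopher (observer counts from the quote).
n1 = 10**24   # observers under T1: a trillion trillion
n2 = 10**36   # observers under T2: a trillion trillion trillion
prior_odds = 1                    # super-duper symmetry is indifferent
sia_odds = prior_odds * n2 // n1  # SIA multiplies the odds by the observer ratio
print(sia_odds)                   # 10**12: T2 comes out a trillion times more likely
```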

Comment by wuschel on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-18T07:51:17.508Z · EA · GW

I think the way you put it makes sense, and if you put the numbers in, you get to the right conclusion. The way I think about this is slightly different, but (I think) equivalent:

Let $X$ be the set of all possible persons, and $p_x$ the probability that person $x$ exists. The probability that you are the person $x$ is $p_x / \sum_{y \in X} p_y$. Let's say some but not all possible people have red hair. The subset of possible people with red hair is $R \subseteq X$. Then the probability that you have red hair is $\sum_{x \in R} p_x / \sum_{y \in X} p_y$.

In my calculations in the post, the set of all possible people is the one solipsistic guy and the $n$ people in the non-solipsistic universe (with their probabilities of existence being $p_{\mathrm{solo}}$ and $p_{\mathrm{normal}}$). So the probability that you are in a world where solipsism is true is $p_{\mathrm{solo}} / (p_{\mathrm{solo}} + n \cdot p_{\mathrm{normal}})$.
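For concreteness, a minimal sketch of that calculation in code (the names and numbers are placeholders following my notation above):

```python
# SIA-style probability that you are one of the people in `target`,
# given each possible person's probability of existing (sketch, my notation).
def sia_prob(target, exist_prob):
    total = sum(exist_prob.values())
    return sum(exist_prob[x] for x in target) / total

n, p_solo, p_normal = 100, 0.5, 0.5                       # placeholder numbers
exist_prob = {"solipsist": p_solo}
exist_prob.update({f"person_{i}": p_normal for i in range(n)})

# Probability that you are in the world where solipsism is true:
print(sia_prob(["solipsist"], exist_prob))   # = p_solo / (p_solo + n * p_normal)
```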

Comment by wuschel on Announcing "Naming What We Can"! · 2021-04-09T12:36:08.517Z · EA · GW

This comment totally made my day!

Comment by wuschel on A parable of brightspots and blindspots · 2021-03-23T17:55:35.189Z · EA · GW

Hi, I am happy your parable finally made it onto the forum. Also: really nice idea to upload the audio of the main text as well. For me at least, this is awesome, as I would much rather listen to things than read them. Wild idea: maybe more people could narrate their posts, and we could have a tag that highlights audio posts, so one could specifically look for them?

Comment by wuschel on Incompatibility of moral realism and time discounting · 2020-12-14T20:34:34.052Z · EA · GW

Thanks for that comment and your thoughts! I am unfortunately unfamiliar with the works of Hare, but it sounds interesting and I might have to read up on that. 

I totally agree with you that there are statements to which we assign truth values that depend on the frame of reference (like "Derek Parfit's cat is to my left", or the temporal ordering of spacelike-separated events).

I would also not have a problem with a moral theory that assigns 2 utilons to an action in one frame of reference and 3 utilons in another.

I do, however, believe that there are some statements that should not depend on the frame of reference.

We have physical theories to predict the outcomes of measurements, so any sensible physical theory should predict the same outcome for any measurement, whichever frame of reference we use to describe it.

We have moral theories to tell us which actions we should take, so any sensible moral theory should prescribe the same actions, whichever frame of reference we use to describe them.

If, however, you do not place that requirement on a moral theory, I see that discounting realists would not have to change their views.

Comment by wuschel on Incompatibility of moral realism and time discounting · 2020-12-12T22:41:09.983Z · EA · GW

Yes, good point. I agree that sufficient specification can make time discounting compatible with moral realism.  

One would have to specify an inertial system from which to measure time. (That would be analogous to specifying the language as English, for example.)

Then we would not have a logical contradiction anymore, which weakens my claim, but we would still have something I find implausible: an inertial system that is preferred by the correct moral theory, even though it is not preferred by the laws of physics.

Comment by wuschel on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-08T23:16:11.822Z · EA · GW

On a side note: I think this is beautifully written, and I would be happy to read future posts from you. These personal glimpses into other people's struggles with EA concepts and values are something I think might be really valuable to the community, and not many people have the talent to provide them.

Comment by wuschel on What are you grateful for? · 2020-11-27T20:16:12.795Z · EA · GW

I am grateful for all the people in the community who are always happy to help with minor things. Every time I have asked someone for training advice at a conference, for an explanation of a word on Dank EA Memes, or for career advice on the Forum, I have gotten really nice and detailed answers. I feel really accepted through that, especially considering how rare these things are on the internet.

Comment by wuschel on Competitive Ethics · 2020-11-24T14:54:28.653Z · EA · GW

Interesting idea, although I fear we might not like what we find...

Comment by wuschel on Competitive Ethics · 2020-11-24T14:52:06.722Z · EA · GW

I do find that interesting. Research projects that come to mind:

Gene-sequence 200 philosophy grad students: 100 consequentialists and 100 deontologists. See if you find any trends.

Then take 100 fourteen-year-olds, gene-sequence them, and try to predict their utilitarian and deontological leanings.

Then confront these fourteen-year-olds with arguments for and against consequentialism and deontology.

See whether your predictions were significantly better than chance.

Comment by wuschel on Five New EA Charities with High Potential for Impact · 2020-11-02T10:27:58.686Z · EA · GW

I really like this accessible format. However, I think it would be helpful if there were at least footnotes to the source of your information whenever something is an interesting claim (for example, "One in three children has dangerous levels of lead in their bloodstream").
I fear that without traceability of information within official EA contexts, a lot of half-true hearsay seeps through the cracks.
I don't expect any of the information in this post to be false, however.

Comment by wuschel on How much does a vote matter? · 2020-10-31T10:29:03.874Z · EA · GW

I completely agree with you. This whole reasoning seems to heavily depend on using causal decision theory instead of its (in my opinion) more sensible competitors.

Comment by wuschel on The Fable of the Bladder-Tyrant · 2020-10-01T08:14:25.377Z · EA · GW

I am not sure if no one is getting the joke, or if people are just downvoting because they don't want ironic, jokey content on the EA Forum.

Comment by wuschel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T07:24:46.536Z · EA · GW

Would you rather be one or two dogs?

Comment by wuschel on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T17:16:19.537Z · EA · GW

Interesting questions. Although I don't think I know the answers to any of them better than you do, I have another possible reason why the suffering in your situation might not be bad:

You could argue, through the lens of personal identity, that if you were to self-modify so as to no longer feel pain via sympathy, the person you would turn into would no longer be you in the morally relevant sense.

This reasoning, however, would only apply if your ethics cares about personal identity (for example, by caring about you or your loved ones surviving in some sense). Having preferences like that seems pretty intuitive, but before embracing this view I would recommend having a look at the counterarguments by Derek Parfit (https://plato.stanford.edu/entries/identity-ethics/#IDM).

Comment by wuschel on Are we living at the most influential time in history? · 2020-04-27T18:25:14.575Z · EA · GW

Imagine you play cards with your friends. You have the deck in your hand. You are pretty confident that you have shuffled the deck. Then you deal yourself the first 13 cards. And what a surprise: you happen to find all the clubs in your hand!

What is more reasonable to assume: that you just happened to draw all the clubs, or that you were wrong about having shuffled the cards? Rather the latter.

Compare this to:

Imagine thinking about the HoH hypothesis. You are pretty confident that you are good at long-term forecasting, and you predict that the most influential time in history is: NOW?!

Here too, so the argument goes, it is more reasonable to assume that your assumption of being good at forecasting the future is flawed.
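Put in Bayesian terms, the card analogy looks like this (a toy calculation; all numbers are my own illustration):

```python
from math import comb

# Toy Bayes version of the card example (all numbers are made up).
prior_shuffled = 0.99                   # "pretty confident" the deck was shuffled
p_hand_if_shuffled = 1 / comb(52, 13)   # chance of dealing yourself all 13 clubs
p_hand_if_not = 0.5                     # e.g. the deck came back sorted by suit

posterior_shuffled = (prior_shuffled * p_hand_if_shuffled) / (
    prior_shuffled * p_hand_if_shuffled + (1 - prior_shuffled) * p_hand_if_not
)
print(posterior_shuffled)   # ~3e-10: the "I shuffled" belief collapses
```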