Posts

How to run an effective stall (we think) 2021-03-24T08:30:37.017Z
peterbarnett's Shortform 2021-03-08T23:03:29.429Z

Comments

Comment by peterbarnett on peterbarnett's Shortform · 2021-06-04T02:08:02.887Z · EA · GW

Updating Moral Beliefs

Imagine there is a box with a ball inside it, and you believe the ball is red. But you also believe that in the future you will update your belief and think that the ball is blue (the ball is a normal, non-color-changing ball). This seems like a very strange position to be in, and you should just believe that the ball is blue now.

This is an example of how we should deal with beliefs in general: if you think that in the future you will update a belief in a specific direction, then you should just update now.
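This is sometimes called conservation of expected evidence: under Bayesian updating, your current credence already equals the probability-weighted average of your possible future credences, so expecting to move in a particular direction is inconsistent. A minimal sketch (the specific numbers are just illustrative, not from the post):

```python
# Sketch of conservation of expected evidence: the expected value of
# your future (posterior) belief equals your current (prior) belief,
# so you can't consistently expect to update in a specific direction.

def posterior(prior, p_e_if_true, p_e_if_false):
    """P(H | E) via Bayes' rule."""
    p_e = prior * p_e_if_true + (1 - prior) * p_e_if_false
    return prior * p_e_if_true / p_e

prior = 0.3            # illustrative credence that the ball is red
p_e_if_red = 0.8       # chance of seeing some evidence E if it is red
p_e_if_blue = 0.2      # chance of seeing E if it is blue

p_e = prior * p_e_if_red + (1 - prior) * p_e_if_blue
post_if_e = posterior(prior, p_e_if_red, p_e_if_blue)
post_if_not_e = posterior(prior, 1 - p_e_if_red, 1 - p_e_if_blue)

# Average over what you might see tomorrow: it recovers today's belief.
expected_future_belief = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(round(expected_future_belief, 10))  # 0.3, the prior
```

Each individual observation can move you up or down, but the pull in each direction cancels out in expectation; predictably ending up at "blue" means you should already be at "blue".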

I think the same principle applies to moral beliefs. If you think that in the future you'll believe that it's wrong to do something, then you should believe that it's wrong now.

As an example of this, if you think that in the future you'll believe eating meat is wrong, then you sort of already believe eating meat is wrong. I was in exactly this position for a while: thinking that in the future I would stop eating meat, while also continuing to eat meat. A similar case is deliberately remaining ignorant about something because learning about it would change your moral beliefs. If you're avoiding learning about factory farming because you think it would cause you to believe eating factory farmed meat is bad, then on some level you already believe that.

Another case of this comes up in politics, when a politician says it's 'not the time' for some political action but that in the future it will be. This is 'fine' if it's 'not the time' for strategic reasons, such as the electorate not reelecting the politician. But I don't think it's consistent to say an action is currently not moral, but will be moral in the future. Obviously this only works if the action now and the action in the future are actually equivalent.

Comment by peterbarnett on Project For Awesome 2021 was a success! · 2021-03-25T00:26:41.748Z · EA · GW

Wow, that's great! Very happily surprised that a charity focused on wild animal welfare is getting recognition in an event like this which isn't explicitly EA. 

Comment by peterbarnett on peterbarnett's Shortform · 2021-03-08T23:03:29.788Z · EA · GW

Flipping the Repugnant Conclusion

Imagine a world populated by many, many people (trillions). These people's lives aren't purely full of joy, and have a lot of misery as well. But each person thinks that their life is worth living. Their lives might be a bit boring, or they might be full of huge ups and downs, but on the whole they are net-positive.

From this view it seems really strange to think that it would be good for every person in this world to die/not exist/never have existed in order to allow a very small number of privileged people to live spectacular lives. It seems bad to stop many people from living a life that they mostly enjoy, in order to allow the flourishing of the few.

I think this hypothetical is a decent intuition pump for why the Repugnant Conclusion isn't actually repugnant. But I do think it might be a little bit dishonest or manipulative. It frames the situation in terms of fairness and equality; we can sympathize with the many slightly happy people who are maybe being denied the right to exist, and think of the few extremely happy people as the privileged elite. It also takes advantage of status quo bias; by beginning with the many slightly happy people it seems worse to then 'remove' them. 

Comment by peterbarnett on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-04T01:36:36.511Z · EA · GW

It seems as if EA organisations were in need of more operations people around 2018 (as evidenced by that 80k article). Is there currently a need for more operations people in EA orgs?

Relatedly, how difficult is it to get a position doing operations work for an EA org, especially if you have some but not tonnes of operations experience?