Posts

On GiveWell's estimates of the cost of saving a life 2020-10-01T10:22:34.170Z

Comments

Comment by Aaron__Maiwald on [deleted post] 2021-02-07T15:18:31.228Z

SEARCHING FOR A GRAPHIC DESIGNER

Comment by Aaron__Maiwald on What actually is the argument for effective altruism? · 2020-11-12T20:43:28.047Z · EA · GW

I actually think more is needed.

If “it's a mistake not to do X” means “it's in alignment with the person's goals to do X”, then I think there are a few ways in which the claim could be false.

I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:

  1. you are already close to optimal effectiveness, and the increase in effectiveness from additional research into EA is so small that you would maximize by just using that time to earn money and donate it, or to have a direct impact
  2. pursuing EA causes you not to achieve another goal you value at least equally, or a set of goals which you, in total, value at least equally

If that's true, then we need to narrow the scope of the conclusion VERY much. I estimate that the fraction of people caring about the common good for whom Ben's claim holds is in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?

Comment by Aaron__Maiwald on 80,000 Hours: Bad habits among people trying to improve the world · 2020-10-07T14:48:26.593Z · EA · GW

Super interesting, thanks!

Comment by Aaron__Maiwald on What actually is the argument for effective altruism? · 2020-09-28T12:00:27.997Z · EA · GW

I'd say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good you do as a result of searching for better ways to do good rather than going by common sense, A. It seems to me that if C >= A, then pursuing the project of EA wouldn't be worth it. If, however, C < A, then pursuing the project of EA would be worth it, right?

To be more concrete, let us say that the difference in value between the commonsense distribution of resources to do good and the ideal one is only 0.5%. Let us also assume it would cost you only a minute to find the ideal distribution, and that the value of spending that minute in your commonsense way is smaller than that 0.5% increase. Surely it would still be worth seeking the ideal distribution (= engaging in the project of EA), right?
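To make the comparison explicit, here is the arithmetic as a sketch (V is my own notation for the value of the commonsense distribution, not something from the post I'm replying to):

```latex
% V: value of the commonsense distribution (my notation)
A = 1.005\,V - V = 0.005\,V \quad \text{(additional good from the ideal distribution)}
\qquad
\text{pursue EA} \iff C < A = 0.005\,V
```

On the stated assumptions, C is the value of one commonsense minute, which is below 0.005V, so the inequality holds and searching is worth it.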

Comment by Aaron__Maiwald on How you can contribute to the broader EA research project · 2020-09-09T18:25:29.584Z · EA · GW

Do you still recommend these approaches, or has your thinking shifted on any of them? Personally, I'd be especially interested in whether you still recommend to "Produce a shallow review of a career path few people are informed about, using the 80,000 Hours framework".

Comment by Aaron__Maiwald on Making decisions under moral uncertainty · 2020-09-09T18:14:02.304Z · EA · GW

Hey, thank you very much for the summary!

I have two questions:

(1) How should one select which moral theories to use in one's evaluation of the expected choice-worthiness of a given action?

"All" seems impossible, supposing the set of moral theories is indeed infinite; "whatever you like" seems to justify basically any act by just selecting or inventing the right subset of moral theories; "take the popular ones" seems very limited (admittedly, I dont have an argument against that option, but is there a positive one for it?)

(2) How should one assign probabilities to moral theories?

I realise that these are probably still controversial issues in philosophy, so I don't expect a definitive solution. Rather, any (even speculative) ideas on how to resolve them would be great!
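For concreteness, here is a minimal sketch in Python of the expected choice-worthiness calculation that both questions feed into; the theory names and numbers are made up for illustration:

```python
# A minimal sketch (theories and numbers are made up for illustration).

def expected_choiceworthiness(credences, cw, action):
    """Credence-weighted average of an action's choice-worthiness across theories."""
    return sum(credences[theory] * cw[theory][action] for theory in credences)

# Question (1): which theories belong in this dictionary at all?
cw = {
    "utilitarianism": {"donate": 10, "dont_donate": 0},
    "deontology":     {"donate": 2,  "dont_donate": 0},
}

# Question (2): where do these probabilities come from?
credences = {"utilitarianism": 0.6, "deontology": 0.4}

print(expected_choiceworthiness(credences, cw, "donate"))       # 6.8
print(expected_choiceworthiness(credences, cw, "dont_donate"))  # 0.0
```

Question (1) is about which keys the dictionaries should contain; question (2) is about where the credences come from.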

Comment by Aaron__Maiwald on [deleted post] 2020-09-08T05:46:50.263Z

Correct me if I'm wrong, but it seems to me that a common view in EA is the following:

(A1) If the expected net-values of performing any action A and its omission are equal, then the moral value of A is neutral.

In more concrete terms: if donating to AMF and not donating to AMF would produce the same net value, then donating to AMF would be morally neutral.

This, I think, has implications that are very weird. To see why, let's consider this case:

Two hitmen line up to execute some person P. (1) They both simultaneously shoot at P, and each shot on its own would suffice to kill P. (2) In expectation, the only difference in value between pulling the trigger and not pulling it is that P dies.

Now, for both hitmen it is true that the net values of shooting P and not shooting P are equal: whichever of them holds fire, the other's shot kills P anyway. This, I think, follows from (1) and (2).

This, together with (A1), implies that shooting P is morally neutral for both hitmen. But how can that be? Is my reasoning flawed here, or is this really an implication of (A1)?

Could this maybe be applied to real situations?
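To make the structure of the case explicit, here is a toy model in Python (the value numbers are illustrative assumptions, not anything from the post):

```python
# Toy model of the two-hitmen case; the values are illustrative assumptions.

VALUE_IF_P_DIES = -100.0
VALUE_IF_P_LIVES = 0.0

def outcome_value(i_shoot: bool, other_shoots: bool) -> float:
    # Premise (1): either shot alone suffices to kill P.
    p_dies = i_shoot or other_shoots
    return VALUE_IF_P_DIES if p_dies else VALUE_IF_P_LIVES

# Given that the other hitman shoots, pulling the trigger changes nothing:
print(outcome_value(i_shoot=True, other_shoots=True))   # -100.0
print(outcome_value(i_shoot=False, other_shoots=True))  # -100.0
# Equal net values, so (A1) classifies shooting as morally neutral for each hitman.
```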
