Comment by alexrjl on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T20:18:01.511Z · score: 6 (8 votes) · EA · GW

I looked for the study because I was surprised by the strength of the statement it was used to support. When I found it, I was annoyed to discover that it doesn't come close to supporting a claim of that strength. That annoyance prompted the tone of the original post, which you have characterised fairly and which was a mistake. I've now edited it out, because I don't want it to distract from the claim I am making:

The study does not support the claim that it is being used to support.

Comment by alexrjl on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T07:03:25.761Z · score: 12 (12 votes) · EA · GW

"Even though historically men have been granted more authority than women, influence of feminism and social justice means that in many circumstances this has been mitigated or even reversed. For example, studies like Gornall and Strebulaev (2019) found that blinding evaluators to the race or sex of applicants showed that by default they were biased against white men."

That is an unreasonably strong conclusion to draw from the study you've cited, not least given that even in the abstract of that study the authors make it extremely clear that "[their] experimental design is unable to capture discrimination at later stages [than the initial email/expression of interest stage]". https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3301982

[Edited for tone]

Comment by alexrjl on Making discussions in EA groups inclusive · 2019-03-04T22:02:04.426Z · score: 3 (3 votes) · EA · GW

To be clear, I didn't make the above point in order to say "you should feel bad because you're white and male". I also didn't make it to say "you should just shut up and defer to the opinions of others here because you're white and male". I made it to try to explain why the choice to say "everyone just needs to suck it up and deal with their own discomfort" is not a choice without downsides; it puts the debate on an uneven footing, where people are not able to participate equally. It doesn't seem a stretch to then say that debates in which some people are at an inherent disadvantage from the start are not as self-evidently optimal a truth-seeking exercise as they may first seem.

Comment by alexrjl on Making discussions in EA groups inclusive · 2019-03-04T21:29:05.290Z · score: 2 (9 votes) · EA · GW

"I'm a white male, and I view my own comfort in debate spaces merely as a means to reach truth, and welcome attempts to trade the former for the latter. Of course, you may be thinking 'that's easy for you to say, cause you're a white male!' And there's no point arguing with that because no one will be convinced of anything. But I'm at least going to say that it is the right philosophy to follow."

Consider the possibility that the philosophy you mention is not as easy for everyone to follow as it is for you. When the entirety of society is built with your comfort in mind, it's very easy to sacrifice some of it as a means to reach truth, especially since, as soon as you leave the debate space, you can go back to not thinking about any of the discomfort you experienced during the discussion. You are safe putting all of your emotional and intellectual energy into the debate space, knowing that if the conversation gets too much for you, you can opt to leave at any time and go on with the rest of your day.

If, however, someone lives in a world where every day includes many instances of them being made uncomfortable (even if each instance might seem trivially small to someone who only sees one), which they have no option to switch off, that person cannot safely put all of their emotional and intellectual energy into a debate space which asks them to sacrifice their level of comfort. They don't have the option of participating in a discussion until it becomes too much, because they have to save energy with which to get through the rest of the day. As a result, they are not able to participate in the debate on an equal footing (if they even choose to at all, given the emotional effort involved).

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-24T17:31:24.645Z · score: 1 (1 votes) · EA · GW

Thanks for this. I've edited your question into the post. I actually think the third bullet point you wrote captures a lot of why I'm excited about a potential Task Y (or a list of them, like the one Aaron posted). If people have the option to do something which both genuinely is good and seems good to them, and hear that this is actively encouraged by the EA community and enough to be considered a valuable part of it, I think that goes quite a long way towards making EA seem less elitist. Having multiple levels of commitment available to people, with good advice about the most effective thing to do given a particular level of commitment, seems plausibly to have a lot of potential.

I have price discrimination in my head as a model here, though I realise the analogy is not a perfect one.

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-24T17:01:17.061Z · score: 1 (1 votes) · EA · GW

Thank you all for the positive comments and extremely useful feedback! I've edited some subheadings and a summary into the original post, though I've (optimistically) left the title as it is, so that people who've read the post and want to come back to participate in the discussion don't get lost. I've also included John's question in the list of important questions to ask.

Can the EA community copy Teach for America? (Looking for Task Y)

2019-02-21T13:38:33.921Z · score: 63 (33 votes)
Comment by alexrjl on Would killing one be in line with EA if it can save 10? · 2018-11-29T21:20:19.917Z · score: 2 (2 votes) · EA · GW

I'm not sure many EAs will agree with your intuition (if I'm understanding your question correctly) that it's morally wrong to kill one person to save 10. There are certainly some moral philosophers who do, however. This dilemma is often referred to as the "trolley problem", and has had plenty of discussion over the years.

You may find this interesting reading; it turns out that people's intuitions about similar problems vary quite a lot across cultures.

Comment by alexrjl on Amazon Smile · 2018-11-22T07:03:33.769Z · score: 1 (1 votes) · EA · GW

I think this is a valid concern, and I certainly don't think presenting 'Amazon Smile is the sort of thing EAs do' is particularly useful or accurate. To try to be slightly more clear about why I do think the mention is a useful starting point:

  • Full EA can be quite a lot to try to introduce to people all at once, even when those people already want to help.
  • Asking people to carefully consider how they make a specific donation is a gentle way in, at least to 'soft EA'. (Giving games are another example of this.)
  • Amazon Smile is a specific donation that you can ask people to consider how they make. If they haven't heard of it before, it's likely that their net experience of hearing about it and setting it up will be positive: they get to donate to a charity with no downside, again rather like a giving game. My hope is that this positive experience will make people more likely to consider where their donations go in future, and/or to respond positively to future things they hear about EA.

I'm uncertain about how large the effects in each case will be, but I don't think they will be negative. I am concerned, however, about the effect of setting up Amazon Smile on the total amount someone donates in future, which I think will be negative if you ignore any potential introduction to EA. This means the probability of the exercise being positive depends on how likely you are to be able to use the conversations as a productive starting point.

Amazon Smile

2018-11-18T20:16:27.180Z · score: 8 (7 votes)
Comment by alexrjl on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-12T22:25:30.189Z · score: 2 (2 votes) · EA · GW

I think there's reason to be cautious with the "highest marginal information comes from studying neglected interventions" line of reasoning, because of the danger of studies not replicating. If we only ever test new ideas, and then suggest funding whichever of them appear from their first study to have the highest marginal impact, it's very easy to end up funding several false positives: interventions which looked good in a single study but don't actually work particularly well.
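To make the selection effect concrete, here is a minimal Monte Carlo sketch (mine, not from the original comment), under the made-up assumption of fifty new interventions that all have zero true effect, each evaluated by a single noisy study:

```python
import random

# A minimal sketch of the selection effect described above: if many new
# interventions are each tested once, and we fund whichever looks best in
# its first study, the winner's measured effect systematically overstates
# its true effect. All numbers here are illustrative assumptions.

random.seed(0)

N_INTERVENTIONS = 50   # assumed number of new ideas, each tested once
TRUE_EFFECT = 0.0      # assume none of them actually works
NOISE_SD = 1.0         # assumed measurement noise in a single small study
N_TRIALS = 10_000      # repeat the whole exercise many times

selected_estimates = []
for _ in range(N_TRIALS):
    # One noisy first-study estimate per intervention.
    estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_INTERVENTIONS)]
    # Fund the intervention that looked best in its first study.
    selected_estimates.append(max(estimates))

mean_selected = sum(selected_estimates) / N_TRIALS
print(f"True effect of every intervention: {TRUE_EFFECT}")
print(f"Average measured effect of the 'best' one: {mean_selected:.2f}")
# Even though no intervention works, the funded one looks strongly
# positive on average - a false positive driven purely by selection.
```

Under these assumptions the "winning" intervention appears to have an effect of roughly two noise standard deviations despite having none, and a replication would be expected to regress towards zero.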

In fact, in some sense the opposite argument could be made: it is possible that the highest marginal information gain will come from further research into a topic which is already receiving lots of funding. Mass deworming is the first example that springs to mind, mostly because there's such a lack of clarity there at the moment, but the marginal impact of finding new evidence about any intervention with lots of money behind it could still be very large.

I guess the rather sad thing is that the biggest impact comes from bad news: if an intervention is currently receiving lots of funding because the research picture looks positive, and a large study fails to replicate the earlier findings, a promising intervention now looks less so. If funding moves towards more promising causes as a result, this is a big positive impact, but it feels like a loss. It certainly feels less like good news than a promising initial study on a new cause area does, but I'm not sure it actually results in a smaller impact.