Posts

[Link] An Interview with Ben Garfinkel, Governance of AI Program Researcher 2019-06-24T11:00:40.871Z · score: 14 (3 votes)
A guide to effective altruism fellowships 2019-01-19T10:25:36.833Z · score: 34 (21 votes)

Comments

Comment by jtm on Reality is often underpowered · 2019-10-19T14:37:46.377Z · score: 2 (2 votes) · EA · GW

Thanks for a great post, Greg! Loved this quote:

But it should temper our enthusiasm about how many insights we can glean by getting some data and doing something sciency to it.

-Joshua Monrad

Comment by jtm on EA Survey 2018 Series: How welcoming is EA? · 2019-02-28T20:50:46.621Z · score: 13 (7 votes) · EA · GW

Wow, this is incredibly comprehensive - great work, thanks to the authors!

Considering how many graphs and tables there are, I am surprised there's no mention of subjective welcomeness broken down by race, ethnicity, and socioeconomic background.

Do you know if this data exists?

Also, were there any questions getting at why EA is or is not welcoming?

Thanks! :)

Comment by jtm on 2018 AI Alignment Literature Review and Charity Comparison · 2019-02-17T02:34:04.508Z · score: 2 (2 votes) · EA · GW

Hey Aaron!

So, I think we agree, and I may have been unclear in my comment. I didn't mean to imply that the problem of AI bias is necessarily large, neglected, or tractable enough that the EA community should be very preoccupied with it.

The reason I commented was that I read the OP's paragraph not just as saying 'bias isn't the kind of thing the EA community should focus on' but as making a much bolder claim, i.e. 'bias isn't a problem at all'.

And I quite confidently and strongly disagree with the latter claim.

-Joshua from YEA.

Comment by jtm on 2018 AI Alignment Literature Review and Charity Comparison · 2019-02-02T10:45:20.522Z · score: 0 (2 votes) · EA · GW

Hi! What a comprehensive review, thanks for writing it up!

One quibble is that the OP is very dismissive of the issue of bias and discrimination in AI.

While I don't necessarily think this issue should fall under the category of AI alignment that people in the EA community are normally concerned with, I also believe it is inappropriate to dismiss it completely. So I just wanted to add a comment saying that some of us in the community are concerned about bias and AI, and I hope the EA community will begin having a healthy discussion about it.

Cheers!

Comment by jtm on Vox's "Future Perfect" column frequently has flawed journalism · 2019-02-02T10:34:12.322Z · score: 5 (5 votes) · EA · GW

Hi OP! Thanks for writing this up. A few comments on the section about Booker's policy proposal.

1) I agree that journalists should focus more on poverty alleviation in the poorest parts of the world, such as sub-Saharan African countries. Fortunately, Future Perfect (FP) does cover global poverty reduction efforts much more than most mainstream media outlets. You are right that the piece on Booker's proposal is part of a tendency for FP to focus more on US politics and US poverty alleviation than most EA organisations do. However, I think this approach is justified for (at least) two reasons:

a) For the foreseeable future, the US will inevitably spend far more on domestic social programs than on foreign aid. Completely neglecting the conversation about how the US should approach social expenditure would, I believe, be a huge foregone opportunity to do a lot of good. Yes, a big part of EA is figuring out which general cause areas should receive the most attention. But I believe EA is also about figuring out the best approaches within different important cause areas, such as poverty in the US, and I think it is a very good thing that FP does this.

b) Part of FP's intended audience (rightly) cares a lot about poverty in the US. Covering this issue can be a way of widening the FP audience, thus bringing much-needed attention to other important issues FP also covers, such as AI safety.

2) I personally agree with the "basic moral imperative to get as many people as possible out of poverty," as you call it. But, without getting deep into normative ethics, I think it is fair to say that several moral theories are concerned with grave injustices such as the current state of racial inequity in the United States. Closing the race-wealth gap is only a "strange thing to focus on" if you assume, with great confidence, that utilitarianism is true.

3) Even if one assumes utilitarianism to be true, there are solid arguments for focusing on racial inequity in the US. Efforts to support people of colour in the US specifically are not a "fixation" on an arbitrarily selected race; they focus on a group of people who have been systematically downtrodden for most of US history and who until very recently (if not still) have been discriminated against by the government in ways that have kept them from prospering. (For anyone curious about this claim, I strongly encourage you to read this essay for historical context.) I completely agree with you that "unequal racial distribution can have important secondary effects", and this is why there is a solid case for paying attention to the race-wealth gap even on utilitarian grounds. You argue that this "should take a backstage" to general poverty alleviation. I actually agree, and that is also how the EA movement is already acting and prioritising. But 'taking a backstage' does not have to (and should not) mean being completely neglected, and I for one really appreciate that FP is applying the methods and concepts of effective altruism to a wider range of issues.

Cheers! :)

Joshua, former Co-President of Yale EA.

Comment by jtm on A guide to effective altruism fellowships · 2019-01-24T11:40:25.720Z · score: 3 (3 votes) · EA · GW

Thanks for the encouraging words, I really appreciate it!

Comment by jtm on A guide to effective altruism fellowships · 2019-01-19T20:57:29.262Z · score: 3 (3 votes) · EA · GW

Hey! Obviously, the list you got is a great place to start and I'm sure your project will be awesome.

One thing the list somewhat lacks is focused discussions of one cause area at a time, which we had for existential risks, animal welfare, and global health and development. If you want to make room for deeper dives into each of these topics, it might be a great idea to run a workshop at the beginning of the fellowship where you cover a bunch of the essentials (expected value theory, neglectedness, counterfactual thinking), so you don't have to spend whole sessions on them.

I would perhaps also recommend picking a different topic than the chapter on conscious consumerism. While I think MacAskill makes a really great point, I think there are more important topics to cover, and you risk turning off people who already care deeply about conscious consumerism.

Let me know if you have other questions :)

Comment by jtm on A guide to effective altruism fellowships · 2019-01-19T20:49:48.297Z · score: 3 (3 votes) · EA · GW

Thanks so much, Risto_Uuk, I really appreciate it. I agree that admissions are quite difficult, and ultimately we relied on intuition to some extent as well, but I do believe that putting the criteria in explicit terms helps structure the process a bit. Another thing that helps is having multiple people go through the list of candidates together. :)