Should I be vegan? 2015-05-17T12:06:18.096Z
Hope: How Far Humanity Has Come 2015-02-25T15:06:46.013Z
The perspectives on effective altruism we don't hear 2014-12-30T12:04:55.169Z
Supportive Scepticism 2014-09-18T14:02:36.734Z


Comment by Jess_Whittlestone on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-04T16:38:19.235Z · EA · GW

Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could probably do more to explain their goals and priorities to the EA community, and I think several others beyond myself would also be happy to engage in discussion.

I don’t think this is the right place to get into that discussion (since this is a writeup of many grants beyond my own), but I do think it could be productive to discuss elsewhere. I may well end up posting something separate on the question of how useful it is to try and “bridge” near-term and long-term AI policy issues, responding to some of Oli’s critique - I think engaging with more sceptical perspectives on this could help clarify my thinking. Anyone who would like to talk/ask questions about the goals and priorities of CFI/CSER more broadly is welcome to reach out to me about that. I think those conversations may be better had offline, but if there's enough interest maybe we could do an AMA or something.

Comment by Jess_Whittlestone on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:20:56.698Z · EA · GW

I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were applicants interviewed? Were references collected? Were there general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.

Comment by Jess_Whittlestone on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:14:22.357Z · EA · GW

Thanks for your detailed response Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to be going into these grant reviews.

I don't think it's unreasonable to suggest that it should require 2 people full-time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future. (Though I recognise that this decision isn't your responsibility personally!) Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that's the case, then I think there's a bigger problem (the job isn't being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA funds distributing so much money.

Comment by Jess_Whittlestone on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T14:00:36.555Z · EA · GW
> The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I had not available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know - being able to do this should be a requirement of the job, IMO. Seconding Peter's question below, I'd be keen to hear if there are any plans to make progress on this.

If you really don't have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.

Comment by Jess_Whittlestone on Effective Altruism Grants project update · 2017-11-25T11:02:23.938Z · EA · GW

This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.

Comment by Jess_Whittlestone on EA Survey 2017 Series: How do People Get Into EA? · 2017-11-25T10:52:54.248Z · EA · GW

Did SlateStarCodex even exist before 2009? I'm sceptical - the post archives only go back to 2013. Maybe not a big deal, but it does suggest at least some of your sample were just choosing options randomly/dishonestly.

Comment by Jess_Whittlestone on Anonymous EA comments · 2017-02-10T10:11:01.414Z · EA · GW

> If I could wave a magic wand it would be for everyone to gain the knowledge that learning and implementing new analytical techniques cost spoons, and when a person is bleeding spoons in front of you you need a different strategy.

I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique."

(I think CFAR are great and a lot of their techniques are really useful. But I've also spent a bunch of time feeling bad about the fact that I don't seem able to learn and implement these techniques in the way many other people seem to, and it's taken me a long time to realise that trying to 'figure out' how to fix my problems in a very analytical way is very often not what I need.)

Comment by Jess_Whittlestone on Use "care" with care. · 2017-02-09T09:31:42.691Z · EA · GW

Thanks for writing this Roxanne, I agree that this is a risk - and I've also cringed sometimes when I've heard EAs say they "don't care" about certain things. I think it's good to highlight this as a thing we should be wary of.

It reminds me a bit of how in academia people often say, "I'm interested in x", where x is some very specific, niche subfield, implying that they're not interested in anything else - whereas what they really mean is, "x is the focus of my research." I've found myself saying this wrt my own research, and then often caveating, "actually, I'm interested in a tonne of wider stuff, this is just what I'm thinking about at the moment!" So I'd like it if the norm in EA were more towards saying things like, "I'm currently focusing on/working on/thinking about x" rather than, "I care about x"

Comment by Jess_Whittlestone on Should I be vegan? · 2015-05-18T19:42:08.579Z · EA · GW

> If you haven't tried just avoiding eggs, it seems worth at least trying.

Yeah, that seems right!

> I don't understand the "completely trivial difference" line. How do you think it compares to the quality of life lost by eating somewhat cheaper food? For me, the cheaper food is much more cost-effective, in terms of world-bettering per unit of foregone joy.

I think this is probably just a personal thing - for me I think eating somewhat cheaper food would be worse in terms of enjoyment than cutting out dairy. The reason I say it's a basically trivial difference is that, while I enjoy dairy products, I don't think I enjoy them more than I enjoy various other foods - they're just another thing that I enjoy. So given that I can basically replace all the non-vegan meals I would normally have with vegan meals that I like as much (which requires some planning, of course), then I don't think there will be much, if any, difference in my enjoyment of food over time. I also think that even a very small difference in the pleasure I get from eating dairy vs vegan food would be trivial in terms of my happiness/enjoyment over my life as a whole, or even any day as a whole - I don't think I'd ever look back on a day and think "Oh, my enjoyment of that day would have been so much greater if I'd eaten cheese." I enjoy food, but it's not that big a deal relative to a lot of other more important things.

Comment by Jess_Whittlestone on Should I be vegan? · 2015-05-17T18:18:00.870Z · EA · GW

> Regarding willpower: If you maintain a vegan diet for a few months, it will probably stop requiring willpower since you will stop thinking of animal products as an option that you have available. This has been my experience and the experience of lots of other vegans, although it's probably not universal.

Yeah, my experience previously has been that the willpower required mostly decreases over time - there was definitely a time a while ago when the thought of buying and eating eggs was kind of absurd to me. This was slightly counterbalanced by sometimes getting odd cravings for animal products, though. I think that if I put conscious effort into developing negative associations around animal products, though, I could probably end up in a situation where it took zero willpower. That would obviously take effort though.

> Does it take willpower for you to be vegetarian? If not, then it probably won't take willpower for you to be vegan either once you get used to it.

No, being vegetarian takes zero willpower for me, but I was raised vegetarian, so I have hardly eaten any meat in my entire life, so I have very little desire to eat it - and even an aversive reaction to a lot of meat. (Which I'm very grateful to my parents for!)

Comment by Jess_Whittlestone on Should I be vegan? · 2015-05-17T17:03:40.004Z · EA · GW

I like the idea of counting non-vegan meals, that sounds great. Maybe I'll beemind it... then I'd have an incentive to keep it low, but I don't have to be absolute about it. Diana told me that whenever she eats something non-vegan she makes a donation to an animal welfare charity - I like that idea too.

> The way I see this is getting from 85% to 100% is probably the most costly part for me (most inconvenience, most social cost) and I am getting the vast majority of the benefit with very little of the cost. I do feel uncomfortable with that 15% though. I think I will continue until September, and then reassess after a year, maybe getting closer to 100% with new rules.

Yeah, I think that's right. It's quite possible that the main downside of not going 100% vegan is just the discomfort that you end up feeling about it! (And that in particular this is larger than any actual consequences, especially if you're mostly eating dairy.)

Comment by Jess_Whittlestone on Should I be vegan? · 2015-05-17T16:52:53.659Z · EA · GW

Yeah, I think lacto-vegetarianism is probably 95% of the way in terms of impact on animal suffering anyway (or even more.) As I said above, for me the main reason for cutting out dairy too is that I think if I eat dairy I might be more likely to slip into eating eggs too down the line. But it's possible I could just protect against that by setting more solid rules in place etc.

Comment by Jess_Whittlestone on Should I be vegan? · 2015-05-17T16:51:11.369Z · EA · GW

Yeah, good point. I'm definitely a lot less concerned about eating dairy than I am eggs. The main reason for lumping them together is that I think I'd find it quite a bit easier psychologically to be "vegan" than to be someone who "doesn't eat eggs", and I think I'd be more likely to keep it up, but it's possible that's more malleable than I think.

I'm not totally convinced that not eating dairy will make my life worse in any nontrivial way, though. I enjoy eating cheese, sure, but it's not an experience that's unlike any other. I'm pretty sure that the difference in enjoyment in a life in which I eat dairy products and one in which I don't will basically be completely trivial.

Comment by Jess_Whittlestone on Hope: How Far Humanity Has Come · 2015-02-25T18:09:44.089Z · EA · GW

Ah, thanks for pointing these things out! I didn't realise either of these things - admittedly, I didn't have as much time as I would have liked to research the historical facts for this. A lot of these points were taken from some top posts on Quora on a thread about progress over the past few centuries, and I was (perhaps naively) hoping that crowdsourced info would give me fairly accurate info. Anyway, I was thinking of writing a more detailed article about human progress at some point, so I'll definitely try to do a bit more research and take these points into account - thanks for flagging my errors/sloppiness!

Comment by Jess_Whittlestone on Why I Don't Account for Moral Uncertainty · 2015-01-12T08:28:57.621Z · EA · GW

Yeah, I think it was a really good thing to prompt discussion of, the post just could have been framed a little better to make it clear you just wanted to prompt discussion. Please don't take this as a reason to stop posting though! I'd just take it as a reason to think a little more about your tone and whether it might appear overconfident, and try and hedge or explain your claims a bit more. It's a difficult thing to get exactly right though and I think something all of us can work on.

Comment by Jess_Whittlestone on Why I Don't Account for Moral Uncertainty · 2015-01-11T16:09:38.842Z · EA · GW

Good point to raise Owen! I strongly agree that we don't want to put people off contributing ideas that might run against default opinion or have flaws - these kinds of ideas are definitely really useful. And I think there were points in this post that did contribute something useful - I hadn't thought before about whether a subjectivist should take into account moral uncertainty, and that strikes me as an interesting question. I didn't downvote the post for this reason - it's certainly relevant and it prompted me to think about some useful things - although I was initially very tempted to, because it did strike me as unreasonably overconfident.

Comment by Jess_Whittlestone on Why I Don't Account for Moral Uncertainty · 2015-01-10T13:44:37.268Z · EA · GW

Even if being a subjectivist means you don't need to account for uncertainty as to which normative view is correct, shouldn't you still account for meta-ethical uncertainty i.e. that you could be wrong about subjectivism? Which would then suggest you should in turn account for moral uncertainty over normative views.

I think you're kind of trying to address this in what you wrote about moral realism, but it doesn't seem clear or convincing to me. There are a lot of premises here (there's no reason to prefer one moral realism over another; we can just cancel each possible moral realism out by an equally likely opposite realism) that seem far from obvious to me, and that you don't give any justification for.

In general, it seems overconfident to me to write off moral uncertainty in a few relatively short paragraphs, given how much time others have spent thinking about this in a lot of depth. Will wrote his entire thesis on this, and there are also whole books in moral philosophy on the topic. Maybe you're just trying to give a brief explanation of your view and not go into a tonne of depth here, though, which is obviously reasonable. But I think it's worth you saying more about how your view fits with and responds to these conflicting views, because otherwise it sounds a bit like you are dismissing them quite offhand.

Comment by Jess_Whittlestone on The perspectives on effective altruism we don't hear · 2015-01-05T13:43:23.088Z · EA · GW

Thanks, Ben! This is a great idea, especially for student groups.

Comment by Jess_Whittlestone on The perspectives on effective altruism we don't hear · 2015-01-05T13:05:43.846Z · EA · GW

Thanks for being so honest, Nicholas - really useful to hear your perspective, especially as it sounds like you've been paying a fair amount of attention to what's going on in the EA movement. I can empathise with your point 4 quite a bit and I think a fair number of others do too - it's so hard to be motivated if you're unsure about whether you're actually making a difference, and it's so hard to be sure you're making a difference, especially when we start questioning everything. For me it doesn't stop me wanting to try, but it does affect my motivation sometimes, and I'd love to know of better ways to deal with uncertainty.

Comment by Jess_Whittlestone on Economic altruism · 2014-12-05T08:32:07.543Z · EA · GW

Have you heard about how Beeminder cofounders Danny and Bethany use exactly this to split up chores between them?

Comment by Jess_Whittlestone on Spitballing EA career ideas · 2014-12-01T17:55:20.651Z · EA · GW

Yeah, fair - the way I read it at the beginning sounded more like the whole thing was talking about 80k than perhaps it was. Anyway, just wanted to make clear that I think 80k very much agrees with most (if not all) of what you're saying here :)

Comment by Jess_Whittlestone on Spitballing EA career ideas · 2014-12-01T16:31:16.975Z · EA · GW

This is cool Ben, thanks for doing this! I agree with the general idea that personal fit is very important and we should be open to considering a wide range of careers.

I do think this slightly misrepresents 80k though. You say they only have a list of four top careers, but in fact what they have is four careers they consider "very promising, but highly competitive and with low chance of success", and ten more careers they consider promising. I think 80k also talk about most of the careers you list - if not all the specific sub-categories. And the 80k website actually actively advises against narrowing down based on what you're interested in, and emphasises the importance of personal fit.

Comment by Jess_Whittlestone on Effective Altruism Blogs · 2014-12-01T16:21:30.424Z · EA · GW

I'd also be interested.

Comment by Jess_Whittlestone on Should Giving What We Can change its Pledge? · 2014-10-23T10:18:23.854Z · EA · GW

> It's worth noting that many people do, and that this isn't obviously indefensible. So people can genuinely care more about existing people or existing creatures :-)

Yeah, I don't mean that it's unheard of - but I do think this is a pretty rare view within the EA community.

Comment by Jess_Whittlestone on Should Giving What We Can change its Pledge? · 2014-10-22T20:40:21.516Z · EA · GW

> For example, there are many people in the world today who believe that the best cause to help other people is to donate a significant part (10% infact) of their income towards god's plan by funding the expansion of evangelical churches across the world. Would you be comfortable with them signing the GWWC pledge and associating themselves with the organisation? What about those who feel that legalising drugs is the most important cause because they like to get high? or Hindu charities who fund sanctuaries for cows because they believe cows are sacred animals with incommensurable value above mere people?

I'm not sure these people are much more easily excluded by the current pledge. You could still get people who have very bizarre beliefs about the best way to help people in poverty. This is always going to be a risk - but it seems unlikely people who are overly attached to specific causes are going to find the GWWC community that appealing.

> I joined because I am concerned about causes that demonstrably and effectively help human people today - not causes that may conceivably if we accept unfalsifiable/provable premises help people in the future or causes that provably help animals (because I reject the philosophical premises of that cause).

Are you saying that you genuinely care more about people alive today than people who will live in the future? Or that you care about them equally but think we have much more evidence for helping the former category and so should focus our efforts there? If the former, then I think you'll find a lot of the existing GWWC community disagree with you. If the latter, then it seems that you should at least be open to considering and investigating causes that help people in the future, even if you don't currently think that the standards of evidence are anywhere near high enough, which I agree is reasonable.

Comment by Jess_Whittlestone on Should Giving What We Can change its Pledge? · 2014-10-22T20:03:55.711Z · EA · GW

Thanks for writing this up and seeking feedback, Michelle!

I'm in favour of the change - you know this, but I'm saying it here because I'm concerned that only people with strong disagreements will respond to this post, and so it will end up looking like the community is more against the change than it in fact is.

I think ultimately having a broader pledge will better represent the views of those who take it and the community, and agree that having a clear action which becomes standard for all EAs could be very beneficial.

Comment by Jess_Whittlestone on Should Giving What We Can change its Pledge? · 2014-10-22T19:56:04.671Z · EA · GW

> Making this change would basically allow other causes that may have significant philosophical and/or practical baggage to trade on that reputation while undermining the focus and work on extreme poverty. It does nothing to help the fight against extreme poverty and may harm it, while boosting those who are seeking to advance other causes.

This makes it sound like the causes are competing with each other, which I don't think is true. Changing the pledge isn't about undermining the focus on extreme poverty, it's about recognising that what we ultimately care about is saving lives, no matter where or when they are, whether they are in the developing world, the developed world, or in the future. Some people think the best way to save lives is to donate to far-future oriented causes, others think the best way is to donate to poverty causes - these people disagree, but mostly they agree that what they ultimately care about is the same and this is just a really tough question. Given that none of us can be certain that poverty is the best cause to focus on, it seems beneficial to be more inclusive of other potentially effective cause areas, so we can encourage more discussion and debate amongst the people who disagree. It is not a competition or a matter of one cause trying to crowd out the other.

> To be a little rude, we don't need more ways for people looking to blur the lines between their "Institute for Rich White Guys to write Harry Potter Fan Fiction" (as it was described in one recent debate elsewhere) and the reputations of charities fighting malaria in the developing world - there are more than enough other avenues within effective altruism where this happens already as a historical accident of where it first found purchase.

Again, you're assuming that there being a link between people focused on fighting malaria and those concerned about existential risk is a bad thing. I acknowledge that there are PR issues with xrisk, and there are concerns there - but ultimately, it seems a good thing to me to have a community where people from both these groups can acknowledge their shared values and have productive debates with one another.

Comment by Jess_Whittlestone on Should Giving What We Can change its Pledge? · 2014-10-22T19:49:48.563Z · EA · GW

I don't think it's accurate to say that if the pledge were changed, GWWC would become a community of "singularitarians, rationalists and the like." It would be a community of people who want to donate 10% of their income to most effectively improve the lives of others, which could include singularitarians and rationalists, but certainly wouldn't be defined by it. Saying you wouldn't want to take the pledge for this reason seems a bit like saying you don't want to be part of the EA community because it contains those people.

Also, note that the current pledge doesn't actually exclude singularitarians, rationalists etc.: "The change is not likely to make a difference to people who think that the best way to help others is to ensure that the future will go well, since the pledge already explicitly includes people who will live in the future, as well as those alive now." So it's unlikely that changing the pledge would result in the community changing in the way you're concerned about.

Comment by Jess_Whittlestone on How a lazy eater went vegan · 2014-10-07T07:38:50.731Z · EA · GW

Thanks for posting this Topher. When I was vegan, my diet was very similar to the one you described, and all in all I didn't find it that difficult. You'll notice the "was" in that sentence though - the thing that got me was eating out or eating socially with friends - I found it very difficult to maintain a vegan diet then, and so I found myself slipping. I'd be interested in how you deal with this - do you stick to a vegan diet even when eating out or going to friends' houses, and if so, how difficult do you find it?

My solution for a while was to have a strict rule that I am entirely vegan in what I cook and buy for myself, and vegetarian in other situations - like eating out - where being vegan is very difficult or inconvenient. This worked pretty well for a while. It's harder now because I'm living in a house with people who frequently cook together - which has a lot of benefits of saved time, money, and being more enjoyable and sociable - but aren't vegan. So I've slipped back to just being vegetarian across the board, but I feel somewhat uncomfortable about it.

Comment by Jess_Whittlestone on Career choice: Evaluate opportunities, not just fields · 2014-09-30T11:52:20.025Z · EA · GW

Great post Ben, this seems like a really good point to make clear. I think there's a general point here that it's much easier, and often better, to choose between specific options than general categories of options.

Generally when I think about career choice I think it's useful to begin by narrowing down to a few different fields that seem best for impact and fit, and then within those fields seek out concrete opportunities - and ultimately the decision will come down to how good the opportunities are, not a comparison between the fields themselves. But you've still narrowed by field initially. This seems to be the case especially when the fields you're comparing seem roughly as good as each other or each have different advantages.

I like the suggestion of putting a lot of effort into looking for really good opportunities, too - I imagine this is often neglected. A side point there is that obviously in some fields this is more worth doing than others, because some fields are going to be higher variance than others in terms of how good the opportunities are. e.g. I'd imagine there's higher variance in software jobs than in certain academic ones.

Comment by Jess_Whittlestone on Effective altruism as the most exciting cause in the world · 2014-09-28T17:17:11.433Z · EA · GW

Great post, and definitely agree we should focus on this more.

Another thing I personally find exciting about effective altruism is that the question "How do I do the most good?" (with my career, money etc.) is a really motivating, intellectually challenging question to spend my time thinking about. So for those who enjoy spending their time thinking about interesting questions, effective altruism offers an environment in which to discuss one of the most important and stimulating questions out there - that's pretty exciting to me. I would imagine at least some others feel similarly.

Comment by Jess_Whittlestone on An epistemology for effective altruism? · 2014-09-27T09:02:58.515Z · EA · GW

When I hover over the 3 upvotes in the corner by the title, it says "100% positive" - which suggests people haven't downvoted it, it's just that not many people have upvoted it? But maybe I'm reading that wrong.

I thought it was a good and useful post, I don't see any reason why people would downvote it - but would also be interested to hear why if there were people who did.

Comment by Jess_Whittlestone on Learning From Less Wrong: Special Threads, and Making This Forum More Useful · 2014-09-24T11:21:49.731Z · EA · GW

> There are already has open threads

Think you've got an extra word in here :)

Comment by Jess_Whittlestone on Cooperation in a movement supporting diverse causes · 2014-09-24T09:13:02.961Z · EA · GW

A nice middle ground between "not talking about our reasons for supporting different causes at all" and "having people try to persuade others that their cause is the most important one" could be to simply encourage more truth-seeking, collaborative discussion about causes.

So rather than having people lay out their case for different causes (which risks causing people to get defensive and further entrenching peoples' sense of affiliation to a certain cause, and a divide between different "groups" in the movement) it would be nice to see more discussion where people who support different causes explicitly try and find out where their disagreements lie, and learn from each other. I'm thinking of the kind of discussion that was had between Eliezer, Luke and Holden, for example, where they discussed their views on the far future and eventually found they didn't disagree as much as they thought they did. This kind of thing seems really valuable, both in terms of learning and bringing people closer together.

Comment by Jess_Whittlestone on Cooperation in a movement supporting diverse causes · 2014-09-24T08:55:32.969Z · EA · GW

My reading of Michelle's point was not that we should be writing about and defending causes that we wouldn't normally think of as EA (although this could also be beneficial!) - I think she meant, within the space of the causes EAs generally talk about, it would be good if people wrote about and defended causes different to the ones they normally do. So, for example, if a person is known for talking about and defending animal causes, they could spend some time also writing about and defending xrisk or poverty. This would then lessen the impression that many people are "fixed" to one cause, but wouldn't have the problem you mention. I might be reading this wrong though.

Comment by Jess_Whittlestone on Supportive Scepticism · 2014-09-20T15:07:51.722Z · EA · GW

Yeah, agree that this is a simple but useful idea!

One concern I would have with this in some situations is that it might cause you to anchor on your initial option too much - you might miss some good alternatives because you're just looking for things that most easily come to mind as comparable to your first option. But I don't know how often this would actually be a problem.

Comment by Jess_Whittlestone on Help spread the movement! · 2014-09-20T09:10:08.941Z · EA · GW

On the first page of the link, there's a box at the bottom - it's not clear what, if anything, should go in there?

Comment by Jess_Whittlestone on Supportive Scepticism · 2014-09-18T16:15:22.323Z · EA · GW

Nice quote, and very relevant - thanks for sharing! A general worry is that EA is often framed as inherently critical - as being sceptical of typical ways of doing good, as debunking and criticising ineffective attempts at altruism etc. - and this will mean we naturally end up using a lot of negative words.

I think there's some evidence that being critical outside of a group can make people within the group feel closer to each other - which makes sense, because it strengthens the feeling of "us" versus "them." But doing this with EA seems actively harmful, both because we want to attract as many people to be part of the "group" as possible, and because it's unclear exactly where the line of the "group" lies, so we inevitably end up being critical of each other too.

Comment by Jess_Whittlestone on Supportive Scepticism · 2014-09-18T16:06:56.954Z · EA · GW

Great points Erica, thanks! I've been using very similar ways of thinking recently, actually, and it's helped a lot.

One thing I've found, though, is that it's easy to reflectively know that all of these points are true, but still not believe them on an emotional level, and so still find it difficult to make decisions. I think the main thing that's helped me here is just time and persistence - I'm gradually coming to believe these things on a more gut level the more times I do just make a decision, even though I'm not certain, and it turns out ok. I think this is a classic situation where your system 2 can believe something, but your system 1 needs repeated experience of the thing actually happening - decisions you're not certain of turning out ok, in this case - to really internalise it.

Comment by Jess_Whittlestone on Supportive Scepticism · 2014-09-18T16:00:34.185Z · EA · GW

Thanks Dette :)

I suspect sometimes we feel it's tougher, stronger or somehow virtuous not to need support

Yeah, agree. I think the solution to this is just for more people to stand up and admit they need support, and for us to reward those people for doing so, so that it becomes more socially acceptable. This can be hard to do though, of course. But it's easy to forget that everyone is trying to project their most confident image, and that we may not always be as confident as we try to project!

Comment by Jess_Whittlestone on Open Thread · 2014-09-18T13:30:48.584Z · EA · GW

There seem to be two questions here:

(1) Does believing in or identifying as EA require having a certain amount of hubris and arrogance?

(2) Is EA more likely to attract arrogant people than more modest people?

I think the answer to (1) is clearly no - you can believe that you should try to work out what the best way to use resources is, without thinking you are necessarily better than other people at doing it - it's just that other people aren't thinking about it. My impression is a lot of EAs are like this - they don't think they're in a better position to figure out the most effective ways of doing good than others, but given that most other people aren't thinking about this, they may as well try.

I'm less sure about (2), and it depends what the comparison is - are we asking, "Is the average person who is attracted to EA more likely to be arrogant than the average person who is interested in altruism in a broader sense?". It seems plausible that of all the people who are interested in altruism, those who are more arrogant are more likely to be drawn to effective altruism than other forms of altruism. But I'm not sure that EAs are on the whole more arrogant than people who promote other altruistic cause areas - in a way, EAs seem less arrogant to me because they are more willing to accept that they might be wrong, and less dogmatic in asserting that their specific cause is the most important one.

There's a third question which I think is also important: is EA more likely to be perceived as arrogant from the outside than other similar social movements or specific causes? I think here there is a risk - stating that you are trying to figure out the best thing can certainly sound arrogant to someone else (even though, as I said above, it actually seems less arrogant to me than being dogmatic about a specific cause!) So maybe it's important for us to think about how to present EA in ways that don't come across as arrogant. One idea would be to talk more about ourselves as "aspiring" effective altruists rather than simply effective altruists - we're not really claiming that we're better at altruism than everyone else, but rather that we are trying to figure out what the best way is.

Comment by Jess_Whittlestone on Open Thread · 2014-09-18T11:59:43.409Z · EA · GW

I think it originated with GiveWell - they used something like this framework for assessing cause areas, which 80k then based their framework on. It's possible I'm misremembering this though.

Comment by Jess_Whittlestone on Introduce Yourself · 2014-09-18T11:55:32.844Z · EA · GW

Hi, I'm Jess. I'm currently living in Oxford and doing a PhD in Behavioural Science - I'm looking at ways of making people more willing to consider evidence that conflicts with their existing views, and more likely to change their minds about important topics. I want to figure out how people can be more open-minded and truth-seeking, basically :)

I got involved in effective altruism after I finished my degree at Oxford, and came across 80,000 Hours. I'd always wanted to make a difference in the world, but was feeling a bit disillusioned about how to do it and unsure how this really fitted with my skills and interests. I ended up spending a year working for 80k - mostly designing and running the careers advising process - whilst learning more about effective altruism and thinking about what to do with my life. I'm still figuring a lot of this out!

I also blog, and have written a number of EA-related posts for the 80,000 Hours blog.

Comment by Jess_Whittlestone on To Inspire People to Give, Be Public About Your Giving · 2014-09-18T11:44:15.062Z · EA · GW

Nice post, Peter!

Aside from seeming boastful, I think the other risk of talking publicly about giving is that it can seem critical, or alienate people. I've definitely found some people respond defensively to me talking about giving - if I say I donate x%, they might look for reasons why I'm being unreasonable, or why my situation is very different from theirs. I think this is because they feel threatened - talking about giving can make some people feel like you are judging them for not giving, which provokes a defensive reaction.

Of course, in a lot of cases it may be that this risk is outweighed by the benefit of making giving seem more commonplace. And the more people talk about giving, the more of a "social proof" effect you get, and so the less likely this is to be an issue. But I think it's something worth bearing in mind, especially in one-on-one interactions.