Comments

Comment by Sophia on What gives me hope · 2021-05-19T18:28:40.626Z · EA · GW

I love this post! Thank you for sharing it :)

Comment by Sophia on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-16T00:59:36.662Z · EA · GW

Yay! I'm glad :)

Comment by Sophia on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T08:12:28.103Z · EA · GW

It is probably totally inappropriate to respond to questions on an AMA for other people, but I thought I'd mention anyway that I loved a talk (linked below) that Hayden Wilkinson gave, which was very relevant to this. 

Hayden pointed out that even if, theoretically, your only goal* was to help others as much as you can over your lifetime, you still need to take into account that you are human and that what you do now changes what your future self is likely to want to do. If you try to do an extreme amount now, with no plan to give yourself a break when you need one, then your lifetime impact will probably be less than if you set yourself much less demanding targets. If you then find that the less demanding targets are easy to maintain and you think you really could do more, at that point you can rev up. Likewise, when what you are doing feels like too much (even if, theoretically, you think you should be doing even more), giving yourself permission to properly take care of yourself in the short term might be the best way to increase your impact over your lifetime.

*For the record, I'd guess that for almost everyone within the EA community, doing as much as they can to help others isn't even their only goal in life, even if it is still a very high priority for them (and for almost all goals a person might have, self-care for your long-term wellbeing seems really important). I have other goals (like having an enjoyable life) because I am not perfectly selfless, but I think it is plausible that letting myself have other goals increases the chances that this goal (helping others as much as I can with a significant proportion of my time and money) will remain a pretty high priority for me for the rest of my life.

Comment by Sophia on HIPR: A new EA-aligned policy newsletter · 2021-05-13T06:07:20.259Z · EA · GW

That makes sense! My mistake. 

Comment by Sophia on HIPR: A new EA-aligned policy newsletter · 2021-05-13T04:24:54.493Z · EA · GW

I downvoted your comment despite agreeing with a lot of your critiques because I very, very strongly disagree that posts like this aren't a good fit for the forum (and my best guess is that discouraging this sort of post does significantly more harm than good). If someone who has a good understanding of what effective altruism is has an idea they think is plausibly a high-impact use of time (or other resources), the forum is exactly where that sort of idea belongs! This post clearly meets that standard. Once the idea is on the forum, open discussion can happen about whether it is a high-impact idea, or even net positive.

If people only ever post ideas to the forum that they are already quite sure the effective altruism community will agree are high impact, it will be much harder for the community to avoid becoming an echo chamber of only the "approved" ideas. I think the author has improved the forum by making this post, for two reasons. The first is that the post created an interesting discussion on whether this idea is a good one and how it could be improved (the critiques in your comment were an important contribution to this!). The second, and more important, is that their post nudged the culture of the forum in a direction I liked: making it more normal to post ideas for plausibly* high-impact projects that aren't as obviously connected to one of the standard EA ideas that come up in every EA intro talk. Despite my not being sure that this idea is even net positive, it still seems almost absurd to me that this post isn't a good fit for the EA forum (especially if people like you make compelling critiques and suggestions in the comments, ensuring the discussion isn't too one-sided and maybe also allowing plausibly good ideas to iterate into better ones)!

*To me, as I said above, "sufficiently plausible" to be a good fit for a forum post means an author who understands what EA is and thinks the idea might be high impact. I actually think this author went well above and beyond what I consider a good minimum bar for such ideas; it sounds like they put a great deal of thought into this project, have already put quite a bit of work into getting it off the ground, and also got feedback from multiple people in the EA community!

Comment by Sophia on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-26T00:32:52.029Z · EA · GW

I am enjoying all this recent discussion on what we should be calling "effective altruism". 

As EA ideas become more common and get applied in a larger variety of contexts, it might be good to have different words that are context- and audience-specific. For example, "global priorities" seems like a great name for the academic field, with the acknowledgement that it is related to "effective altruism" the social movement, which is itself clearly distinct from but still related to the LessWrong/Rationality community. Maybe policy-oriented effective altruism needs its own name (clearly related to the academic field and social movement but distinct from both?). Similarly, maybe it is also okay for a broader-appeal version of effective altruism to have a different name (this is maybe what the GWWC brand is moving towards?).

The effective altruism project is pretty broad, and even if a great deal more thought had been put into the name, it still seems unlikely to me that one name could appeal to policy-makers, academics, the broader population, and the students and people on the internet who like to philosophise deeply about morality and base their lives on the conclusions of that philosophising.