I suggest publishing this as a post, rather than in the shortform. Is there any particular reason you chose not to?
I've moved this post to "Personal Blog" as I'm not sure it's strictly relevant to Doing The Most Good or to the EA community (and to be clear, this says nothing about the quality of this post or how likely it is to interest people in the community - this post, like many of your others, looks high quality and likely to be of interest). Please let me know if you think otherwise!
😉
Love it! I'm in 💃🕺💃🕺
You can read some of my thoughts on relevant issues here, an analysis of the value of medical research here, and very relevant discussions under the Differential Progress tag.
Sorry, but I've downvoted this post. Generally speaking, I think it is very possible that some fundamental research is extremely important, but I don't think that this post adds value to this discussion.
The two major problems that I see in this post, besides its brevity (which might actually be good in some cases!), are
- One-sidedness. This post seems to try to persuade readers that fundamental research is important, rather than to assess it truthfully. I find that such posts usually don't help me, because I expect there to be counterarguments that are omitted.
- Lack of engagement with relevant arguments. This topic has been addressed before, and I heavily encourage you to search more on the forum (you can start with this tag).
I would really like to see more discussion on this topic and I definitely encourage you to read and write more about it! (A potentially fun experiment I'd like someone to do is to have a Change My View thread about such a topic; perhaps you can have one on the importance of fundamental research. I'd also naturally be very interested in any independent research or a synthesis of information on this topic if you are up for doing more work.)
catchy!
I can't wait for a new Bennian paradigm shift
Sorry, I've tried very hard but Guesstimate is near perfection
Thanks for your valuable critique! I've updated our model accordingly.
Must say that I should have been more skeptical when my calculation resulted in a post that's worth 0.4 QALYs. Now, after also raising our estimates for total Karma (wow!), we estimate our impact as 0.018 QALYs, which makes more sense.
this is so silly, I love it!
How about Smiles Without Borders (SWB)? Potentially include a "Research Institute"
How about Caring Tuna? This would surely get support from Open Phil
I think that the forum itself is nothing without the people and the community within it. We, the users, are the ones who upvote or downvote posts. From this emerges a collective intelligence that deems what is worthy for the EA community and what should be strongly downvoted to oblivion, which in turn explains what content gets written.
I propose to call this collective intelligence The Karma Police.
With so many research institutions, we should really have an organization to support this ecosystem. I propose RIRI - Research Institutes Research Institute.
How about: Probability? Good!
OvercomingScriptophobia
I generally believe that EAs should keep their identities small. Small enough that it wouldn't really matter which Julia you are
I think it's interesting to note that it will always be 137 years ahead, regardless of the current year. That is, unless we learn to make better predictions. But it doesn't matter, as currently we should only care about the next 137 years!
I think that QURI should be called Probably Good
ConsEAder applyEAng to NWWC
Love it! I also thought that your corporate campaigning org idea is fantastic 😁
Jinx!
"Yes We Can"
😢
I would prefer Blockchain, as it is more general than cryptocurrency and doesn't lead people to confuse it with the field of cryptology
Thanks for writing this post and crossposting it here on the EA forum! :)
This post is also crossposted to LessWrong and discussed there.
Thanks for writing this post and for the great graphs!
One relevant thought regarding Open Phil: my understanding is that they could have expanded their work within each cause area, but they are not giving more either because they don't have opportunities that seem better than saving the money for future years (even though they want to give away the money early), or because they have self-imposed upper bounds to leave opportunities for other philanthropists (so they don't donate to GiveWell more than half of what it receives in donations).
[I'm sure that the previous paragraph is wrong in some details, but overall I think it paints the right picture. I'd love to be corrected, and sorry for not taking the time to verify and find supporting links]
Sure. So, consider x-risk as an example cause area. It is a pretty broad cause area and contains secondary causes like mitigating AI risk or biorisk. Developing this as a common cause area involves advances like understanding what the different risks are, identifying relevant political and legal actions, making a strong ethical case, and gathering broad support.
So even if we think that the best interventions are likely in, say, AI safety, it might be better to develop a community around a broader cause area. (So, here I'm thinking of a cause area more like that in GiveWell's 2013 definition).
This matches at least my take on this.
Prescriptively, I would add that this contributes to the importance of being open to other people's ideas about how to do good (even if they are not familiar with EA).
I agree with this. I think one important consideration here is who are the agents for which we are doing the prioritization.
If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention (we can find) - the one we will end up implementing. If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions, as well as potentially some qualities of the problem which help increase overall cohesiveness between the different actors.
The website looks amazing! I love the clear and concise writing, with a ton of leads to further material (and I especially think it's awesome that you point to career profiles from AAC and 80k in almost the same breath as PG-original content). It's also great that it's very clear what you are offering, both as an organization and on the website. Well done, and looking forward to more!
I've thought of this as an alternative in cases where the person thinks that there are likely many people with more experience than themselves, but where they can still generate useful answers and insights.
An alternative to an AMA might be an open discussion thread on a given topic, perhaps with specific people committing to be active in the discussions (could be experts, but not necessarily)
Thanks for writing this post!
You might be interested in the work of Charity Entrepreneurship on Family Planning (link to a blogpost about why that matters, where they also list a potential positive impact from reducing population growth). Here are some more explicit models of the effect of reducing population growth (via family planning) on animal welfare and CO2 emissions.
Also, see their list of top charity ideas and an in-depth report on their top charity idea, which they will hopefully incubate in the upcoming program.
Thanks for the response and for taking the time to add references! I'm glad to see two EA orgs have put substantial effort into this, and it's terrific that it had such a direct and potentially significant impact on someone's career (and I'd bet that there are many other undocumented cases).
Thanks for compiling the review, it's exciting to see that your work seems to be highly cost-effective! I have a couple of questions I'm curious about.
- The Case Against Randomista Development was exceptionally well received. Do you know of any direct impact it had? (say, in terms of money moved or follow-up research done). Generally, how do you think about the impact it has had?
- How much do you think crowdfunding can grow? Do you think that "donations available through crowdfunding" is a limiting factor for expanding your work?
I've skimmed the book and it looks very interesting and relevant. It surprises me that people have downvoted this post - could someone who did so explain their reasoning?
Thanks for the answer! I want to make sure that I get this clearly, if you are still taking questions :)
Are you making attempts to diversify grants along these kinds of axes, in cases where there is no clear-cut position? My current understanding is that you do, but mostly implicitly.
But seriously, I'd really love a deeper dive on many of these topics and other suggestions for academic disciplines
I think that
Is really really REALLY important, but not everyone agrees. You can find more information in this critical review. 😘
👍
Nice find!
What cause-prioritization efforts would you most like to see from within the EA community?
How would you define a "cause area" and "cause prioritization", in a way which extends beyond Open Phil?
How much worldview diversification and dividing of capital into buckets do you do within each of the three main cause areas, if at all? For example, I could imagine a divide between short and long AI timelines, or a divide between policy-oriented and research-oriented grants.
I'm curious about your take on prioritizing between science funding and other causes. In the 80k interview you said:
When we were starting out, it was important to us that we put some money in science funding and some money in policy funding. Most of that is coming through our other causes that we already identified, but we also want to get experience with those things.
We also want to gain experience in just funding basic science, and doing that well and having a world-class team at that. So, some of our money in science goes there as well.
That’s coming much less from a philosophy point of view and much more from a track record… Philanthropy has done great things in the area of science and in the area of policy. We want to have an apparatus and an infrastructure that lets us capitalise on that kind of opportunity to do good as philanthropists.
[...]
So, I feel like this isn’t Open Phil’s primary bet, but I could imagine in a world where there was a lot less funding going to basic science — like Howard Hughes Medical Institute didn’t exist — then we would be bigger on it.
My question: Is funding basic science less of a priority because there are compelling reasons to deprioritize funding more projects there generally, because there is less organizational comparative advantage (or not enough expertise yet), or because of something else?
This project sounds great, I love how you flesh out the plan and pre-commit to it.
I have a minor concern, which might be mistaken as I don't have any relevant experience. In the "what we can learn from religious texts" section you mentioned potential applications to community building and spreading ideas. However, the process involves a synthesis of verses more directly related to EA. Also, I imagine that general lessons about how religious communities and ideas evolved have been investigated quite a bit in academia, using historical sources and sociological methods. All this makes me less excited about these specific applications.
On the other hand, I hope that it will inform more about how to communicate better with religious groups and lead to a better understanding of how EA-related views were seen in the past.
Also, David Manheim is doing some work in the space of Judaism and EA.
Thank you!
I've searched and found this post describing it. The summary:
Evidence Action is terminating the No Lean Season program, which was designed to increase household food consumption and income by providing travel subsidies for seasonal migration by poor rural laborers in Bangladesh, and was based on multiple rounds of rigorous research showing positive effects of the intervention. This is an important decision for Evidence Action, and we want to share the rationale behind it.
Two factors led to this, including the disappointing 2017 evidence on program performance coupled with operational challenges given a recent termination of the relationship with our local partner due to allegations of financial improprieties.
Ultimately, we determined that the opportunity cost for Evidence Action of rebuilding the program is too high relative to other opportunities we have to meet our vision of measurably improving the lives of hundreds of millions of people. Importantly, we are not saying that seasonal migration subsidies do not work or that they lack impact; rather, No Lean Season is unlikely to be among the best strategic opportunities for Evidence Action to achieve our vision.
In this 2017 post Emily Tench talks about "The extraordinary value of ordinary norms", which (I think) she wrote during an internship at CEA, where she got feedback and comments from Owen and others.