Posts

Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization 2021-03-04T16:57:44.352Z
List of Under-Investigated Fields - Matthew McAteer 2021-01-30T10:26:32.896Z
Impact of Charity Evaluations on Evaluated Charities' Effectiveness 2021-01-25T13:24:59.265Z
Is Earth Running Out of Resources? 2021-01-02T20:08:59.452Z
Requests on the Forum 2020-12-22T10:42:51.574Z
What are some potential coordination failures in our community? 2020-12-12T08:00:25.858Z
On Common Goods in Prioritization Research 2020-12-10T10:25:10.275Z
Does Qualitative Research improve drastically with increasing expertise? 2020-12-05T18:28:55.162Z
Summary of "The Most Good We Can Do or the Best Person We Can Be?" - a Critique of EA 2020-11-28T07:41:28.010Z
Proposal for managing community requests on the forum 2020-11-24T11:14:18.168Z
Prioritization in Science - current view 2020-10-31T15:22:07.289Z
What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? 2020-10-21T04:44:57.757Z
Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z
EA is risk-constrained 2020-06-24T07:54:09.771Z
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z
What is the size of the EA community? 2019-11-19T07:48:31.078Z
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z
Off-Earth Governance 2019-09-06T19:26:26.106Z
edoarad's Shortform 2019-08-16T13:35:05.296Z
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z

Comments

Comment by EdoArad (edoarad) on RationalityEnhancementGroup's Shortform · 2021-04-19T15:50:46.072Z · EA · GW

I suggest publishing this as a post, rather than in the shortform. Is there any particular reason you chose not to?

Comment by EdoArad (edoarad) on Problems of evil · 2021-04-19T10:50:48.487Z · EA · GW

I've moved this post to "Personal Blog", as I'm not sure it's strictly relevant to Doing The Most Good or to the EA community. (To be clear, this doesn't speak to the quality of the post or how likely it is to interest people in the community - this post, like many of your others, looks high quality and of interest to people in the community.) Please let me know if you think otherwise!

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-10T07:00:51.005Z · EA · GW

😉

Comment by EdoArad (edoarad) on The EA Forum Editing Festival has begun! · 2021-04-07T12:45:18.863Z · EA · GW

Love it! I'm in 💃🕺💃🕺

Comment by EdoArad (edoarad) on Cause area : Fundamental Research · 2021-04-07T06:09:30.240Z · EA · GW

You can read some of my thoughts on relevant issues here, an analysis of the value of medical research here, and very relevant discussions under the Differential Progress tag.

Comment by EdoArad (edoarad) on Cause area : Fundamental Research · 2021-04-06T11:40:49.595Z · EA · GW

Sorry, but I've downvoted this post. Generally speaking, I think it is very possible that some fundamental research is extremely important, but I don't think that this post adds value to that discussion.

The two major problems that I see in this post, besides its brevity (which might actually be good in some cases!), are

  1. One-sidedness. This post seems to try to persuade that fundamental research is important, rather than assess it truthfully. I find that such posts usually don't help me, because I expect there to be counterarguments which are omitted.
  2. Lack of engagement with relevant arguments. This topic has been addressed before, and I heavily encourage you to search more on the forum (you can start with this tag).

I would really like to see more discussion on this topic and I definitely encourage you to read and write more about it! (A potentially fun experiment I'd like someone to do is to have a Change My View thread about such a topic; perhaps you can have one on the importance of fundamental research. I'd also naturally be very interested in any independent research or synthesis of information on this topic if you're up for doing more work.)

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-04T15:45:10.615Z · EA · GW

catchy!

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-04T05:22:20.500Z · EA · GW

I can't wait for a new Bennian paradigm shift

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-03T07:56:04.650Z · EA · GW

Sorry, I've tried very hard, but Guesstimate is near perfection

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T15:39:13.821Z · EA · GW

Thanks for your valuable critique! I've updated our model accordingly. 

I must say that I should have been more skeptical when my calculation resulted in a post that's worth 0.4 QALYs. Now, after also raising our estimates for total Karma (wow!), we estimate our impact as 0.018 QALYs, which makes more sense.

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T10:31:26.103Z · EA · GW

this is so silly, I love it!

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T10:20:19.883Z · EA · GW

How about Smiles Without Borders (SWB)?  Potentially include a "Research Institute"

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T06:27:35.002Z · EA · GW

How about Caring Tuna? This would surely get support from Open Phil

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T06:22:31.967Z · EA · GW

I think that the forum itself is nothing without the people and the community within. We, the users, are the ones that upvote or downvote posts. From this emerges a collective intelligence that deems what is worthy for the EA community and what should be strongly downvoted to oblivion, which in return explains what content gets written.

I propose to call this collective intelligence The Karma Police.

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T06:14:27.751Z · EA · GW

With so many research institutions, we should really have an organization to support this ecosystem. I propose RIRI - Research Institutes Research Institute.

Comment by EdoArad (edoarad) on New Top EA Causes for 2021? · 2021-04-02T03:20:04.703Z · EA · GW

How about: Probability? Good!

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-02T03:09:47.756Z · EA · GW

OvercomingScriptophobia

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-01T18:00:57.232Z · EA · GW

I generally believe that EAs should keep their identities small. Small enough so it wouldn't really matter what Julia you are

Comment by EdoArad (edoarad) on New Top EA Causes for 2021? · 2021-04-01T17:56:34.965Z · EA · GW

I think it's interesting to note that it will always be 137 years ahead, regardless of the current year. That is, unless we learn to make better predictions. But it doesn't matter, since currently we should only care about the next 137 years!

Comment by EdoArad (edoarad) on New Top EA Causes for 2021? · 2021-04-01T15:20:39.750Z · EA · GW

I think that QURI should be called Probably Good

Comment by EdoArad (edoarad) on New Top EA Causes for 2021? · 2021-04-01T15:18:33.901Z · EA · GW

ConsEAder applyEAng to NWWC

Comment by EdoArad (edoarad) on Announcing "Naming What We Can"! · 2021-04-01T15:14:06.634Z · EA · GW

Love it! I also thought your corporate campaigning org idea is fantastic 😁

Comment by EdoArad (edoarad) on [New org] Canning What We Give · 2021-04-01T14:54:53.447Z · EA · GW

Jinx!

Comment by EdoArad (edoarad) on [New org] Canning What We Give · 2021-04-01T14:54:00.882Z · EA · GW

"Yes We Can" 

Comment by EdoArad (edoarad) on Propose and vote on potential tags · 2021-03-31T11:15:39.498Z · EA · GW

😢

Comment by EdoArad (edoarad) on Propose and vote on potential tags · 2021-03-31T08:07:30.463Z · EA · GW

I would prefer Blockchain, as it is more general than cryptocurrency and doesn't confuse people with the field of cryptology

Comment by EdoArad (edoarad) on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-09T14:51:32.901Z · EA · GW

Thanks for writing this post and crossposting it here on the EA forum! :) 

This post is also posted on LessWrong and discussed there.

Comment by EdoArad (edoarad) on Total Funding by Cause Area · 2021-03-08T08:20:47.795Z · EA · GW

Thanks for writing this post and for the great graphs! 

One relevant thought regarding Open Phil: my understanding is that they could have expanded their work within each cause area, but they are not giving more either because they don't have opportunities that seem better than saving the money for future years (even though they want to give the money away early), or because they have self-imposed upper bounds to leave opportunities for other philanthropists (so they don't donate to GiveWell more than half of what it receives in donations).

[I'm sure that the previous paragraph is wrong in some details, but overall I think it paints the right picture. I'd love to be corrected, and sorry for not taking the time to verify and find supporting links]

Comment by EdoArad (edoarad) on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-07T10:27:50.752Z · EA · GW

Sure. So, consider x-risk as an example cause area. It is a pretty broad cause area and contains secondary causes like mitigating AI risk or biorisk. Developing it as a common cause area involves advances like understanding what the different risks are, identifying relevant political and legal actions, making a strong ethical case, and gathering broad support.

So even if we think that the best interventions are likely in, say, AI safety, it might be better to develop a community around a broader cause area. (Here I'm thinking of a cause area more like GiveWell's 2013 definition.)

Comment by EdoArad (edoarad) on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-07T08:30:07.431Z · EA · GW

This matches at least my take on this. 

Prescriptively, I would add that this contributes to the importance of being open to other people's ideas about how to do good (even if they are not familiar with EA).

Comment by EdoArad (edoarad) on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-07T08:22:30.160Z · EA · GW

I agree with this. I think one important consideration here is who are the agents for which we are doing the prioritization. 

If our goal is to start a new charity and we are comparing causes, then all we should care about is the best intervention we can find - the one which we will end up implementing. If, in contrast, our goal is to develop a diverse community of people interested in exploring and solving some cause, we might care about a broader range of interventions, as well as some qualities of the problem that help increase overall cohesiveness between the different actors.

Comment by EdoArad (edoarad) on Early Alpha Version of the Probably Good Website · 2021-03-01T20:14:27.063Z · EA · GW

The website looks amazing! I love the clear and concise writing, with a ton of leads to further material (and I especially think it's awesome that you point to career profiles from AAC and 80k in almost the same breath as PG-original content). It's also very clear what you are offering as an organization and on the website. Well done, and looking forward!

Comment by EdoArad (edoarad) on Running an AMA on the EA Forum · 2021-02-20T09:40:26.270Z · EA · GW

I've thought of this as an alternative in cases where the person thinks that there are likely many people with more experience than themselves, but where they can still generate useful answers and insights.

Comment by EdoArad (edoarad) on Running an AMA on the EA Forum · 2021-02-20T09:37:29.031Z · EA · GW

An alternative to an AMA might be an open discussion thread on a given topic, perhaps with specific people committing to be active in the discussions (could be experts, but not necessarily).

Comment by EdoArad (edoarad) on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T07:42:38.420Z · EA · GW

Thanks for writing this post! 

You might be interested in the work of Charity Entrepreneurship on Family Planning (link to a blog post about why it matters, where they also list a potential positive impact from reducing population growth). Here are some more explicit models of the relation of reducing population growth (via family planning) to animal welfare and CO2 emissions.
Also see their list of top charity ideas and an in-depth report on their top charity idea, which they will hopefully incubate in the upcoming program.

Comment by EdoArad (edoarad) on Let's Fund Living review: 2020 update · 2021-02-13T13:39:25.851Z · EA · GW

Thanks for the response and for taking the time to add references! I'm glad to see two EA orgs have put substantial effort into this, and it's terrific that it had such a direct and potentially large impact on someone's career (and I'd bet there are many other undocumented cases).

Comment by EdoArad (edoarad) on Let's Fund Living review: 2020 update · 2021-02-13T07:32:43.391Z · EA · GW

Thanks for compiling the review, it's exciting to see that your work seems to be highly cost-effective! I have a couple of questions I'm curious about.

  1. The Case Against Randomista Development was exceptionally well received. Do you know of any direct impact it had? (say in terms of money moved or follow-up research done). Generally, how do you think about the impact it has?
  2. How much do you think crowdfunding can grow? Do you think that "donations available through crowdfunding" is a limiting factor for expanding your work?

Comment by EdoArad (edoarad) on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-09T06:25:07.686Z · EA · GW

I've skimmed the book and it looks very interesting and relevant. It surprises me that people have downvoted this post - could someone who did so explain their reasoning?

Comment by EdoArad (edoarad) on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-07T15:41:39.087Z · EA · GW

Thanks for the answer! I want to make sure that I get this clearly, if you are still taking questions :) 

Are you making attempts to diversify grants along these kinds of axes, in cases where there is no clear-cut position? My current understanding is that you do, but mostly implicitly.

Comment by EdoArad (edoarad) on List of Under-Investigated Fields - Matthew McAteer · 2021-02-02T18:18:27.592Z · EA · GW

But seriously, I'd really love a deeper dive on many of these topics and other suggestions for academic disciplines

Comment by EdoArad (edoarad) on List of Under-Investigated Fields - Matthew McAteer · 2021-02-02T18:16:56.627Z · EA · GW

I think that 

Is really really REALLY important, but not everyone agrees. You can find more information in this critical review. 😘

Comment by EdoArad (edoarad) on List of Under-Investigated Fields - Matthew McAteer · 2021-02-02T18:13:18.115Z · EA · GW

👍

Comment by EdoArad (edoarad) on Why "cause area" as the unit of analysis? · 2021-01-31T10:43:25.319Z · EA · GW

Nice find!

Comment by EdoArad (edoarad) on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T16:55:18.824Z · EA · GW

What cause-prioritization efforts would you most like to see from within the EA community?

Comment by EdoArad (edoarad) on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T16:54:21.689Z · EA · GW

How would you define a "cause area" and "cause prioritization", in a way which extends beyond Open Phil? 

Comment by EdoArad (edoarad) on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T16:51:47.775Z · EA · GW

How much worldview diversification and dividing of capital into buckets do you have within each of the three main cause areas, if at all? For example, I could imagine a divide between short and long AI timelines, or between policy-oriented and research-oriented grants.

Comment by EdoArad (edoarad) on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-30T16:41:46.998Z · EA · GW

I'm curious about your take on prioritizing between science funding and other causes. In the 80k interview you said:

When we were starting out, it was important to us that we put some money in science funding and some money in policy funding. Most of that is coming through our other causes that we already identified, but we also want to get experience with those things.

We also want to gain experience in just funding basic science, and doing that well and having a world-class team at that. So, some of our money in science goes there as well.

That’s coming much less from a philosophy point of view and much more from a track record… Philanthropy has done great things in the area of science and in the area of policy. We want to have an apparatus and an infrastructure that lets us capitalise on that kind of opportunity to do good as philanthropists.

[...]

So, I feel like this isn’t Open Phil’s primary bet, but I could imagine in a world where there was a lot less funding going to basic science — like Howard Hughes Medical Institute didn’t exist — then we would be bigger on it.

My question: Is funding in basic science less of a priority because there are compelling reasons to deprioritize funding more projects there generally, because there is less organizational comparative advantage (or not enough expertise yet), or something else?

Comment by EdoArad (edoarad) on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-30T13:05:56.746Z · EA · GW

This project sounds great, I love how you flesh out the plan and pre-commit to it. 

I have a minor concern, which might be mistaken as I don't have any relevant experience. In the "what we can learn from religious texts" section you mentioned potential applications to community building and spreading ideas. However, the process involves a synthesis of verses more directly related to EA. Also, I imagine that general lessons about how religious communities and ideas evolved have been investigated quite a bit in academia, using historical sources and sociological methods. All this makes me less excited about these specific applications.

On the other hand, I hope that it will inform more about how to communicate better with religious groups and lead to a better understanding of how EA-related views were seen in the past. 

Also, David Manheim is doing some work in the space of Judaism and EA.

Comment by EdoArad (edoarad) on Impact of Charity Evaluations on Evaluated Charities' Effectiveness · 2021-01-27T17:51:57.604Z · EA · GW

Thank you!

I've searched and found this post describing it. The summary:

Evidence Action is terminating the No Lean Season program, which was designed to increase household food consumption and income by providing travel subsidies for seasonal migration by poor rural laborers in Bangladesh, and was based on multiple rounds of rigorous research showing positive effects of the intervention. This is an important decision for Evidence Action, and we want to share the rationale behind it.  

Two factors led to this, including the disappointing 2017 evidence on program performance coupled with operational challenges given a recent termination of the relationship with our local partner due to allegations of financial improprieties. 

Ultimately, we determined that the opportunity cost for Evidence Action of rebuilding the program is too high relative to other opportunities we have to meet our vision of measurably improving the lives of hundreds of millions of people. Importantly, we are not saying that seasonal migration subsidies do not work or that they lack impact; rather, No Lean Season is unlikely to be among the best strategic opportunities for Evidence Action to achieve our vision.

Comment by EdoArad (edoarad) on Everyday Longtermism · 2021-01-27T09:07:16.177Z · EA · GW

In this 2017 post, Emily Tench talks about "The extraordinary value of ordinary norms", which (I think) she wrote during an internship at CEA, where she got feedback and comments from Owen and others.