An ML safety insurance company - shower thoughts 2021-10-18T07:45:32.978Z
$3M XPRIZE for a company fighting Malaria 2021-06-28T15:24:28.880Z
A new proposal for regulating AI in the EU 2021-04-26T17:25:07.032Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization 2021-03-04T16:57:44.352Z
List of Under-Investigated Fields - Matthew McAteer 2021-01-30T10:26:32.896Z
Impact of Charity Evaluations on Evaluated Charities' Effectiveness 2021-01-25T13:24:59.265Z
Is Earth Running Out of Resources? 2021-01-02T20:08:59.452Z
Requests on the Forum 2020-12-22T10:42:51.574Z
What are some potential coordination failures in our community? 2020-12-12T08:00:25.858Z
On Common Goods in Prioritization Research 2020-12-10T10:25:10.275Z
Does Qualitative Research improve drastically with increasing expertise? 2020-12-05T18:28:55.162Z
Summary of "The Most Good We Can Do or the Best Person We Can Be?" - a Critique of EA 2020-11-28T07:41:28.010Z
Proposal for managing community requests on the forum 2020-11-24T11:14:18.168Z
Prioritization in Science - current view 2020-10-31T15:22:07.289Z
What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? 2020-10-21T04:44:57.757Z
Criteria for scientific choice I, II 2020-07-29T10:21:30.000Z
Small Research Grants Program in EA Israel - Request for feedback 2020-07-21T08:35:16.729Z
A bill to massively expand NSF to tech domains. What's the relevance for x-risk? 2020-07-12T15:20:21.553Z
EA is risk-constrained 2020-06-24T07:54:09.771Z
Workshop on Mechanism Design requesting Problem Pitches 2020-06-02T06:28:04.538Z
What are some good online courses relevant to EA? 2020-04-14T08:36:22.785Z
What do we mean by 'suffering'? 2020-04-07T16:01:53.341Z
Announcing A Volunteer Research Team at EA Israel! 2020-01-18T17:55:47.476Z
A collection of researchy projects for Aspiring EAs 2019-12-02T11:14:24.310Z
What is the size of the EA community? 2019-11-19T07:48:31.078Z
Some Modes of Thinking about EA 2019-11-09T17:54:42.407Z
Off-Earth Governance 2019-09-06T19:26:26.106Z
edoarad's Shortform 2019-08-16T13:35:05.296Z
Microsoft invests 1b$ in OpenAI 2019-07-22T18:29:57.316Z
Cochrane: a quick and dirty summary 2019-07-14T17:46:42.945Z
Target Malaria begins a first experiment on the release of sterile mosquitoes in Africa 2019-07-05T04:58:44.912Z
Babbling on Singleton Governance 2019-06-23T04:59:30.567Z
Is there an analysis that estimates possible timelines for arrival of easy-to-create pathogens? 2019-06-14T20:41:42.228Z
Innovating Institutions: Robin Hanson Arguing for Conducting Field Trials on New Institutions 2019-03-31T20:33:06.581Z
China's Z-Machine, a test facility for nuclear weapons 2018-12-13T07:03:22.910Z


Comment by EdoArad (edoarad) on Meditations on Caring · 2021-11-30T04:37:03.242Z · EA · GW

This sounds a bit similar to tonglen meditation, which I found interesting and somewhat useful.

Relatedly, I think it's worth practicing feeling a higher level of compassion and desire to change things as the imagined scale grows.

Comment by EdoArad (edoarad) on What Small Weird Thing Do You Fund? · 2021-11-26T04:56:01.196Z · EA · GW

I thought about donating to PlayPumps, just to make sure that our best anti-example is still spinning around

Comment by EdoArad (edoarad) on Frankfurt Declaration on the Cambridge Declaration on Consciousness · 2021-10-25T04:36:49.886Z · EA · GW

Definitely interested! :) 

Comment by EdoArad (edoarad) on Frankfurt Declaration on the Cambridge Declaration on Consciousness · 2021-10-24T14:32:50.843Z · EA · GW

It looks like the Cambridge Declaration is saying something like "we have no scientific reason to suspect that nonhuman animals are less conscious than humans", rather than "animals are conscious".  

This is the full declaration:

The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.[149]

You seem to be saying that the arguments given here don't make scientific sense. Is this because you think that current neuroscientific approaches to sentience are too far away from (something like) the hard question of consciousness, or is it something more concrete (like Luke's remark about the neocortex's potential role in consciousness)?

Comment by EdoArad (edoarad) on An estimate of the value of Metaculus questions · 2021-10-23T09:13:44.967Z · EA · GW

The three metrics feel more logarithmic than linear, so it'd probably make more sense to use addition rather than multiplication. However, I've tested it and it practically doesn't change the ordering for the top 50% and mostly influences the lower results (especially those that multiply to 0 😊). 
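For what it's worth, a toy simulation of the kind of check described above (with made-up log-normal scores and occasional zeros, not the actual Metaculus data) might look like:

```python
# Toy check: does switching from multiplying three roughly-logarithmic
# metrics to adding them change the ordering much near the top?
# All numbers here are hypothetical, for illustration only.
import random

random.seed(0)
N = 200

def sample_metrics():
    # Three log-normal-ish metric values per "question",
    # with a metric occasionally zeroed out (as in the real data).
    m = [random.lognormvariate(0, 1) for _ in range(3)]
    if random.random() < 0.2:
        m[random.randrange(3)] = 0.0
    return m

items = [sample_metrics() for _ in range(N)]

# Rank the items under the two aggregation rules (descending score).
by_product = sorted(range(N), key=lambda i: -(items[i][0] * items[i][1] * items[i][2]))
by_sum = sorted(range(N), key=lambda i: -(items[i][0] + items[i][1] + items[i][2]))

# Compare the top halves: items that multiply to 0 sink to the bottom
# under multiplication but can still rank anywhere under addition.
overlap = len(set(by_product[:N // 2]) & set(by_sum[:N // 2])) / (N // 2)
print(f"top-half overlap between the two rankings: {overlap:.0%}")
```

In runs of this sketch the top halves of the two rankings overlap substantially, while the zero-containing items account for most of the disagreement, which matches the observation above.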

(Also, it's clearly an irrelevant level of analysis, as I'd expect the problems to be more in the choice and definition of the metrics, and the valuations thereof)

Comment by EdoArad (edoarad) on People in bunkers, "sardines" and why biorisks may be overrated as a global priority · 2021-10-23T05:37:06.372Z · EA · GW

You might be interested in reading this investigation about whether civilization collapse can lead to extinction

Comment by EdoArad (edoarad) on An ML safety insurance company - shower thoughts · 2021-10-18T10:55:05.118Z · EA · GW

Thanks! Happy to see that :)

Comment by EdoArad (edoarad) on Forbes India - Effective Altruism · 2021-10-18T05:22:24.370Z · EA · GW

how about

Comment by EdoArad (edoarad) on Forbes India - Effective Altruism · 2021-10-17T18:20:38.806Z · EA · GW

I think that this is generated automatically, and you can write anything you want instead of "Effective Altruism" 😶 

Comment by EdoArad (edoarad) on Is "founder effects" EA jargon? · 2021-10-15T12:54:11.152Z · EA · GW

Turns out that there's also this fiction book - 

Comment by EdoArad (edoarad) on When and how should an online community space (e.g., Slack workspace) for a particular type/group of people be created? · 2021-10-15T10:35:08.335Z · EA · GW

A different approach could be to first get in touch with, say, 2-5 people who are (semi-)professionally interested in such a discussion space, and optimize for them for starters. Maybe that'd mean just scheduling recurring calls, or having an email exchange. Once you have a strong core group, it should be easy to expand more publicly.  

Besides that, I don't really have much to add to the answers (and a comment!) thus far. 

Comment by EdoArad (edoarad) on Looking for cofounder on EA-driven startup. Also, free insomnia care program · 2021-10-11T07:46:52.950Z · EA · GW

Sounds exciting, good luck!!

(Note that the link has an extra "." and should be

Comment by EdoArad (edoarad) on Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness · 2021-10-05T07:06:36.350Z · EA · GW

Edited to fix this, thanks!

Comment by EdoArad (edoarad) on [PR FAQ] Tagging users in posts and comments · 2021-10-02T08:00:16.149Z · EA · GW

Yes, please!

Comment by EdoArad (edoarad) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T06:59:53.305Z · EA · GW

(Note that there is a relation between these questions - the sum of the last three probabilities is twice the first)

Comment by EdoArad (edoarad) on Why I am probably not a longtermist · 2021-09-25T06:18:13.647Z · EA · GW

Good points, thanks :) I agree with everything here.

One view on how we impact the future asks how we would want to construct it, assuming we had direct control over it. I think this view lends more support to the points you make, and it's where population ethics feels much murkier to me. 

However, there are some things that we can put some credence on future people valuing. For example, I think that it's more likely than not that future people will value their own welfare. So while this isn't an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions, and it definitely points at where (a potentially enormous amount of) value lies. Say, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising the interventions there actually are).

Regarding quasi-aesthetic desires, I agree and think that this is very important to understand further. Personally, I'm confused as to whether I should value these kinds of desires (even at the expense of something based on welfarism), or whether I should think of these as a bias to overcome. As you say, I also guess that this might be behind some of the reasons for differing stances on cause prioritization.

Comment by EdoArad (edoarad) on Why I am probably not a longtermist · 2021-09-24T09:01:23.258Z · EA · GW

Thanks for this clear write-up in an important discussion :) 

I'm not sure where exactly my own views lie, but let me engage with some of your points with the hope of clarifying my own views (and hopefully also help you or other readers).

You say that you care more about the preferences of people than about total wellbeing, and that it'd change your mind if it turned out that people today prefer longtermist causes. 

What do you think about the preferences of future people? You seem to take the "rather make people happy than make happy people" point of view on population ethics, but future people's preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it that you don't take their preferences as that important. 

That said, you clearly do care about the shape of the future of humanity. Whether people have freedom, whether people suffer, whether they are morally righteous, etc. In fact, you seem to be pretty pessimistic about humanity's future in those aspects. Also, it seems like you aren't interested in transhumanist futures - at least, not how they are usually depicted. 

Some thoughts on that. But first, please let me know if (where) I was off in any of the above. Sorry if I've misinterpreted your views.

  1. I think that the length of the long-term future might be a strong double crux here. If you'd expect the future to be mostly devoid of value, or even to hold not many orders of magnitude more value than the near future, then I'd find it very hard to justify working on longtermist causes (mostly due to tractability). Instead of addressing that, I'll just respond to your other points conditional on there being a likely long-term future with lots of valuable life.
  2. I feel some uneasiness about not considering future people's preferences as mostly equal to those of people alive today. I think the way I feel about it is somewhat like child-rearing: I'd want some sort of balance between directing my children's future to become "better people" and giving them the freedom to make their own choices and binge on Netflix. Furthermore, I can already predict many of their preferences, for which I can make some preparation (say, save up money or buy an apartment in a child-friendly area). Another analogy here is that of colonialism, where one entity acts to shape the future of another (weaker) entity. Overall, I feel like we have a lot of responsibility for future people, and we should take care not to enforce our own worldview too much. 
  3. Very relevant is the question of whether moral growth is possible (or even expected). I'm not sure of my own views here, but I definitely think that improving moral progress could potentially be a very important cause. 
  4. I think that some sort of a transhumanist future is inevitable. It's hard for me to imagine economic/intellectual progress completely stopping or slowing down drastically forever without any major catastrophe, and it's hard for me to imagine non-transhumanist futures with consistent exponential growth. Holden Karnofsky makes this case in his recent The Most Important Century series.
  5. Now, since you seem to disvalue transhumanist futures, I think this might be where our opinions differ the most, but also where they're most malleable. I can imagine many potential futures where sentient beings live in abundance and have meaningful lives. I don't think that paperclip maximizers and ruthless dictatorships are the most likely futures (although I do think that these kinds of futures are important risks). For one thing, our values aren't that weird. But other than that, a likely scenario is that of gradual moral change, rather than locking in to some malign set of random values. I think that some discussions of Utopias are very relevant here, but they may be misleading. This is something I want to think more about, as I'm easily biased into believing weird futuristic scenarios.
Comment by EdoArad (edoarad) on What are some effective/impactful charities in the domain of human rights and anti-authoritarianism? · 2021-09-23T07:34:23.266Z · EA · GW

Hmm, I'm not sure I understand the relationship between economic growth and improving human rights. (Well, authoritarian regimes tend to lower growth, but do you think that this seems like the best way to increase economic growth?)

Comment by EdoArad (edoarad) on Suggested norms about financial aid for EAG(x) · 2021-09-22T13:14:10.628Z · EA · GW

It might actually make sense that the outcome would be that no one would end up paying. E.g., there could be enough money in community building that it'd actually be better if people gave to causes with more room for funding and had the entire thing subsidized. 

I don't expect that to be the case, but this doesn't feel like a reductio ad absurdum.

Comment by EdoArad (edoarad) on UK's new 10-year "National AI Strategy," released today · 2021-09-22T12:03:37.628Z · EA · GW

It's great that they explicitly mention A(G)I Safety and Catastrophic Risks as a part of their agenda

Comment by EdoArad (edoarad) on The Fable of the Dragon-Tyrant · 2021-09-21T06:28:34.392Z · EA · GW

Great animated video version by CGP Grey - link

Comment by EdoArad (edoarad) on The motivated reasoning critique of effective altruism · 2021-09-20T12:53:42.486Z · EA · GW

Can you say a bit more about the first point? Do you think of cases of EA groups that were too disagreeable and paranoid to be sustained, or cases of the opposite sort? Or maybe cases where motivated reasoning was targeted directly?

Comment by EdoArad (edoarad) on A Website for Aggregating and Visualising EA Data · 2021-09-12T06:39:05.422Z · EA · GW

That's beautiful! Thanks for creating the website and for this interesting writeup :) 

Comment by EdoArad (edoarad) on How to succeed as an early-stage researcher: the “lean startup” approach · 2021-09-07T14:18:05.383Z · EA · GW

This is great advice, thanks for writing this!

Several people have also recommended the book The Lean PhD, which I haven't read yet, but it has some obvious parallels with this post :) 

Comment by EdoArad (edoarad) on Neglected biodiversity protection by EA. · 2021-09-04T10:17:03.502Z · EA · GW

I think this definitely is something that should be considered more under the lens of effective altruism. Currently, the vast majority of EA efforts come from a welfarist perspective, and if I understand correctly, biodiversity loss should be mostly neutral from that perspective. I guess that this is the main reason here, other than simply no one having picked up the gauntlet.

It's definitely important to optimize "doing the most good" in moral frameworks other than welfarism. In particular, I'd be very happy to see an analysis of what the best ways to contribute to preventing biodiversity loss would be, along with a good explanation of the moral framework involved (why it's reasonable, and whether biodiversity loss indeed seems like the most important cause in that framework).

Broadly speaking, I think that there are two main ways of actually going about it in the EA community. One would be to develop this idea more and engage with the "intellectual" effort of figuring out how to do the most good. This could be done by, say, writing more about it, or by reaching out to people to discuss this (perhaps at the upcoming EAG). The other would be to set up an EA project along these lines, and try to secure funding from EA Funds or Open Phil or elsewhere. I'd expect both to be very challenging and to take a long time.

Comment by EdoArad (edoarad) on Is volunteer computing an easily accessible way of effective altruism? · 2021-08-28T07:15:03.361Z · EA · GW

That's a great suggestion! I'm not sure what exactly I think about it, but I'll just write some of my immediate thoughts:

  1. It seems like both the cost and the benefits for one person are very low. The cost is likely less than $100 per year. The biggest VC projects seem to have hundreds of thousands of volunteers, so one may contribute only a tiny fraction (on the order of a hundred-thousandth) of the effort. And then, it's not clear what the impact of the scientific project is.
  2. Generally speaking, finding high-impact scientific projects is really hard. I would guess that most research being done is of very low impact.
  3. It might be interesting to think of how the EA community might scale this up. Perhaps it would be great if we could rather cheaply get thousands of people to start doing VC, maybe with a focus on the more promising efforts. Maybe even purely for the environmental benefit of using electricity that would otherwise be spent on idle machines, although I'm not sure if that actually works out. 
  4. One alternative might be to mine cryptocurrency and donate the rewards (I'm not sure whether that's net positive...).
  5. Another alternative is to consider donating directly to scientific research to be spent on other sources of computing (or perhaps advertisements to their VC efforts or something like it).
Comment by EdoArad (edoarad) on Teaching You How To Learn post 1 is live! · 2021-08-18T07:40:44.185Z · EA · GW

I've had a period of being somewhat obsessed with improving learning, mostly in the context of improving the performance of high-achieving math & CS students. Some random thoughts:

  1. I loved this website.
  2. I'm not sure that good memorization techniques are the most important learning tool for many (most?) fields.
  3.  Also, it's likely that these aren't the major bottleneck for most people. I expect motivation and focus to be higher on the list. 
  4. There's something interesting going on regarding Bloom's 2-sigma problem.
  5. In a degree context, it might be important to identify the small number of important ideas/skills to learn that could best help with the rest of the degree and further on in life. 
  6. Generally speaking, I'm not sure how important learning quality is during a degree (as opposed to high grades) when considering one's potential impact on the world. I have some worry that in practice, all one needs is to get through the door to a good career that they'd be motivated to engage in, and to learn on the job whatever they lacked from school. 
  7. This reminds me a bit of this podcast, delivered by a psychologist interested in helping us to be more effective altruists. 
Comment by EdoArad (edoarad) on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T09:54:21.827Z · EA · GW

I'd be interested in thinking more about this, even as just a thought experiment :) 

Comment by EdoArad (edoarad) on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T08:41:19.386Z · EA · GW

Interesting! Do you know anything about the state of regulations around this? 

(sorta related, there are several pet cloning services)

I'm not sure what the potential downsides of such widespread tech are, but it seems like something that could scale well if done as a for-profit company.

Comment by EdoArad (edoarad) on Open Thread: July 2021 · 2021-08-02T13:05:11.494Z · EA · GW

Hey there! 

There is a small and growing group of EAs interested in improving science (see this recent post for example). Let me know if you'd be interested in joining our Slack channel :) 

I'd also be interested in chatting about your research and how you think people in your field can do the most good. Let me know if you'd like to chat 😊

Comment by EdoArad (edoarad) on Arne's Shortform · 2021-07-31T18:36:01.651Z · EA · GW

Cool! Through data science I guess? 

Comment by EdoArad (edoarad) on Testing Newport's "Digital Minimalism" at CEEALAR · 2021-07-25T08:33:44.981Z · EA · GW

Cool! Looking forward to the results and your takeaways from it :) 

Comment by EdoArad (edoarad) on Propose and vote on potential EA Wiki entries · 2021-07-19T15:24:27.632Z · EA · GW

What about posts that discuss personal career choice processes (like this)?

Comment by EdoArad (edoarad) on The act of giving itself has positive impact · 2021-07-18T11:24:52.581Z · EA · GW

Ah! Thanks, this makes more sense to me :) 

I'd be interested if you want to give some more information about what the positive impact is and how large it is. I'm assuming you're thinking less of the effects of giving on happiness and more of some cultural change that generally makes people more moral? 

Comment by EdoArad (edoarad) on Should someone start a grassroots campaign for USA to recognise the State of Palestine? · 2021-07-18T11:13:02.613Z · EA · GW

I only saw this post now. We definitely want to look more into the Israel-Palestine conflict in EA Israel. I'm personally a bit skeptical about the potential tractability and neglectedness of this cause in general, and this space is politically hazardous, but I think we may be able to find good opportunities for people in Israel interested in working on this.    

Comment by EdoArad (edoarad) on The act of giving itself has positive impact · 2021-07-17T18:31:51.262Z · EA · GW

Why do you think giving by itself might have a negative impact? 

Comment by EdoArad (edoarad) on The case against “EA cause areas” · 2021-07-17T15:34:08.214Z · EA · GW

Also, The fidelity model of spreading ideas

Comment by EdoArad (edoarad) on Arne's Shortform · 2021-07-16T15:21:38.935Z · EA · GW

Related - The Upper Limit of Value

Comment by EdoArad (edoarad) on Intervention report: Agricultural land redistribution · 2021-07-14T21:21:36.762Z · EA · GW

It's really exciting for me to see this thorough investigation into a neglected area which I've never heard of, even though it turns out unlikely to be cost-effective. 

I'm curious, what prompted you to start this investigation? How did you discover How Asia Works, or otherwise learn about this suggested intervention?

Also, how excited would you be for further research into Land Reform?  (both more into Land Redistribution and into Land Tenure Reforms)

Comment by EdoArad (edoarad) on [linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." · 2021-07-09T06:19:56.577Z · EA · GW

Thanks for making this  :) 

Some technical suggestions for these posts: 

  1. Instead of writing "[linkpost]", use the link feature.
  2. It'd be easier for forum readers if the title was shorter. So, maybe just use something like Narration: "name of post".
  3. It would be nice if the posts were all together in a sequence.
  4. I suggest using pretty much the same tags as the original post (not "forum prize" though).
  5. I've suggested to JP (the developer of the forum) that posts tagged with audio would have an icon.
Comment by edoarad on [deleted post] 2021-07-08T17:27:47.208Z

psychology of giving? 

Comment by EdoArad (edoarad) on edoarad's Shortform · 2021-07-05T08:32:03.752Z · EA · GW

GiveWell got about $33.5M in Ethereum donations and $3.5M in Bitcoin donations

Comment by EdoArad (edoarad) on [Link] Reading the EA Forum; audio content · 2021-07-05T07:42:19.218Z · EA · GW

One important difference is that the EA forum is a continuous stream and people probably mostly read posts by the frontpage feed, rather than looking directly for information (which is probably more the case for the skills profiles)

Comment by EdoArad (edoarad) on List of EA-related organisations · 2021-07-05T07:39:16.828Z · EA · GW

Turns out that 80k just published a talk with Max Roser (who leads OWID). He seems to be at least well acquainted with EA and funded by EAs

Max Roser: But still, I think we should do it. And I also saw on some effective altruism forums online that people are discussing that question, like how good of an idea is it to donate to Our World in Data. And they were relying on some of the information that was publicly available, but I think we could do a better job, when we have some time, to provide more of the information that those people discussed. And some of them also ended up donating. We got several grants in the last few years from effective altruist-aligned donors.

Comment by EdoArad (edoarad) on List of EA-related organisations · 2021-07-05T07:24:23.140Z · EA · GW

I was surprised to see Our World In Data on this list. Which of the criteria holds?

  • Have explicitly aligned themselves with EA
  • Are currently recommended by GiveWell or Animal Charity Evaluators
  • Were incubated by Charity Entrepreneurship
  • Have engaged with the EA community (e.g. by posting on the EA Forum or attending EA Global)
Comment by EdoArad (edoarad) on Big List of Cause Candidates · 2021-07-05T07:10:02.120Z · EA · GW

Related - Problem areas beyond 80,000 Hours current priorities (Jan 2020).

From there, at least Migration Restrictions and Global Public Goods seem to be missing from this list

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-03T07:44:54.638Z · EA · GW

Ah, I see! Yeah, the way it's sorted makes it very confusing (it's based on the tag upvotes, which is rather irrelevant here)

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-03T04:52:29.415Z · EA · GW

The forum prize is ongoing, the most recent is for March (and I guess that the April edition should be out soon) 

Comment by EdoArad (edoarad) on Which EA forum posts would you most like narrated? · 2021-07-02T06:00:44.104Z · EA · GW

How about the posts that won the Forum Prize

Comment by EdoArad (edoarad) on EA needs consultancies · 2021-07-01T08:28:47.452Z · EA · GW

Do you, or anyone else, have some more insight into the consultancy work that's needed around statistics and data science?