Posts

Should there be an EA crowdfunding platform? 2018-05-01T15:34:01.183Z · score: 15 (17 votes)
Would an EA world with limited money fund costly treatments? 2018-03-31T01:20:08.675Z · score: 1 (1 votes)
Would it be a good idea to create a 'GiveWell' for U.S. charities? 2018-02-04T21:29:41.564Z · score: 6 (8 votes)
How much further does your dollar go overseas? 2018-02-04T21:28:01.733Z · score: 16 (16 votes)
Which five books would you recommend to an 18 year old? 2017-09-05T21:10:19.667Z · score: 10 (10 votes)
Does Effective Altruism Lead to the Altruistic Repugnant Conclusion? 2017-07-27T20:49:06.425Z · score: 2 (4 votes)

Comments

Comment by randomea on What are the leading critiques of "longtermism" and related concepts · 2020-06-04T22:11:19.980Z · score: 2 (2 votes) · EA · GW

Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don't think you can get a good sense of my 26,000-word draft from my 570-word comment from two years ago. I'll send you my draft when I'm done, but until then, I don't think it's productive for us to go back and forth like this.

Comment by randomea on What are the leading critiques of "longtermism" and related concepts · 2020-06-02T22:19:48.568Z · score: 1 (1 votes) · EA · GW

Thanks Pablo and Ben. I already have tags below each argument for what I think it is arguing against. I do not plan on doing two separate posts, as some arguments are against both longtermism and the longtermist case for working to reduce existential risk. Each argument and its response is presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

Comment by randomea on What are the leading critiques of "longtermism" and related concepts · 2020-06-02T01:13:18.678Z · score: 24 (11 votes) · EA · GW

As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that is good because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

Comment by randomea on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-28T02:11:19.348Z · score: 4 (3 votes) · EA · GW

For those who are curious,

  • in April 2017, GiveWell had 18 full-time staff, while
  • 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers and works with four CEA staff.

Comment by randomea on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-26T19:16:49.310Z · score: 14 (9 votes) · EA · GW

Hi Ben,

Thank you to you and the 80,000 Hours team for the excellent content. One issue I've noticed is that a relatively large number of pages state that they are out of date (including several important ones). This makes me wonder why 80,000 Hours does not have substantially more employees. I'm aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, making good judgment more important, and that such judgment is quite difficult to assess?

Comment by randomea on A cause can be too neglected · 2020-04-08T00:36:30.701Z · score: 3 (2 votes) · EA · GW

It seems to me that there are two separate frameworks:

1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn't among the highest priority because it's not [insert one or more of the three]); and

2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).

Treating the second one as merely a formalization of the first one can be unhelpful when thinking through them. For example, even though the 80,000 Hours framework does not account for diminishing marginal returns, it justifies the inclusion of the crowdedness factor on the basis of diminishing marginal returns.

Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.
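
As a rough illustration of the kind of quantitative comparison described in (2), here is a minimal sketch in Python. The causes and scores are invented for illustration, and the scales are not 80,000 Hours' actual ones (as I understand it, 80,000 Hours scores each factor on a roughly logarithmic scale and sums the scores); the point is only that scoring each cause on each factor and comparing totals supports ranking across causes, which the informal rule-out framework does not do.

```python
# Illustrative sketch only: hypothetical causes and made-up scores,
# not actual 80,000 Hours ratings.
# Each cause gets a score for scale, neglectedness (crowdedness), and
# solvability; causes are then compared by their total score.

causes = {
    # cause: (scale, neglectedness, solvability)
    "Cause A": (12, 6, 4),
    "Cause B": (10, 8, 5),
    "Cause C": (14, 2, 3),
}

totals = {name: sum(scores) for name, scores in causes.items()}

for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: total score {total}")
```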

Comment by randomea on Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? · 2020-03-28T00:54:21.338Z · score: 4 (4 votes) · EA · GW

In his blog post "Why Might the Future Be Good?", Paul Christiano writes:

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.

(Please read all of "How Much Altruism Do We Expect?" for the full context.)

Comment by randomea on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-23T14:55:46.518Z · score: 4 (3 votes) · EA · GW

Thanks Lucy! Readers should note that Elie's answer is likely partly addressed to Lucy's question.

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T06:26:38.095Z · score: 3 (2 votes) · EA · GW

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high-uncertainty arguments? (See here and here at 34:38 for pushback.)

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T05:30:27.963Z · score: 10 (5 votes) · EA · GW

Should non-suffering-focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?

Comment by randomea on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-18T04:56:33.021Z · score: 2 (2 votes) · EA · GW

Has your thinking about donor coordination evolved since 2016, and if so, how? (My main motivation for asking is that this issue is the focus of a chapter in a recent book on philosophical issues in effective altruism though the chapter appears to be premised on this blog post, which has an update clarifying that it has not represented GiveWell's approach since 2016.)

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T04:18:27.355Z · score: 3 (2 votes) · EA · GW

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinite ethics is relatively low priority?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T03:57:18.714Z · score: 13 (7 votes) · EA · GW

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?

Comment by randomea on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-18T03:54:47.899Z · score: 1 (3 votes) · EA · GW

(This comment assumes GiveWell would broadly agree with a characterization of its worldview as consequentialist.) Do you agree with the view that, given moral uncertainty, consequentialists should give some weight to non-consequentialist values? If so, do you think GiveWell should give explicit weight to the intrinsic value of gender equality apart from its instrumental value? And if yes, do you think that, in considering the moral views of the communities that GiveWell operates in, it would make sense to give substantially more weight to the views of women than of men on the value of gender equality?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T03:37:41.890Z · score: 17 (6 votes) · EA · GW

There are many ways that technological development and economic growth could potentially affect the long-term future, including:

  • Hastening the development of technologies that create existential risk (see here)
  • Hastening the development of technologies that mitigate existential risk (see here)
  • Broadly empowering humanity (see here)
  • Improving human values (see here and here)
  • Reducing the chance of international armed conflict (see here)
  • Improving international cooperation (see the climate change mitigation debate)
  • Shifting the growth curve forward (see here)
  • Hastening the colonization of the accessible universe (see here and here)

What do you think is the overall sign of the effect of economic growth on the long-term future? Is it different for developing and developed countries?

Note: The fifth bullet point was added after Toby recorded his answers.

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T02:50:58.483Z · score: 18 (7 votes) · EA · GW

Do you think that "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T02:44:14.943Z · score: 3 (2 votes) · EA · GW

How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T02:36:32.004Z · score: 3 (2 votes) · EA · GW

What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T02:18:07.613Z · score: 3 (2 votes) · EA · GW

Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse's EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few weeks" for context)?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T01:47:21.517Z · score: 21 (9 votes) · EA · GW

In an 80,000 Hours interview, Tyler Cowen states:

[44:06]
I don't think we'll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.

How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen's argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are "not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk"? Does positively shaping the development of artificial intelligence fall into that category?

Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of "reduc[ing] the risk of extinction for all future generations."

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T00:40:26.178Z · score: 3 (2 votes) · EA · GW

What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?

How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedge against the associated moral risk?

How important do you think those questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the above two paragraphs in the research agenda.)

Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?

Comment by randomea on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T00:32:14.985Z · score: 10 (7 votes) · EA · GW

Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk, or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, the unilateralist's curse, etc.)?

Comment by randomea on Quotes about the long reflection · 2020-03-08T14:24:05.817Z · score: 8 (4 votes) · EA · GW

In the new 80,000 Hours interview with Toby Ord, Arden Koehler asks:

Arden Koehler: So I’m curious about this second stage: the long reflection. It felt, in the book, like this was basically sitting around and doing moral philosophy. Maybe lots of science and other things and calmly figuring out, how can we most flourish in the future? I’m wondering whether it’s more likely to just look like politics? So you might think if we come to have this big general conversation about how the world should be, our most big general public conversation right now is a political conversation that has a lot of problems. People become very tribal and it’s just not an ideal discourse, let’s say. How likely is it do you think that the long reflection will end up looking more like that? And is that okay? What do you think about that?

Ord then gives a lengthy answer; the following portion is the most directly responsive:

Toby Ord: . . . I think that the political discourse these days is very poor and definitely doesn’t live up to the kinds of standards that I loftily suggest it would need to live up to, trying to actually track the truth and to reach a consensus that stands the test of time that’s not just a political battle between people based on the current levels of power today, at the point where they’ll stop fighting, but rather the kind of thing that you expect people in a thousand years to agree with. I think there’s a very high standard and I think that we’d have [to] try very hard to have a good public conversation about it.

Comment by randomea on Quotes about the long reflection · 2020-03-06T04:18:09.793Z · score: 5 (4 votes) · EA · GW

The GPI Agenda mentions "Greg Lewis, The not-so-Long Reflection?, 2018", though as of six months ago that piece was in draft form and not publicly available.

Comment by randomea on Thoughts on electoral reform · 2020-02-19T06:43:40.120Z · score: 15 (5 votes) · EA · GW

With respect to the necessity of a constitutional amendment, I agree with you on presidential elections but respectfully disagree as to congressional elections.

For presidential elections, the proposal with the most traction is the National Popular Vote Interstate Compact, which requires compacting states to give their electoral votes to the presidential ticket with a plurality of votes nationwide but only takes effect after states collectively possessing a majority of all electoral votes join the compact. Proponents argue that it is constitutional (with many believing it can be done without congressional consent), while opponents say that it is unconstitutional and in any case would require congressional consent. See pages 21-30 of this Congressional Research Service report for a summary of the legal issues. Regardless of which side has the better argument, it's unlikely that an interstate compact would be used to adopt instant runoff voting or approval voting for presidential elections because i) absent a law from Congress, it would be up to non-compacting states whether to switch from plurality voting in their own state (which could mean voters in some states would be limited to choosing one ticket) and ii) it is questionable whether Congress has the power to require non-compacting states to switch (though see pages 16-17 of this article arguing that it does).

As for congressional elections, it's worth noting that the U.S. Constitution does not require plurality voting and does not even require single-member districts. Indeed, ranked choice voting was used in Maine for congressional elections in 2018, and a federal judge rejected the argument that it is unconstitutional due to being contrary to historical practice. And while single-member districts have been used uniformly for nearly two centuries, they were not the only method in use at the founding, and courts tend to give special weight to founding-era practice (see, e.g., Evenwel v. Abbott for an example related to elections), which makes me think that FairVote's single transferable vote proposal is on solid constitutional footing.

Comment by randomea on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T17:23:28.387Z · score: 5 (3 votes) · EA · GW

The 80,000 Hours career review on UK commercial law finds that "while almost 10% of the Members of Parliament are lawyers, only around 0.6% have any background in high-end commercial law." I have been unable to find any similar analysis for the US. Do you know of any?

Comment by randomea on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-23T00:12:19.230Z · score: 10 (8 votes) · EA · GW

This makes me feel more strongly that there should be a separate career advice organization focused on near-term causes. (See here for my original comment proposing this idea.)

A near term career advice organization could do the following:

  • Write in-depth problem profiles on causes that could be considered to be among the most pressing from a near-term perspective but that are not considered to be among the most pressing from a long-term perspective (e.g. U.S. criminal justice reform, developing-country mental health, policy approaches to global poverty, food innovation approaches to animal suffering, biomedical research focused on aging)

  • Write in-depth career reviews of careers that could be considered to be among the highest impact from a near-term perspective but that are not considered to be among the highest impact from a long-term perspective (e.g. careers that correspond with the problems listed in the previous bullet point, specific options in the global poverty space, specific options in the animal suffering space)

  • Produce a podcast that focuses on interviewing people working on issues that could be considered to be among the most pressing from a near-term perspective but that are not considered to be among the most pressing from a long-term perspective

  • Become deeply familiar with the global poverty space, the animal suffering space, and other cause areas that are much more likely to be prioritized by near-term people, and form close connections to organizations working in such cause areas

  • Provide job postings, career coaching, and referrals based on the information gained through the previous bullet point

I think the proposed organization would actually complement 80,000 Hours by expanding the number of cause areas for which there's in-depth career advice and coaching; the two organizations could even establish a partnership where they refer people to each other as appropriate.

(As noted in my original comment, I think it's better to have a separate organization do this since a long-term focused organization understandably wants to focus its efforts on causes that are effective from its perspective.)

This approach could have various benefits including:

  • directly increasing impact by providing better advice to individual EAs who are unable to contribute to causes that are considered to be among the most pressing from a long-term perspective

  • benefiting the long-term space by keeping individuals who have the potential to contribute to the long-term space involved with EA while they gain more skills and experience

  • benefiting the long-term space by increasing the number of people who are able to benefit from EA career advice and thus the number of people who will refer others to 80,000 Hours (directly or through this proposed organization)

  • benefiting the long-term space through the various benefits of worldview diversification (learning from feedback loops, community image, option value)

  • benefiting individual EAs by helping them find a more fulfilling career (their utility counts too!)

Comment by randomea on EA needs a cause prioritization journal · 2018-09-12T22:59:35.273Z · score: 2 (4 votes) · EA · GW

Relevant literature:

Comment by randomea on EA Forum 2.0 Initial Announcement · 2018-09-09T08:06:45.193Z · score: 1 (1 votes) · EA · GW

Would it be possible to introduce a coauthoring feature? Doing so would allow both authors to be notified of new comments. The karma could be split if there are concerns that people would free ride.

Comment by randomea on Open Thread #41 · 2018-09-07T02:41:34.297Z · score: 2 (4 votes) · EA · GW

[Criminal Justice Reform Donation Recommendations]

I emailed Chloe Cockburn (the Criminal Justice Reform Program Officer for the Open Philanthropy Project) asking what she would recommend to small donors. She told me she recommends Real Justice PAC. Since contributions of $200 or more to PACs are disclosed to the FEC, I asked her what she would recommend to a donor who wants to stay anonymous (and whether her recommendation would be different for someone who could donate significantly more to a 501(c)(3) than a 501(c)(4) for tax reasons). She told me that she would recommend 501(c)(4)s for all donors because it's much harder for 501(c)(4)s to raise money and she specifically recommended the following 501(c)(4)s: Color of Change, Texas Organizing Project, New Virginia Majority, Faith in Action, and People's Action.

I asked for and received her permission to post the above.

(I edited this to add a subject in brackets at the top.)

Comment by randomea on Public Opinion about Existential Risk · 2018-08-26T23:23:08.392Z · score: 1 (1 votes) · EA · GW

Do you know if this platform allows participants to go back? (I assumed it did, which is why I thought a separate study would be necessary.)

Comment by randomea on Public Opinion about Existential Risk · 2018-08-26T17:16:06.096Z · score: 1 (1 votes) · EA · GW

Do you think that asking the same respondents about 50 years, 100 years, and 500 years caused them to scale their answers so that they would be reasonable in relation to each other? Put another way, do you think you would have gotten significantly different answers if you had asked 395 people about 50 years, 395 people about 100 years, and 395 people about 500 years (cf. scope insensitivity)?

Comment by randomea on EA Forum 2.0 Initial Announcement · 2018-08-25T03:22:02.190Z · score: 2 (2 votes) · EA · GW

If you add a tag feature, can you make it so that authors can add tags to posts imported from EA Forum 1.0? I think it'd be great if someone interested in animal suffering could easily see all the EA Forum posts related to animal suffering.

And would you be willing to add a feature that allows you to tag individuals? (For this to work, you'd have to provide notifications in a more prominent way than the current 'Messages' system.)

Comment by randomea on EA Facebook Group Greatest Hits: Top 50 Posts by Total Reactions · 2018-08-22T17:00:12.240Z · score: 0 (0 votes) · EA · GW

Thank you so much for doing this! Is the total number of reactions just the number of likes and comments or does it also include shares? And if you happen to have more than the top 50 (as you hinted at here), would you be willing to post just the links in a Google doc?

Comment by randomea on CEA on community building, representativeness, and the EA Summit · 2018-08-16T02:29:59.728Z · score: 5 (5 votes) · EA · GW

If you do an action that does not look cause impartial (say EA Funds mostly grants money to far future causes) then just acknowledge this and say that you have noted it and explain why it happened.

Do you mean EA Grants? The allocation of EA Funds across cause areas is outside of CEA's control since there's a separate fund for each cause area.

Comment by randomea on EA Funds - An update from CEA · 2018-08-09T13:52:58.038Z · score: 1 (1 votes) · EA · GW

Do you know if it's just a fund for other large donors? It seems unusual to require small donors to send an email in order to donate.

If the fund is open to small donors, I hope CEA will consider mentioning it on the EA Funds website and the GWWC website.

Comment by randomea on Students for High-Impact Charity: 2018 Update · 2018-08-09T13:43:56.439Z · score: 2 (2 votes) · EA · GW

Could you put together a handbook and/or video that could be sent to all trainees, or is it critical that there be interaction between the trainer and trainee?

Comment by randomea on EA Funds - An update from CEA · 2018-08-09T04:55:09.921Z · score: 1 (1 votes) · EA · GW

Would it be a good idea to create an EA Fund for U.S. criminal justice? It could potentially be run by the Open Phil program officer for U.S. criminal justice since it seems like a cause area where Open Phil is unlikely to fund everything the program officer thinks should be funded, which makes it more likely that extra funding can be spent effectively.

This could help attract more people into effective altruism. However, that could be bad if you think those people are less likely to fully embrace the ideas of effective altruism and thus would dilute the community.

Comment by randomea on Students for High-Impact Charity: 2018 Update · 2018-08-09T04:35:15.284Z · score: 7 (7 votes) · EA · GW

Do you think the general knowledge of EA that a typical EA has is sufficient to run a SHIC workshop? It seems to me that having local groups and university groups give EA lectures at high schools on career day is potentially both high impact and a way for those groups to do direct work.

Comment by randomea on Students for High-Impact Charity: 2018 Update · 2018-08-09T04:32:59.945Z · score: 4 (4 votes) · EA · GW

Two significant limitations are high rates of respondent attrition and the likely influence of social desirability bias and/or demand effects, as it was likely clear (post-workshop) which were the desired responses.

It seems to me one indication of social desirability bias and/or selective attrition is that there is a nearly half-point shift in the average response to "I currently eat less meat than I used to for ethical reasons." On the other hand, it's possible students interpreted it as "I currently plan on eating less meat than I used to for ethical reasons."

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-06T16:39:59.427Z · score: 1 (1 votes) · EA · GW

A. Does that mean that, under a symmetric person-affecting Epicurean view, it's not bad if a person brings into existence someone who's highly likely to have a life filled with extreme suffering? Do you find this plausible?

B. Does that also mean that, under a symmetric person-affecting Epicurean view, there's no benefit from allowing a person who is currently enduring extreme suffering to terminate their life? Do you find this plausible?

C. Let's say a person holds the following views:

  1. It is good to increase the well-being of currently existing people and to decrease the suffering of currently existing people.

  2. It is good to increase the well-being of future people who will necessarily exist and to decrease the suffering of future people who will necessarily exist. (I'm using necessarily exist in a broad sense that sets aside the non-identity problem.)

  3. It's neither good nor bad to cause a person with a net positive life to come into existence or to cause a currently existing person who would live net positively for the rest of their life to stay alive.

  4. It's bad to cause a person who would live a net negative life to come into existence and to cause a currently existing person who would live net negatively for the rest of their life to stay alive.

Does this qualify as an Epicurean view? If not, is there a name for such a view?

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-06T11:44:30.882Z · score: 2 (2 votes) · EA · GW

I actually began to wonder this myself after posting. Specifically, it seems like an Epicurean could think s-risks are the most important cause. Hopefully Michael Plant will be able to answer your question. (Maybe EA Forum 2.0 should include a tagging feature.)

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-06T09:48:35.720Z · score: 2 (2 votes) · EA · GW

I'm aware of this and am also planning on addressing it. One of the reasons that people associate the long-term future with x-risk reduction is that the major EA organizations that have embraced the long-term future thesis (80,000 Hours, Open Phil, etc.) all consider biosecurity to be important. If your primary focus is on s-risks, you would not put much effort into biorisk reduction. (See here and here.)

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-05T11:35:58.739Z · score: 5 (5 votes) · EA · GW

I plan on posting the standalone post later today. This is one of the issues that I will do a better job of addressing (as well as stating when an argument applies only to a subset of long-term future/existential risk causes).

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-04T20:10:38.814Z · score: 4 (4 votes) · EA · GW

I'll consider expanding it and converting it into its own post. Out of curiosity, to what extent does the Everyday Utilitarian article still reflect your views on the subject?

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-04T18:12:11.886Z · score: 84 (58 votes) · EA · GW

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near-term work is more important, while the latter five are reasons you might work on near-term causes even if you think long-term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view were merely a standard person-affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value reasoning to cases where the probability is small. While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long-term future is valuable.

  4. You might think it's hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)

  5. You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it's better to focus on that for now.

  6. You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if the impact is smaller than the expected impact of spending that money on long term future causes.

  7. You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.

  8. You might feel like you are a bad fit for long-term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding-constrained (making it hard to contribute financially).

  9. You might feel a spiritual need to work on near term causes. Relatedly, you might feel like you're more likely to do direct work long term if you can feel motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income because you think it's more likely to result in you giving long term).

  10. As you noted, you might think there are public image or recruitment benefits to near term work.

Note: I do not necessarily agree with any of the above.

Comment by randomea on Problems with EA representativeness and how to solve it · 2018-08-04T04:27:39.240Z · score: 8 (8 votes) · EA · GW

Would it make sense to have a separate entity for some aspects of global poverty and animal suffering? This is already the case for charity evaluation (GiveWell, ACE). It's also more or less already the case for EA Funds and could easily be extended to EA Grants (with a separate donation pool for each cause area). I can also envision a new career advice organization that provides people interested in global poverty and animal suffering with coaching by people very familiar with and experienced in those areas. (80,000 Hours has problem profiles, career reviews, and interviews related to both of those areas, but their coaching seems to focus primarily on other areas.)

To be clear, I'm not proposing that EA outreach (as opposed to cause-specific outreach) be formally split between different organizations (since I think that's likely to be harmful). I'm also not proposing that EA infrastructure (the EA Forum, EA Global, GWWC etc.) be split up (since there's less of a tradeoff between supporting cause areas for general infrastructure).

But I do think that when there is a significant tradeoff (due to the function being resource-intensive), it would be good for there to be a separate entity so that those who prioritize different cause areas can also have that function for their preferred area. (It seems to me it would be difficult to do this within a single organization since that organization would understandably want to prioritize the cause area(s) it felt were most effective.)

Comment by randomea on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-02T04:32:51.171Z · score: 2 (2 votes) · EA · GW

I think the grants just announced confirm your view that Nick Beckstead can typically convince Open Phil to fund the grantees that he thinks are good (though I also agree with Jeff Kaufman that this may not be true for other Open Phil program officers). To the extent that EA Funds are premised on deferring to the judgment of someone who works full time on identifying giving opportunities, the best alternative to an Open Phil employee may be someone who works on EA Grants.

Here's one way EA Funds could be used to support EA Grants. CEA could choose multiple grant evaluators for each cause area (AI safety, biosecurity, community building, cause prioritization) and give each evaluator for a cause area the same amount of money. The evaluators could then choose which applicants to support; applicants supported by multiple evaluators would receive money from each of them (perhaps equally or perhaps weighted by the amount each one recommended). Donors would be able to see which applicants each evaluator had funded in the past and donate directly to the fund of a specific evaluator. If CEA commits to giving each evaluator for a cause area the same amount of money, then donors can be confident that their donations cause evaluators they trust more to have more money (although it'd be harder for them to be confident that they are increasing the overall amount of money spent on a cause area).
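
To make the proposed mechanics concrete, here is a minimal sketch of the allocation, using the "weighted by the amount each one recommended" option. The evaluator names, dollar figures, and pro-rata splitting rule are illustrative assumptions on my part, not a description of any actual CEA process.

```python
# Hypothetical sketch of the allocation described above (all names and
# numbers are made up). Each evaluator's budget is an equal CEA allocation
# plus whatever donors gave directly to that evaluator's fund; the budget
# is then split across the evaluator's chosen applicants in proportion to
# the amounts they recommended, so an applicant backed by several
# evaluators receives money from each of them.

from collections import defaultdict

CEA_ALLOCATION_PER_EVALUATOR = 100_000

donor_contributions = {"Evaluator 1": 30_000, "Evaluator 2": 10_000}

# evaluator -> {applicant: recommended amount}
recommendations = {
    "Evaluator 1": {"Applicant A": 50_000, "Applicant B": 50_000},
    "Evaluator 2": {"Applicant B": 80_000, "Applicant C": 20_000},
}

grants = defaultdict(float)
for evaluator, recs in recommendations.items():
    budget = CEA_ALLOCATION_PER_EVALUATOR + donor_contributions.get(evaluator, 0)
    total_recommended = sum(recs.values())
    for applicant, amount in recs.items():
        grants[applicant] += budget * amount / total_recommended

for applicant, amount in sorted(grants.items()):
    print(f"{applicant}: ${amount:,.0f}")
```

With these illustrative numbers, Applicant B receives money from both evaluators, and a donation to a specific evaluator's fund increases that evaluator's budget on top of the equal CEA allocation.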

Comment by randomea on EA Forum 2.0 Initial Announcement · 2018-08-01T23:51:19.535Z · score: 0 (0 votes) · EA · GW

Would it be possible for you to add a minimum font size requirement? Posts like this one are hard for me to read.

Comment by randomea on New research on effective climate charities · 2018-07-29T22:39:58.939Z · score: 5 (5 votes) · EA · GW

Have you considered reaching out to Giving What We Can and asking them to add a notice at the top of their Cool Earth page informing donors that they may be interested in your report? They already have a notice that says the information may be outdated, but a donor who reads that may think that it's still the best available research unless informed of newer research.