Posts

Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism 2014-08-18T04:30:03.000Z · score: 2 (2 votes)
A relatively atheoretical perspective on astronomical waste 2014-08-06T00:55:16.000Z · score: 7 (2 votes)
Will we eventually be able to colonize other stars? Notes from a preliminary review 2014-06-22T18:19:50.000Z · score: 6 (5 votes)
Improving disaster shelters to increase the chances of recovery from a global catastrophe 2014-02-19T22:17:03.000Z · score: 11 (8 votes)
A proposed adjustment to the astronomical waste argument 2013-05-27T04:00:57.000Z · score: 8 (5 votes)

Comments

Comment by nick_beckstead on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-02T18:24:06.355Z · score: 18 (18 votes) · EA · GW

Hi Evan, let me address some of the topics you’ve raised in turn.

Regarding original intentions and new information obtained:

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, three things changed: EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than I expected, and Open Phil funding to established organizations grew more than I expected.
  • Together, these three factors meant there were fewer grants for me to make that couldn’t be made through other channels than I had anticipated.
  • The first two factors contributed to a desire to focus primarily on established organizations.
  • The third cuts the other way, but I still see the balance of considerations favoring a focus on established organizations.

Regarding my/CEA’s communications about the purposes of the funds: It seems you and some others have gotten the impression that the EA Funds I manage were originally intended to focus on emerging organizations over established organizations. I don’t think this is communicated in the main places I would expect it to be communicated if the fund were definitely focused on emerging organizations. For example, the description of the Long-Term Future Fund reads:

“This fund will support organizations that work on improving long-term outcomes for humanity. Grants will likely go to organizations that seek to reduce global catastrophic risks, especially those relating to advanced artificial intelligence.”

And “What sorts of interventions or organizations might this fund support?” reads:

"In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence."

The new grants also strike me as a natural continuation of the “grant history” section. Based on the above, I'd have thought the more natural interpretation was, "You are giving money for Nick Beckstead to regrant at his discretion to organizations in the EA/GCR space."

The main piece of evidence I see in your write-up that these funds were billed as focused on emerging organizations is this statement under “Why might you choose not to donate to this fund?”:

“First, donors who prefer to support established organizations. The fund manager has a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.”

I understand how this is confusing, and I regret the way that we worded it. I can see that this could give someone the impression that the fund would focus primarily on emerging organizations, and that isn’t what I intended to communicate.

What I wanted to communicate was that I might fund many emerging organizations, if that seemed like the best idea, and I wanted to warn donors about the risks involved with funding emerging organizations. Indeed, two early grants from these funds were to emerging orgs, BERI and EA Sweden, so I think it was good that some warning was included. That said, even at the time this was written, I think “likely” was too strong a word, and “may” would have been more appropriate. It’s just an error that I failed to catch. In a panel discussion at EA Global in 2017, my answer to a related question about funding new vs. established orgs was more tentative, and it better reflects what I think the page should have said.

There are also a couple of other statements like this on the page that could have been misinterpreted in similar ways, and I regret them as well.

Comment by nick_beckstead on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T19:53:59.468Z · score: 42 (42 votes) · EA · GW

Thanks for sharing your concerns, Evan. It sounds like your core concerns relate to (i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds. Some thoughts and comments on these points:

  • I recently recommended a series of grants that will use up all EA Funds under my discretion. This became a larger priority in the last few months due to an influx of cryptocurrency donations. I expect a public announcement of the details after all grant logistics have been completed.

  • A major reason I haven’t made many grants is that most of the grants that I wanted to make could be made through Open Phil, and I’ve focused my attention on my Open Phil grantmaking because the amount of funding available is larger.

  • I am hopeful that EA Grants and BERI will provide funding to new projects in these areas. CEA and BERI strike me as likely to make good choices about funding new projects in these areas, and I think this makes sense as a division of labor. EA Grants isn’t currently open for public applications, but I’m hopeful they’ll have a public funding round soon. BERI issued a request for proposals last month. As these programs mature, I expect that most of what is perceived as a funding gap in these areas will be driven by taste/disagreement with these grantmakers rather than a lack of funding.

For now, I don’t have any plans to change the focus or frequency of my grantmaking with these funds from what was indicated in my April 2018 update.

I think it’s probably true that a fund manager who has more time to manage these funds would be preferable, provided we found someone with suitable qualifications. This is a possibility that’s under consideration right now, but progress toward it will depend on the availability of a suitable manager and further thinking about how to allocate attention to this issue relative to other priorities.

Comment by nick_beckstead on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-27T18:49:58.863Z · score: 2 (2 votes) · EA · GW

In addition to, 35 days total. (I work at Open Phil.)

Comment by nick_beckstead on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-27T17:47:57.691Z · score: 0 (0 votes) · EA · GW

I don't mean to make a claim re: averages, just relaying personal experience.

Comment by nick_beckstead on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:33:12.720Z · score: 2 (2 votes) · EA · GW

I am a Program Officer at Open Philanthropy who joined as a Research Analyst about 3 years ago.

The prior two places I lived were New Brunswick, NJ and Oxford, UK. I live in a house with a few friends. It is a 25-30 minute commute door-to-door via BART. My rent and monthly expenses are comparable to what I had in Oxford but noticeably larger than what I had in New Brunswick. I got pay increases when I moved to Open Phil, and additional raises over time. I’m comfortable on my current salary and could afford to get a one-bedroom apartment if I wanted, but I’m happy where I am.

Overall, I would say that it was an easy adjustment.

Comment by nick_beckstead on How important is marginal earning to give? · 2015-05-20T00:45:47.970Z · score: 3 (3 votes) · EA · GW

To avoid confusing people: my own annual contributions to charity are modest.

Comment by nick_beckstead on Should we launch a podcast about high-impact projects and people? · 2014-12-01T22:02:25.222Z · score: 2 (2 votes) · EA · GW

You might consider having a look at http://www.flamingswordofjustice.com/ . It's a podcast of interviews with activists of various types (pretty left-wing). I've listened to a few episodes and found it interesting. It was the closest thing I could think of that already exists.

Comment by nick_beckstead on Open Thread · 2014-09-21T16:25:27.749Z · score: 1 (1 votes) · EA · GW

I would love to see some action in this space. I think there is a natural harmony between what is best in Christianity--especially regarding helping the global poor--and effective altruism.

One person to consider speaking with is Charlie Camosy, who has worked with Peter Singer in the past (see info here). A couple other people to consider talking with would be Catriona Mackay and Alex Foster.

Comment by nick_beckstead on Cosmopolitanism · 2014-09-11T16:18:26.376Z · score: 8 (8 votes) · EA · GW

One attractive feature of cosmopolitanism, in contrast with impartial benevolence, is that impartial benevolence is often associated with denying that loved ones and family members are worthy targets of special concern, whereas I don't think cosmopolitanism carries such associations. Another is that I think a larger fraction of educated people already have some familiarity with cosmopolitanism.

Comment by nick_beckstead on Good policy ideas that won’t happen (yet) · 2014-09-11T16:13:03.668Z · score: 2 (2 votes) · EA · GW

Niel, thanks for writing up this post. I think it's really worthwhile for us to discuss challenges that we encounter while working on EA projects with the community.

I noticed that the link in this sentence is broken:

Creating more disaster shelters to protect against global catastrophic risks (too weird)

Comment by nick_beckstead on Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism · 2014-08-19T14:50:00.000Z · score: 0 (0 votes) · EA · GW

I think that comment is mostly Holden being modest.

Comment by nick_beckstead on Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism · 2014-08-18T20:00:00.000Z · score: 0 (0 votes) · EA · GW

I agree with all of that, though maybe I'm a bit more queasy about numbers >100.

Comment by nick_beckstead on Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the "long-run" perspective on effective altruism · 2014-08-18T13:06:00.000Z · score: 1 (1 votes) · EA · GW

After thinking about this later, I noticed that one of my claims was wrong. I said:

> Though I’m not particularly excited about refuges, they might be a good test case. I think that if you had this 5N view, refuges would be obviously dumb but if you had the view that I defended in my dissertation then refuges would be interesting from a conceptual perspective.

But then I ran some numbers and this no longer seemed true. If you assumed a population of 10B, an N of 5, a cost of your refuge of $1B, that your risk of doom was 1%, and that your refuge could cut out a thousandth of that 1%, you get a cost per life-equivalent saved of $2000 (with much more favorable figures if you assume higher risk and/or higher refuge effectiveness). So a back-of-the-envelope calculation would suggest that, contrary to what I said, refuges would not be obviously dumb if you had the 5N view. (Link to back-of-envelope calc: https://docs.google.com/spreadsheets/d/1RRlj1sZpPJ8hr-KvMQy5R8NayA3a58EhLODXPu4NgRo/edit#gid=1176340950 .)
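
To make the arithmetic explicit, here is a minimal sketch of that back-of-the-envelope calculation in Python, using only the figures stated above (the linked spreadsheet may differ in its details):

```python
# Back-of-the-envelope cost-effectiveness of a refuge under the "5N" view,
# using the illustrative figures from the paragraph above.

population = 10e9             # assumed current population
n_multiplier = 5              # "5N" view: losing the future ~ losing 5x the current population
refuge_cost = 1e9             # $1B refuge
p_doom = 0.01                 # 1% risk of doom
risk_reduction_share = 1e-3   # the refuge cuts out a thousandth of that 1%

life_equivalents_at_stake = n_multiplier * population                             # 5e10
expected_lives_saved = life_equivalents_at_stake * p_doom * risk_reduction_share  # 5e5
cost_per_life_equivalent = refuge_cost / expected_lives_saved                     # $2,000

print(f"Expected life-equivalents saved: {expected_lives_saved:,.0f}")
print(f"Cost per life-equivalent saved: ${cost_per_life_equivalent:,.0f}")
```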

My best current guess is that building refuges wouldn't be this effective at reducing existential risk, but that was after I looked into the issue a bit. I was probably wrong to think that Holden's 5N heuristic would have ruled out refuges ex ante. (Link to other discussion of refuges: /ea/5r/improving_disaster_shelters_to_increase_the/ .)

Comment by nick_beckstead on A relatively atheoretical perspective on astronomical waste · 2014-08-08T14:01:00.000Z · score: 0 (0 votes) · EA · GW

I think it's an open question whether "even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future." But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of "long-term technological/cultural path dependence/lock in," rather than the GCR category (though that wasn't the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.

Comment by nick_beckstead on A relatively atheoretical perspective on astronomical waste · 2014-08-08T12:35:00.000Z · score: 0 (0 votes) · EA · GW

Re 1, yes, it is philosophically controversial, but it also speaks to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it is that it's doing what separability does in my dissertation, while noticing that the astronomical waste argument can be run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.

Re 2, yes it pushes in an unbounded utility function direction, and that's relevant if your preferred resolution of Pascal's Mugging is to have a bounded utility function. But this is also a problem for standard presentations of the astronomical waste argument. As it happens, I think you can run stuff like astronomical waste with bounded utility functions. Matt Wage has some nice stuff about this in his senior thesis, and I think Carl Shulman has a forthcoming post which makes some similar points. I think astronomical waste can be defended from more perspectives than it has been in the past, and it's good to show that. This post is part of that project.

Re 3, I'd frame it this way: "We use this all the time and it's great in ordinary situations. I'm doing the natural extrapolation to strange situations." Yes, it might break down in weird situations, but it's the extrapolation I'd put the most weight on.

Comment by nick_beckstead on Will we eventually be able to colonize other stars? Notes from a preliminary review · 2014-06-26T14:28:00.000Z · score: 2 (2 votes) · EA · GW

I haven't done a calculation on that, but I agree it's important to consider. Regarding your calculation, a few of these factors are non-independent in a way that favors space colonization. Specifically:

  • Speeding up and slowing down are basically the same problem, so you should treat them as one issue.
  • Fitting everything you need into the spaceship and being able to build a civilization when you arrive are very closely related.
  • Having your equipment survive the voyage and being able to build a civilization in a hostile environment are closely related: I would expect that if you can build a civilization when you get there, you can also keep your equipment functioning during the voyage.
  • Whether people later decide to attempt this is related both to having the capacity to overcome the above problems and to the absence of a presently unknown fatal obstacle.

I also think they're positively related in a more subtle way. There are people who know more about this than you or me who are saying that all these obstacles can be overcome. Conditional on one of these obstacles being possible to overcome (as they say), I have more confidence in their judgment, which makes me more confident that the other obstacles can be overcome.

Re: space cities, I haven't looked into it much personally. Much of the discussion seems to assume building your civilization on a planet. My intuition is that space cities are probably easier.

Comment by nick_beckstead on EA transparency update: Sep 13 - Mar 14 · 2014-03-15T18:20:00.000Z · score: 0 (0 votes) · EA · GW

Cool, thanks for the update.

Comment by Nick_Beckstead on [deleted post] 2014-01-24T03:11:00.000Z

This post appears to be incomplete.

Comment by nick_beckstead on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-10T15:49:00.000Z · score: 2 (2 votes) · EA · GW

I agree that a choice of discount rate is fundamentally important in this context. If you did the standard thing of choosing a constant discount rate (e.g. 5%) and used that for all downstream benefits, even ones millions of years into the future, that would make helping future generations substantially less important. By emphasizing the distinction between pure discounting and discounting as a computational convenience, I did not mean to suggest that views about how to discount future benefits were unimportant.

I was distinguishing between two possible motives for discounting, a distinction that I think clarifies what the purpose of discounting should be. The two motives are hard to disentangle because they overlap in practice, but I think they diverge when it comes to distant future generations. I can try to explain more if the distinction I intend isn't clear. It's the difference between "benefits now are better just because that's what people prefer" and "benefits now are better because they cause compounding growth, future people will be richer, the future is uncertain, etc." If you go for the second answer, the conclusion isn't something like "use a 5% discount rate for all benefits, even ones a million years out," but rather "use a discount rate that accurately reflects your beliefs about growth, uncertainty, the marginal value of consumption, etc. in the distant future." For the reasons in the Hanson and Weitzman pieces I linked to, I don't expect a constant exponential rate to do that. Briefly, constant exponential growth over million-year timescales is hard (but not impossible) to square with physics-imposed constraints on the resources we could have access to. And, as Weitzman argues, I believe uncertainty about future growth results in a form of discounting that looks more hyperbolic and less exponential in the long run. These differences are not very consequential over the next 50 years or so, but I believe they are very consequential when you consider the entire possible future of our species.
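
To make the Weitzman point concrete, here is a minimal sketch (with illustrative rates and probabilities I've chosen, not figures from Weitzman's paper): if you are uncertain which constant discount rate is correct, the certainty-equivalent discount factor is the probability-weighted average of the possible factors, and the implied effective rate declines toward the lowest candidate rate as the horizon grows.

```python
import numpy as np

# Illustrative Weitzman-style calculation: uncertainty over which constant
# discount rate is correct yields an effective rate that declines with horizon.
rates = np.array([0.01, 0.03, 0.05])   # candidate constant annual discount rates (assumed)
probs = np.array([1/3, 1/3, 1/3])      # subjective probability of each (assumed)

for t in [10, 100, 1_000, 10_000]:
    # Certainty-equivalent discount factor: probability-weighted average factor.
    factor = np.sum(probs * np.exp(-rates * t))
    effective_rate = -np.log(factor) / t   # constant rate that would give the same factor
    print(f"t = {t:>6} years: effective annual rate = {effective_rate:.4f}")

# The effective rate falls from near the mean rate (~3%) at short horizons toward
# the lowest rate (1%) at long horizons: discounting that looks more hyperbolic
# than exponential in the long run.
```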

That last sentence would take more explaining than I have done in any work I've publicly written up, and it's something I would like to get to in the future. I haven't run into many people for whom this was the major sticking point for whether they accept the long-run perspective I defend. But if this is your sticking point and you think it would be for many economists, do let me know and I'll consider prioritizing a better explanation.

Comment by nick_beckstead on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-10T03:28:00.000Z · score: 1 (1 votes) · EA · GW

I like to distinguish between pure discounting and discounting as a computational convenience. By "pure discounting," I mean caring less about the very same benefit, delivered with certainty in the future, than about a benefit you can get now. I see this as a values question, and my preference is to have a 0% pure discount rate. One might discount as a computational convenience to adjust for returns on investment from having benefits arrive earlier, uncertainty about the benefits arriving, changes in future wealth, or other reasons.

When you are deciding how to discount, I find it easiest to think about the problem without any discounting of any kind (doing something like a classical utilitarian analysis) and explicitly think about the empirical effects. Then if you want to use discounting as a computational convenience, you can try to choose one that gives similar results to thinking about the problem without any kind of discounting.

Regarding the hypothetical richer kids vs. current kids, I agree that one should make adjustments for uncertainty about whether there will be future kids, diminishing marginal utility of consumption, and beliefs about future growth. I don't think this is well-captured by a constant exponential discount rate into the distant future. There are a lot of reasons I think this. Two I can quickly link to are here (http://www.overcomingbias.com/2009/09/limits-to-growth.html) and here (http://www.sciencedirect.com/science/article/pii/S009506969891052X).

I might be able to respond better if you told me how you think an appropriate treatment of discounting might affect the conclusions that Carl and I drew.

Comment by nick_beckstead on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-10T00:33:00.000Z · score: 1 (2 votes) · EA · GW

Yes, discount rates are an important thing to discuss here. I briefly discuss them on pp. 63-64 of my dissertation (http://www.nickbeckstead.com/research). I endorse using discount rates on a case-by-case basis as a convenience for calculation, but count harms and benefits as, in themselves and apart from their consequences, equally important whenever they occur.

For further articulation of similar perspectives I recommend:

Cowen, T. and Parfit, D. (1992). "Against the Social Discount Rate." In Justice Between Age Groups and Generations, pages 144–161. Yale University Press, New Haven.

and

http://rationalaltruist.com/2013/02/22/four-flavors-of-time-discounting-i-endorse-and-one-i-do-not/

Comment by nick_beckstead on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-08T05:17:00.000Z · score: 0 (0 votes) · EA · GW

I broadly agree with Carl’s comment, though I have less of an opinion about the specifics of how you have done your learning grants. Part of your question may be, “Why would you do this if we’re already doing it?” I believe that strategic cause selection is an enormous issue and we have something to contribute. In this scenario, we certainly would want to work with you and like-minded organizations.

Comment by nick_beckstead on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-08T02:36:00.000Z · score: 1 (1 votes) · EA · GW

We think many non-human animals, artificial intelligence programs, and extraterrestrial species could all be of moral concern, to degrees varying based on their particular characteristics but without species membership as such being essential. "Humanity" is used interchangeably in the text with "civilization," a civilization for which humanity is currently in the driver's seat.