Posts

State Space of X-Risk Trajectories 2020-02-06T13:37:50.405Z · score: 23 (20 votes)
Four components of strategy research 2020-01-30T19:08:37.244Z · score: 16 (12 votes)
The ‘far future’ is not just the far future 2020-01-16T15:30:47.164Z · score: 30 (19 votes)
AI and X-risk Strategy Unconference at EA Hotel in November 2019-10-24T10:52:46.521Z · score: 5 (9 votes)
A case for strategy research: what it is and why we need more of it 2019-06-20T20:18:09.025Z · score: 47 (25 votes)

Comments

Comment by david_kristoffersson on State Space of X-Risk Trajectories · 2020-02-10T23:45:13.149Z · score: 1 (1 votes) · EA · GW

Happy to see you found it useful, Adam! Yes, general technological development corresponding to scaling of the vector is exactly the kind of intuition it's meant to carry.

Comment by david_kristoffersson on State Space of X-Risk Trajectories · 2020-02-09T13:45:37.823Z · score: 2 (2 votes) · EA · GW

But beyond the trajectories (and maybe specific distances), are you planning on representing the other elements you mention? Like the uncertainty or the speed along trajectories?

Thanks for your comment. Yes; the other elements, like uncertainty, would definitely be part of further work on the trajectories model.

Comment by david_kristoffersson on Differential progress / intellectual progress / technological development · 2020-02-08T10:13:22.440Z · score: 3 (3 votes) · EA · GW

I think that if I could unilaterally and definitively decide on the terms, I'd go with "differential technological development" (so keep that one the same), "differential intellectual development", and "differential development". I.e., I'd skip the word "progress", because we're really talking about something more like "lasting changes", without the positive connotations.

I agree, "development" seems like a better word for reducing ambiguity. But as you say, this is a summary post, so it might not be the best place to suggest switching up terms.

Here are two long-form alternatives to "differential progress"/"differential development": differential societal development, differential civilizational development.

Comment by David_Kristoffersson on [deleted post] 2020-01-31T11:46:23.569Z

Thanks Tobias, I think you make a really good point! You're definitely right that there are some in the cause area who don't think the technological transformation is likely.

I don't think you've established that the 'technological transformation' is essential.

What I wanted to say with this post is that it's essential to the views of a large majority in the cause area. The article isn't really meant to argue that it should be essential to people's views.

It's possible I'm wrong about the size of the majority; but this was definitely my impression.

You may believe that shaping AI / the technological transformation would offer far more leverage than other interventions, but some will disagree with that, which is a strong reason to not include this in the definition.

I personally believe in something like a mix of shaping technology with other interventions, such as the ones you mentioned ("moral circle expansion, improving international cooperation, improving political processes (e.g. trying to empower future people, voting reform, reducing polarisation)").

Comment by david_kristoffersson on EA Survey 2019 Series: Geographic Distribution of EAs · 2020-01-24T10:14:51.623Z · score: 4 (3 votes) · EA · GW

The long term future is especially popular among EAs living in Oxford, not surprising given the focus of the Global Priorities Institute on longtermism

Even more than that, The Future of Humanity Institute has been in Oxford since 2005!

Comment by david_kristoffersson on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-20T10:25:07.424Z · score: 1 (1 votes) · EA · GW

I'm not arguing "AI will definitely go well by default, so no one should work on it". I'm arguing "Longtermists currently overestimate the magnitude of AI risk".

Thanks for the clarification, Rohin!

I also agree overall with reallyeli.

Comment by david_kristoffersson on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-18T15:00:30.708Z · score: 4 (3 votes) · EA · GW

I'm sympathetic to many of the points, but I'm somewhat puzzled by the framing you chose in this newsletter.

Why AI risk might be solved without additional intervention from longtermists

This sends me the message that longtermists should care less about AI risk.

Though, the people in the "conversations" all support AI safety research. And, from Rohin's own words:

Overall, it feels like there's around 90% chance that AI would not cause x-risk without additional intervention by longtermists.

10% chance of existential risk from AI sounds like a problem of catastrophic proportions to me. It implies that we need many more resources spent on existential risk reduction. Though perhaps not strictly on technical AI safety. Perhaps more marginal resources should be directed to strategy-oriented research instead.

Comment by david_kristoffersson on The ‘far future’ is not just the far future · 2020-01-17T18:52:01.079Z · score: 1 (1 votes) · EA · GW

Good point: 'x-risk' is short, and 'reduction' is or should quickly become implicit. It will work well in many circumstances. For example, "I work with x-risk" works just as "I work with/in global poverty" does. Though some objections that occur to me in the moment are: "the cause of x-risk" feels clumsy, "letter, dash, and then a word" is an odd construct, and it's a bit negatively oriented.

Comment by david_kristoffersson on The ‘far future’ is not just the far future · 2020-01-17T11:45:28.199Z · score: 2 (2 votes) · EA · GW

Thank you for your thoughtful comment!

All work is future oriented

Indeed. You don't tend to employ the word 'future' or emphasize it for most work, though.

One alternative could be 'full future', signifying that it encompasses both the near and long term.

I think there should be space for new and more specific terms. 'Long term' has strengths, but it's overloaded with many meanings. 'Existential risk reduction' is specific but quite a mouthful; something shorter would be great. I'm working on another article where I will offer one new alternative.

Comment by david_kristoffersson on On AI Weapons · 2020-01-07T14:23:02.753Z · score: 1 (1 votes) · EA · GW

Excellent analysis, thank you! The issue definitely needs a more nuanced discussion. The increasing automation of weaponry (and other technology) won't be stopped globally and pervasively, so we should endeavor to shape how it is developed and applied in a more positive direction.

Comment by david_kristoffersson on A case for strategy research: what it is and why we need more of it · 2019-07-12T06:34:38.729Z · score: 2 (2 votes) · EA · GW

Indeed! We hope we can deliver that sooner rather than later. Though foundational research may need time to properly come to fruition.

Comment by david_kristoffersson on A case for strategy research: what it is and why we need more of it · 2019-07-11T14:14:29.852Z · score: 5 (4 votes) · EA · GW

Thanks for your detailed comment, Max!

Relative to my own intuitions, I feel like you underestimate the extent to which your "spine" ideally would be a back-and-forth between its different levels 

I agree, the "spine" glosses over a lot of the important dynamics.

I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you're planning to do strategy research going forward (or what methods you'd recommend to others).

Very good points. Both would indeed be highly valuable to the argument. As follow-up posts, I'm considering writing up (1) concrete projects in strategy research that seem valuable, and (2) a research agenda.

While I agree that we face substantial strategic uncertainty, I think I'm significantly less optimistic about the marginal tractability of strategy research than you seem to be.

Yeah, we're more optimistic than you here. I don't think it's possible to do useful strategy research that is completely "tactics and data free", but I do think there is highly valuable strategy research that can be grounded in a smaller amount of tactics and data gathering.
Which tactics research and data gathering is key? That is itself a strategic question, and I think we're currently just scratching the surface.

For example, while I tend to be excited about work that, say, immediately helps Open Phil to determine their funding allocation, I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as "how to best allocate resources between reducing various existential risks" in the abstract.

I agree that it could easily be a bad use of time for "external researchers" to do that. I'm somewhat optimistic, though, about these researchers examining sub-questions that would inform how to do the allocation.

Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company's strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I'm therefore reluctant to call them "research".

I think the idea cluster of existential risk reduction was formed through something I'd call "research". I think, in a certain way, we need more work of this type. But it also needs to be different in some important way in order to create new valuable knowledge. We hope to do work of this nature.

Comment by david_kristoffersson on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T17:52:19.604Z · score: 2 (2 votes) · EA · GW

Fiscal sponsorship can be very helpful for new groups!

Though regarding attorney fees:

Official nonprofit status can take many months to get in the US, and cost $10-30k of attorney fees.

Where are you getting this from? Attorney fees are on the order of $2-5k.

https://nonprofitelite.com/how-much-will-it-cost-to-get-501c3-tax-exempt-2/

CPA’s and attorneys who specialize in nonprofit organizations routinely charge $2,500–$5,000 for preparation of IRS Form 1023 applications for small organizations, and $6,000-$15,000 for more complex ventures. 

The following two firms used to post public quotes on the order of $2-3k in 2018 (it seems they've taken down the public quotes):

https://www.501c3.org/501c3-services/start-a-501c3-nonprofit/

https://www.harborcompliance.com/nonprofit

Starting a nonprofit does involve a non-trivial amount of other kinds of work, though.

Comment by david_kristoffersson on Long Term Future Fund: November grant decisions · 2018-12-10T23:51:13.741Z · score: 1 (1 votes) · EA · GW

Good points.

Perhaps funding organizations would like better ways of figuring out the risks of supporting new projects? I think valuable work could be done here.

One way to think about it* is to project the space along two axes: "project size" and "risks/establishedness".

*Justin Shovelain came up with that. (Justin and I were both on the strategy team of AISC 1.)