Posts

Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z · score: 29 (10 votes)
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z · score: 52 (20 votes)
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z · score: 60 (23 votes)
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z · score: 137 (73 votes)
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z · score: 41 (19 votes)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z · score: 19 (13 votes)
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z · score: 35 (29 votes)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z · score: 21 (11 votes)

Comments

Comment by habryka on EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th · 2019-09-13T20:20:36.379Z · score: 9 (6 votes) · EA · GW

Note: We (the Long Term Future Fund) will likely publish our writeups for the last round of grants within the next few days, which should give applicants some more data on what kind of grants we are likely to fund in the future.

Comment by habryka on Are we living at the most influential time in history? · 2019-09-07T22:02:17.371Z · score: 2 (3 votes) · EA · GW

At least in Will's model, we are among the earliest human generations, so I don't think this argument carries much weight, unless you posit a very rapidly diminishing prior (which so far nobody has done).

Comment by habryka on Are we living at the most influential time in history? · 2019-09-05T04:55:43.717Z · score: 12 (8 votes) · EA · GW
I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I tried engaging with the post for 2-3 hours and was working on a response, but ended up bouncing off, at least in part because the definition of hinginess didn't seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.

I also think a major issue with the current definition is that I don't know of any technology or ability to reliably pass on resources to future centuries. This introduces a natural strong discount factor into the system, which seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely failing, as illustrated in Robin Hanson's original "giving later" post).

Comment by habryka on Are we living at the most influential time in history? · 2019-09-05T04:41:43.967Z · score: 13 (6 votes) · EA · GW

While I agree with you that this is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely's suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

I do think that the focus on that question is the part of the post that I am least satisfied by, and the part that makes it hardest to engage with, since I don't really know why we care about the question "are we in the most influential time in history?". What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and because the answer to that question makes only a small difference to the total magnitude of the impact of any individual's actions).

Comment by habryka on Why were people skeptical about RAISE? · 2019-09-04T17:06:47.920Z · score: 21 (11 votes) · EA · GW

I was mostly skeptical because the people involved did not seem to have any experience doing any kind of AI Alignment research, or to themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.

To be clear, I have broadly positive impressions of Toon and think the project had promise; I just don't think the team had the skills to execute on it, which I think few people have.

Comment by habryka on To what extent is Paradigm Academy a front organization for, or a covert rebrand of Leverage Research? · 2019-09-03T02:38:42.175Z · score: 8 (6 votes) · EA · GW

[Epistemic status: Talked to Geoff a month ago about the state of Leverage, trying to remember the details of what was said, but not super confident I am getting everything right]

My sense is that I would not classify Leverage Research as having been disbanded, though it did scale down quite significantly and does appear to have changed shape in major ways. Leverage Research continues to exist as an organization with about 5 staff, and continues to share a very close relationship with Paradigm Academy, though I do believe that those organizations have become more distinct recently (no longer having any shared staff, and no longer having any shared meetings, but still living in the same building and both being led by Geoff).

Comment by habryka on Reading on how to maintain motivation? · 2019-08-27T21:14:22.197Z · score: 5 (3 votes) · EA · GW

A large fraction of "Minding Our Way" is about this: http://mindingourway.com/

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-08-26T16:23:30.985Z · score: 2 (1 votes) · EA · GW

Yes! Due to a bunch of other LTFF things taking up my time I was planning to post my reply to this around the same time as the next round of grant announcements.

Comment by habryka on Changes to EA Funds management teams (June 2019) · 2019-08-08T23:01:27.379Z · score: 8 (3 votes) · EA · GW

In his email to us he only mentioned time constraints (in particular, I think his other commitments at Bellroy and helping with MIRI ramped up around that time, though I also think the fund took more time than he had initially expected).

Comment by habryka on EA Forum Prize: Winners for June 2019 · 2019-07-29T02:06:22.918Z · score: 4 (3 votes) · EA · GW

This updated me a bit, and I think I now at least partially retract that part of my comment.

Comment by habryka on EA Forum Prize: Winners for June 2019 · 2019-07-26T18:14:07.992Z · score: 25 (17 votes) · EA · GW

I think the Information security careers for GCR reduction post is a relatively bad choice for first place, and it made me update reasonably strongly downwards on the signal value of the prize.

It's not that the post is bad, but I didn't perceive it to contribute much to intellectual progress in any major way, and to me it mostly parsed as an organizational announcement. The post obviously got a lot of upvotes, which is good because it was an important announcement, but I think a large part of that is because it was written by Open Phil (which is what makes it an important announcement) [Edit: I believe this less strongly now than I did at the time of writing this comment. See my short thread with Peter_Hurford]. I expect the same post written by someone else would not have received much prominence, and would have been very unlikely to be selected for a prize.

I think it's particularly bad for prizes to go to posts that would have been impossible to write without coming from an established organization. I am much less confident about posts that could have been written by someone else, but that happened to be written by someone in a full-time role at an EA organization.

Comment by habryka on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-26T01:08:36.788Z · score: 7 (4 votes) · EA · GW

Thanks for the response!

I think you misunderstood what I was saying at least a bit, in that I did read the post in reasonably close detail (about a total of half an hour of reading) and was aware of most of your comment.

I will try to find the time to write a longer response that tries to explain my case in more detail, but can't currently make any promises. I expect there are some larger inferential distances here that would take a while to cross for both of us.

Comment by habryka on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-25T19:39:06.148Z · score: 34 (11 votes) · EA · GW

First of all, I think evaluations like this are quite important and a core part of what I think of as EA's value proposition. I applaud the effort and dedication that went into this report, and would like to see more people trying similar things in the future.

Tee Barnett asked me for feedback in a private message. Here is a very slightly edited version of my response (hence it is more off-the-cuff than what I would usually post on the forum):

-------

Hmm, I don't know. I looked at the cost-effectiveness section and feel mostly that the post is overemphasizing formal models. Like, after reading the whole thing, and looking at the spreadsheet for 5 minutes I am still unable to answer the following core questions:

  • What is the basic argument for Donational?
    • Does that argument hold up after looking into it in more detail?
    • How does the quality of that argument compare against other things in the space?
  • What has Donational done so far?
  • What evidence do we have about its operations?
  • If you do a naive, simple Fermi estimate of Donational's effectiveness, what is the bottom line?

I think I would have preferred just one individual writing a post titled "Why I am not excited about Donational", that just tries to explain clearly, like you would in a conversation, why they don't think it's a good idea, or how they have come to change their mind.

Obviously I am strongly in favor of people doing evaluations like this, though I don't think I am a huge fan of the format that this one chose.

------- (end of quote)

On a broader level, I think there might be some philosophical assumptions about the way this post deals with modeling cause prioritization that I disagree with. I have this sense that the primary purpose of mathematical analysis in most contexts is to help someone build a deeper understanding of a problem by helping them make their assumptions explicit and to clarify the consequences of their assumptions, and that after writing down their formal models and truly understanding their consequences, most decision makers are well-advised to throw away the formal models and go with what their updated gut-sense is.

When I look at this post, I have a lot of trouble understanding the actual reasons why someone might think Donational is a good idea, and what arguments would (and maybe have) convinced them otherwise. Instead I see a large amount of rigor being poured into a single cost-effectiveness model, with a result that I am pretty confident could have been replaced by some pretty straightforward Fermi point estimates.

I think there is nothing wrong with also doing sensitivity analyses and more complicated parameter estimation, but in this context all of that mostly obscures the core aspects of the underlying uncertainty. It makes it harder both for the reader to understand what the basic case for Donational is (and why it fails), and (in my model) for the people constructing the model to actually interface with the core questions at hand.

All of this doesn't mean that the tools employed here are never the correct tools to use, but when trying to produce an evaluation that is primarily designed for external consumption, I would prefer much more emphasis on clear explanations of the basic idea behind the organization and on the set of cruxes and observations that would change the evaluators' minds, instead of this much emphasis on both the creation of detailed mathematical models and the explanation of those models.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-07-15T20:51:31.974Z · score: 3 (2 votes) · EA · GW

Update: I was just wrong, Matt is indeed primarily HK

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-07-14T18:31:32.202Z · score: 6 (4 votes) · EA · GW

Stefan Torges from REG recently asked me about our room for funding, and I sent him the following response:

About the room for funding question, here are my rough estimates (this is for money in addition to our expected donations of about $1.6M per year): 
75% confidence threshold: ~$1M
50%: ~$1.5M
25%: ~$3M 
10%: ~$5M
Happy to provide more details on what kind of funding I would expect in the different scenarios. 

The value of these marginal grants doesn't feel like it would be more than 20% lower than that of our current worst grants, since in every round there is a large number of grants that are highly competitive with the lowest-ranked grants we do make.

In other words, I think we have significant room for funding at about the quality level of grants we are currently making.

Comment by habryka on I find this forum increasingly difficult to navigate · 2019-07-05T23:47:46.727Z · score: 9 (3 votes) · EA · GW

On LessWrong we intentionally didn't want to encourage pictures in the comments, since that provides a way to hijack people's attention in a way that seemed too easy. You can use markdown syntax to add pictures, both in the markdown editor and the WYSIWYG editor.
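For example, a minimal markdown image snippet looks like the line below (the URL and description are just placeholders for illustration, not a real hosted image):

![short description of the image](https://example.com/diagram.png)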

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-06-30T18:20:36.185Z · score: 2 (1 votes) · EA · GW

Answer turned out to be closer to 3 months.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-25T20:21:32.583Z · score: 3 (2 votes) · EA · GW

This seems reasonable. I changed it to say "ethical".

Comment by habryka on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T20:44:42.484Z · score: 13 (6 votes) · EA · GW

I was given a student loan by an EA, which I think was likely a major factor in me being able to work on the things I am working on now.

Comment by habryka on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T19:45:22.914Z · score: 12 (3 votes) · EA · GW

We have basically all of the technology to do that on the EA Forum as soon as CEA activates the sequences and recommendations features, which I expect to happen at some point in the next few weeks.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-24T19:42:58.308Z · score: 1 (2 votes) · EA · GW

Hmm, I don't think so. Though I am not fully sure. Might depend on the precise definition.

It feels metaethical because I am responding to a perceived confusion of "what defines moral value?", and not "what things are moral?".

I think "adding up people's experience over the course of their life determines whether an act has good consequences or not" is a confused approach to ethics, which feels more like a metaethical instead of an ethical disagreement.

However, happy to use either term if anyone feels strongly, or happy to learn that this kind of disagreement falls clearly into either "ethics" or "metaethics".

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-21T01:40:11.883Z · score: 6 (3 votes) · EA · GW
I used the word 'relegate', because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong.

That is also not how it is intended to work on LessWrong. There is some (around 30%) loss in average visibility, but there are many important posts that remain personal blogposts on LessWrong. The distinction is more nuanced, and being left as a personal blogpost is definitely not primarily a signifier of quality.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-21T00:50:43.511Z · score: 2 (1 votes) · EA · GW

I was responding to this section, which immediately follows your quote:

While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.

I think emotional states are a quite bad metric to optimize for, and that life satisfaction is a much better measure because it actually measures something closer to people's values being fulfilled. Valuing emotional states feels like a map-territory confusion, in a way that I think Nate Soares tried to get at in his stamp collector post:

Ahh! No! Let's be very clear about this: the robot is predicting which outcomes would follow from which actions, and it's ranking them, and it's taking the actions that lead to the best outcomes. Actions are rated according to what they achieve. Actions do not themselves have intrinsic worth!
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can't explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as "good" if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn't contain reality, but it doesn't need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It's not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it's just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
"How can this be?" the philosophers asked, "Humans are pleasure-maximizers!" They thought for a few minutes, and then said, "Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money."
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said "I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we're trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
And you can steer it, too. If you want to.
Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-20T22:27:32.217Z · score: 11 (6 votes) · EA · GW

For whatever it's worth, my ethical intuitions suggest that optimizing for happiness is not a particularly sensible goal. I personally care relatively little about my self-reported happiness levels, and wouldn't be very excited about someone optimizing for them.

Kahneman has done some research on this, and if I remember correctly changed his mind publicly a few years ago from his previous position in Thinking Fast and Slow to a position that values life-satisfaction a lot more than happiness (and life-satisfaction tends to trade off against happiness in many situations).

This was the random article I remember reading about this. Take it with all the grains of salt of normal popular science reporting. Here are some quotes (note that I disagree with the "reducing suffering" part as an alternative focus):

At about the same time as these studies were being conducted, the Gallup polling company (which has a relationship with Princeton) began surveying various indicators among the global population. Kahneman was appointed as a consultant to the project.
“I suggested including measures of happiness, as I understand it – happiness in real time. To these were added data from Bhutan, a country that measures its citizens’ happiness as an indicator of the government’s success. And gradually, what we know today as Gallup’s World Happiness Report developed. It has also been adopted by the UN and OECD countries, and is published as an annual report on the state of global happiness.
“A third development, which is very important in my view, was a series of lectures I gave at the London School of Economics in which I presented my findings about happiness. The audience included Prof. Richard Layard – a teacher at the school, a British economist and a member of the House of Lords – who was interested in the subject. Eventually, he wrote a book about the factors that influence happiness, which became a hit in Britain,” Kahneman said, referring to “Happiness: Lessons from a New Science.”
“Layard did important work on community issues, on improving mental health services – and his driving motivation was promoting happiness. He instilled the idea of happiness as a factor in the British government’s economic considerations.
“The involvement of economists like Layard and Deaton made this issue more respectable,” Kahneman added with a smile. “Psychologists aren’t listened to so much. But when economists get involved, everything becomes more serious, and research on happiness gradually caught the attention of policy-making organizations.
“At the same time,” said Kahneman, “a movement has also developed in psychology – positive psychology – that focuses on happiness and attributes great importance to internal questions like meaning. I’m less certain of that.
[...]
Kahneman studied happiness for over two decades, gave rousing lectures and, thanks to his status, contributed to putting the issue on the agenda of both countries and organizations, principally the UN and the OECD. Five years ago, though, he abandoned this line of research.
“I gradually became convinced that people don’t want to be happy,” he explained. “They want to be satisfied with their life.”
A bit stunned, I asked him to repeat that statement. “People don’t want to be happy the way I’ve defined the term – what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.
“Meanwhile, awareness of happiness has progressed in the world, including annual happiness indexes. It seems to me that on this basis, what can confidently be advanced is a reduction of suffering. The question of whether society should intervene so that people will be happier is very controversial, but whether society should strive for people to suffer less – that’s widely accepted.

I don't fully agree with all of the above, but a lot of the gist seems correct and important.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-20T22:25:35.681Z · score: 2 (1 votes) · EA · GW

[Made this into a top-level comment]

Comment by habryka on Why the EA Forum? · 2019-06-20T19:34:28.491Z · score: 3 (2 votes) · EA · GW

Hacker News has downvotes, though they are locked behind a karma threshold; overall I see more comments downvoted on HN than on LW or the EA Forum (you can identify them by the text being greyer and harder to read).

Comment by habryka on Why the EA Forum? · 2019-06-20T18:11:35.292Z · score: 2 (1 votes) · EA · GW

The problem is that if your post gets downvoted and is displayed in chronological order, this often means you will get even more downvotes (in part because having things in chronological order means people vote more harshly, since they want to directly discourage bad content, and also because your visibility isn't reduced, which means more people have the opportunity to downvote).

Comment by habryka on Why the EA Forum? · 2019-06-20T08:07:58.684Z · score: 3 (2 votes) · EA · GW

Huh, that's particularly weird, because I don't have any of that problem with LessWrong.com, which runs on the same codebase. So it must be something unique to the EA Forum situation.

Comment by habryka on You Should Write a Forum Bio · 2019-06-18T16:50:27.697Z · score: 2 (1 votes) · EA · GW

Hmm, good point. We should generally clean up that user edit page.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-06-11T20:10:02.926Z · score: 8 (3 votes) · EA · GW

(Note: I am currently more time-constrained than I had hoped to be when writing these responses, so the above was written a good bit faster and with less reflection than my other pieces of feedback. This means errors and miscommunication are more likely than usual. I apologize for that.)

I ended up writing some feedback to Jeffrey Ladish, which covered a lot of my thoughts on ALLFED. 

My response to Jeffrey

Building off of that comment, here are some additional thoughts: 

  • As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios and so find the general cause area of most of ALLFED's work to be of comparatively lower importance than the other areas I tend to recommend grants in
  • Most of ALLFED's work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path. 
  • I have some hesitations about the structure of ALLFED as an organization. I've had relatively bad experiences interacting with some parts of your team and have heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and its primary location is in Alaska, where I expect it will be hard for you to attract talent and to engage with other researchers on this topic (some of these models are based on conversations I've had with Finan, who used to work at ALLFED but left because of it being located in Alaska).
  • I generally think ALLFED's work is of decent quality, helpful to many, and made with well-aligned intentions; I just don't find its core value proposition compelling enough to be excited about grants to it.
Comment by habryka on There's Lots More To Do · 2019-06-08T22:33:25.504Z · score: 19 (8 votes) · EA · GW

While I agree with a lot of the critiques in this comment, I do think it isn't really engaging with the core point of Ben's post, which I think is actually an interesting one.

The question that Ben is trying to answer is "how large is the funding gap for interventions that can save lives for around $5,000?". And for that, the relevant question is not "how much money would it take to eliminate all communicable diseases?", but instead "how much money do we have to spend before the price of saving a life via preventing communicable diseases becomes significantly higher than $5k?". The answer to the second question is upper-bounded by the answer to the first, which is why Ben tries to answer the first one, but that only serves to estimate the $5k/life funding gap.

And I think he does have a reasonable point there: the funding gap for interventions at that level of cost-effectiveness seems to me to be much lower than the available funding in the space, making the impact of a counterfactual donation likely a lot lower than that (though the game theory here is complicated and counterfactuals are a bit hard to evaluate, making this a non-obvious point).

My guess, though I have very high uncertainty bounds around all of this, is that the true number is closer to something in the range of $20k-$30k in terms of donations that would have a counterfactual impact of saving a life. I don't think this really invalidates a lot of the core EA principles in the way Ben seems to think it does, but it does make me unhappy with some of the marketing around EA health interventions.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-06-05T02:05:44.326Z · score: 3 (2 votes) · EA · GW

I have a bunch of complicated thoughts here. Overall I have been quite happy with the reception to this, and think the outcomes of the conversations on the post have been quite good.

I am a bit more time-strapped than usual, so I will probably wait on writing a longer retrospective until I set aside a bunch of time to answer questions on the next set of writeups.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-29T02:35:30.451Z · score: 14 (4 votes) · EA · GW

Feedback that I sent to Jeffrey Ladish about his application:

Excerpts from the application

I would like to spend five months conducting a feasibility analysis for a new project that has the potential to be built into an organization. The goal of the project would be to increase civilizational resilience to collapse in the event of a major catastrophe -- that is, to preserve essential knowledge, skills, and social technology necessary for functional human civilization.

The concrete results of this work would include an argument for why or why not a project aimed at rebuilding after collapse would be feasible, and at what scale.

Several scholars and EAs have investigated this question before, so I plan to build off existing work to avoid reinventing the wheel. In particular, [Beckstead 2014](https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf) investigates whether bunkers or shelters might help civilization recover from a major catastrophe. He enumerates many scenarios in which shelters would *not* be helpful, but concludes with two scenarios worthy of deeper analysis: “global food crisis” and “social collapse”. I plan to focus on “social collapse”, noting that a global food crisis may also lead to social collapse.

I expect my feasibility investigation to cover the following questions:

- Impact: what would it take for such a project to actually impact the far future?

- Tractability: what (if any) scope and scale of project might be both feasible *and* useful?

- Neglectedness: what similar projects already exist?

Example questions:

Impact:

- How fragile is the global supply chain? For example, how might humans lose the ability to manufacture semiconductors?

- What old manufacturing technologies and skills (agricultural insights? steam engine-powered factories?) would be most essential to rebuilding key capacities?

- What social structures would facilitate both survival through major catastrophes and coordination through rebuilding efforts?

Neglectedness:

- What efforts exist to preserve knowledge into the future (seed banks, book archives)? Human lives (private & public bunkers, civil defense efforts)?

Tractability:

- What funding might be available for projects aimed at civilizational resilience?

- Are there skilled people who would commit to working on such a project? Would people be willing to relocate to a remote location if needed?

- What are the benefits of starting a non profit vs. other project structures?

(3)

I believe the best feedback for measuring the impact of this research will be to solicit personal feedback on the quality of the feasibility argument I produce. I would like to present my findings to Anders Sandberg, Carl Shulman, Nick Beckstead, & other experts.

If I can present a case for a civilizational resilience project which those experts find compelling, I would hope to launch a project with that goal. Conversely, if I can present a strong case that such a project would not be effective, my work could deter others from pursuing an ineffective project.

My thoughts

I feel broadly confused about the value of working on improving the recovery from civilizational collapse, but overall feel more hesitant than enthusiastic. I have so far not heard of a civilization collapse scenario that seems likely to me and in which we have concrete precautions we can take to increase the likelihood of recovery.

Since I initially read your application, I have had multiple in-person conversations with both you and Finan Adamson, who used to work at ALLFED, and you both have much better models of the considerations around civilizational collapse than I do. This has made me understand your models a lot more, but has so far not updated me much towards civilizational collapse being both likely and tractable. However, I have updated upwards on the value of looking into this cause area in more depth and writing up the considerations around it, since I think there is enough uncertainty and potential value in this domain that getting more clarity would be worth quite a bit.

I think at the moment, I would not be that enthusiastic about someone building a whole organization around efforts to improve recovery chances from civilizational collapse, but do think that there is potentially a lot of value in individual researchers making a better case for that kind of work and mapping out the problem space more.

I think my biggest cruxes in this space are something like the following:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
  • Are there major risks that have a chance of wiping out more than 90% of the population, but not all of it? My models of biorisk suggest it's quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food-reduction impact.
  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I think answering any mixture of these affirmatively could convince me that it is worth investing significantly more resources into this, and that it might make sense to divert resources from catastrophic (and existential) risk prevention to working on improved recovery from catastrophic events, which I think is the tradeoff I am facing with my recommendations.

I do think that a serious investigation into the question of recovery from catastrophic events is an important part of something like "covering all the bases" in efforts to improve the long-term future. However, the field is currently still resource-constrained enough that I don't think that is sufficient for me to recommend funding for it.

Overall, I think I am more positive on making a grant like this than when I first read the application, though not necessarily that much more. I have however updated positively on you in particular, and think that if we want someone to write up and perform research in this space, you are a decent candidate for it. This was partially a result of talking to you, reading some of your non-published writing and having some people I trust vouch for you, though I still haven't really investigated this whole area enough to be confident that the kind of research you are planning to do is really what is needed.

Comment by habryka on How to use the Forum · 2019-05-18T23:43:19.410Z · score: 9 (5 votes) · EA · GW

Yeah, it's definitely unchecked by default. We are currently working on an editor rework that should get rid of this annoyance. We currently need to allow users to switch to markdown to make it possible for mobile users to properly edit stuff, but that shouldn't be a problem anymore after we are done with the rework.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T06:13:04.703Z · score: 11 (4 votes) · EA · GW

On the question of whether we should have an iterative process: I do view this publishing of the LTF-responses as part of an iterative process. Given that we are planning to review applications every few months, you responding to what I wrote allows us to update on your responses for next round, which will be relatively soon.

Comment by habryka on What caused EA movement growth to slow down? · 2019-05-17T05:59:45.129Z · score: 4 (2 votes) · EA · GW

As someone who is quite familiar with what drives traffic to EA- and rationality-related websites: 2015 marks the end of Harry Potter and the Methods of Rationality, which (whatever you might think about it) was probably the single biggest recruitment device in at least the rationality community's history (though I think it was also a major driver for the EA community). It is also the time Eliezer broadly stopped posting online, and he obviously had a very outsized effect on recruitment.

I also know that during 2015 (which is when I started working at CEA), CEA was investing very heavily in trying to grow the community, which included efforts to get people like Elon Musk to speak at EAG 2015, which I do think was a major draw to the community. A lot of the staff responsible for that focus on growth left over the following years, and CEA stopped thinking as much in terms of growth (I think whether that was good or bad is complicated, though I mostly think that shift was good).

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T05:51:39.269Z · score: 8 (2 votes) · EA · GW
The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.

I want to make sure that there isn't any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund.

I don't currently know whether the question of the target audience would have been super valuable for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would have actually been more important to address than my own.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T05:47:29.092Z · score: 8 (2 votes) · EA · GW

(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to)

As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.
* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.
* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.
See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.

Of those, I had read "Long-term trajectories of human civilization" and "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very-positive recommendation).

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same. I had a sense that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum).

I also generally found the arguments in them not particularly compelling (in particular, I found the arguments in "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" relatively weak, and thought it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).

I highlighted the "A model for the probability of nuclear war" not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don't think that applies to any of the three papers you linked above.

I don't currently have a great operationalization of what I mean by "fundamental confusions around global catastrophic risks", so I am sorry for not being able to be more clear on this. One kind of bad operationalization might be "research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space". It seems plausible to me that you are currently aiming to write some papers with a goal like this in mind, but I don't think most of GCRI's papers achieve that. "A model for the probability of nuclear war" did feel like a paper that might actually achieve that, though from what you said it might not actually have had that goal.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T04:59:00.653Z · score: 8 (2 votes) · EA · GW

Thanks for posting the response! Some short clarifications:

We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI's work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.

My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It just so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn't update too much on my specific feedback. My perspective might still be useful for understanding the experience of people closer to my level of expertise, of which there are many, and I do think there is important truth to it (and it is obviously useful as a way to help me build better models of the policy space, which I do think is valuable).

It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.

I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is some shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-16T03:23:30.677Z · score: 18 (6 votes) · EA · GW

This is the (very slightly edited) feedback that I sent to GCRI based on their application (caveat that GCR-policy is not my expertise and I only had relatively weak opinions in the discussion around this grant, so this should definitely not be seen as representative of the broader opinion of the fund):

I was actually quite positive on this grant, so the primary commentary I can provide is a summary of what would have been sufficient to move me to be very excited about the grant.
Overall, I have to say that I was quite positively surprised after reading a bunch of GCRI's papers, which I had not done before (in particular the paper that lists and analyzes all the nuclear weapon close-calls).
I think the biggest thing that made me hesitant about strongly recommending GCRI, is that I don't have a great model of who GCRI is trying to reach. I am broadly not super excited about reaching out to policy makers at this stage of the GCR community's strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there. I do have some models of capacity-building that suggest some concrete actions, but those have more to do with building functional research institutions that are focused on recruiting top-level talent to think more about problems related to the long term future.
I noticed that while I ended up being quite positively surprised by the GCRI papers, I hadn't read any of them up to that point, and neither had any of the other fund members. This made me think that we are likely not the target audience of those papers. And while I did find them useful, I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.
I think the key thing that I would need in order to be very excited about GCRI is to understand and be excited by the target group that GCRI is trying to communicate to. My current model suggests that GCRI is primarily trying to reach existing policy makers, which seems unlikely to contribute much to furthering conceptual progress around global catastrophic risks.

Seth wrote a great response that I think he is open to posting to the forum.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-16T03:20:01.451Z · score: 7 (3 votes) · EA · GW

Sorry for the long delays on this, I am still planning to get back to you, there were just some other things that ended up taking up all of my LTF-Fund allocated time which are now resolved, so I should be able to write up my thoughts soon.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-01T01:19:53.964Z · score: 8 (4 votes) · EA · GW

I think if we had a vetting process that people could trust to reliably identify good people, even ones who had recently made a bunch of critical statements about high-status institutions or something in that reference class (or had their most recent project fail dramatically, etc.), then that might be fine.

But I think having such a vetting process, having that vetting process have a very low false-negative rate, and having it be transparent that the vetting process is that good are all difficult enough to make this too costly.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-27T21:43:25.773Z · score: 16 (9 votes) · EA · GW

I am hesitant about this. I think to serve as a functional social safety net that allows people to take high-risk actions (including in the social domain, in the form of criticisms of high-status people or institutions), I think a high barrier to entry for the EA-Hotel might drastically reduce the psychological safety it could provide to many people.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-26T04:27:41.126Z · score: 4 (2 votes) · EA · GW

Yes, that corresponds to point (1), not point (2)

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-26T02:10:24.325Z · score: 43 (15 votes) · EA · GW

This is the feedback that I sent to Greg about his EA-Hotel application, published with his permission. (He also provided some good responses that I hope he can reply with)

Thoughts on the EA Hotel:

The EA Hotel seems broadly pretty promising, though I do also have a good amount of concerns. First, the reasons why I am excited about the EA Hotel:

Providing a safety net: I think psychological safety matters a lot for people being able to take risks and have creative thoughts. Given that I think most of the value of the EA community comes from potentially pursuing high-risk projects and proposing new unconventional ideas, improving things on this dimension strikes me as pretty key for the success of the overall community.

I expect the EA Hotel has a chance to serve as a cheap distributed safety net for a lot of people who are worried that if they start working on EA stuff, they will run out of money soon and will then potentially end up having to take drastic actions as they run out of money. The EA Hotel can both significantly extend those people's runway, but also soften the costs of running out of money significantly for anyone who is working on EA-related ideas.

Acting on historical interest: There has been a significant historical interest in creating an EA Hub in a location with much lower living expenses, and from a process perspective I think we should very strongly reward people who feel comfortable acting on that level of community interest. Even if the EA Hotel turns out to be a bad idea, it strikes me as important that community members can take risks like this and have at least their expenses reimbursed afterwards (even if it turns out that the idea doesn't work out when implemented), as long as they went about pursuing the project in broadly reasonable terms.

Building high-dedication cultures: I generally think that developing strong cultures of people with high levels of dedication is a good way of multiplying the efforts of the people involved, and is generally something that should be encouraged. I think the EA Hotel has a chance to develop a strong high-dedication culture because joining it requires a level of sacrifice (moving to Blackpool) that means only people above a pretty high dedication threshold will show up. I do also think this can backfire (see the later section on concerns).

I do however also have a set of concerns about the hotel. I think over the past few weeks as more things have been written about the hotel, I have started feeling more positive towards it, and would likely recommend a grant to the EA Hotel in the next LTF-Fund grant rounds, though I am not certain.

I think the EA Hotel is more likely to be net negative than most other projects I have recommended grants to, though I don't think it has a significant chance of being hugely negative.

Here are the concrete models around my concerns:

1. I think the people behind the EA Hotel were initially overeager to publicize it via broad media outreach in newspapers and other media outlets with broad reach. I think interaction with the media is well-modeled by a unilateralist's-curse-like scenario, in which many participants have the individual choice to create a media narrative, and whoever moves first frames a lot of the media conversation. In general, I think it is essential for organizations in the long-term-future space to recognize this kind of dynamic and be hesitant to take unilateral action in cases like this.

I think the EA Hotel does not benefit much from media attention, and the community at large likely suffers from the EA Hotel being widely discussed in the media (not because it's weird, which is a dimension on which I think EA is broadly far too risk-averse, but because it communicates the presence of free resources that are there for the taking by anyone vaguely associated with the community, which tends to attract unaligned people and create adversarial scenarios).

Note: Greg responded to this and I now think this point is mostly false, though I still think something in this space went wrong.

2. I think there is a significant chance of the culture of the EA Hotel becoming actively harmful for the people living there, and also sparking unnecessary conflict in the broader community. I think there are two reasons why I am more worried about this than for most other locations:

  • I expect the EA Hotel to attract a kind of person who is pretty young, highly dedicated and looking for some guidance on what to do with their life. I think this makes them a particularly easy and promising target for people who tend to abuse that kind of trust relationship and who are looking for social influence.
  • I expect the hotel to attract people who have a somewhat contrarian position in the community, for a variety of different reasons. I think some of it is the initial founding effect that I already observed, but another likely cause is that the hotel will likely be filled with highly dedicated people who are not being offered jobs or grants that would allow them to work from other locations, which I think can cause many of these people to feel disenfranchised from the community and feel a (potentially quite valid) sense of frustration.
    • I am not at all opposed to helping people who are dissatisfied with things in the community coordinate on causing change, and usually think that's good. But I think locality matters a lot for dispute resolution, and I think it's plausible that the EA Hotel could form a geographically and memetically isolated group that is predisposed to conflict with the rest of the EA community, in a way that could result in a lot of negative-sum conflict.
  • Generally, high-dedication cultures are more likely to cause people to overcommit or take drastic actions that they later regret. I think this is usually worth the cost, but it compounds with some of the other factors I list here.

3. I don't have a sense that Greg really wants to take charge of the logistics of running the hotel, and I don't have a great candidate for someone else to run it, though it seems pretty plausible that we could find someone if we invest some time into looking.

Summary:

Overall, I think all of my concerns can be overcome, at which point I would be quite excited about supporting the hotel. It seems easy to change the way the hotel relates to the media, I think there are a variety of things one could do to avoid cultural problems, and I think we could find someone who can take charge of the logistics of running the hotel.

At the moment, I think I would be in favor of giving a grant that covers the runway of the hotel for the next year. (There is the further question of whether they should get enough money to buy the hotel next door, which is something I am much less certain about)

Comment by habryka on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-24T04:22:33.532Z · score: 5 (4 votes) · EA · GW

It's pretty rarely requested, so it's not super high on my priority list. My guess is it will happen whenever we end up doing our bigger editor upgrade, which we have planned for later in the year.

Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T17:58:58.984Z · score: 6 (3 votes) · EA · GW

Yep, I saw that. I didn't actually intend to criticize your use of the quiz, sorry if it came across that way. I just gave it a try and figured I would contribute some data.

(This doesn't mean I agree with how 80k communicates information. I haven't kept up at all with 80k's writing, so I don't have any strong opinions either way here)

Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T04:05:10.288Z · score: 14 (6 votes) · EA · GW

I got them on basically every setting that remotely applied to me.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-18T03:41:55.038Z · score: 6 (3 votes) · EA · GW

I think sadly pretty low, based on my current model of everyone's time constraints, and also of CEA's logistical constraints.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-17T22:39:34.826Z · score: 21 (7 votes) · EA · GW

(This is just my personal perspective and does not aim to reflect the opinions of anyone else on the LTF-Fund)

I am planning to send more feedback on this to the EA Hotel people.

I have actually broadly come around to the EA Hotel being a good idea, but at the time we made the grant decision there was a lot less evidence and there were fewer writeups around, and it was those writeups by a variety of people that convinced me it is likely a good idea, with some caveats.