Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-25T20:21:32.583Z · score: 3 (2 votes) · EA · GW

This seems reasonable. I changed it to say "ethical".

Comment by habryka on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T20:44:42.484Z · score: 12 (5 votes) · EA · GW

I was given a student loan by an EA, which I think was likely a major factor in me being able to work on the things I am working on now.

Comment by habryka on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T19:45:22.914Z · score: 8 (2 votes) · EA · GW

We have basically all of the technology to do that on the EA Forum as soon as CEA activates the sequences and recommendations features, which I expect to happen at some point in the next few weeks.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-24T19:42:58.308Z · score: 1 (2 votes) · EA · GW

Hmm, I don't think so. Though I am not fully sure. Might depend on the precise definition.

It feels metaethical because I am responding to a perceived confusion about "what defines moral value?", and not "what things are moral?".

I think "adding up people's experience over the course of their life determines whether an act has good consequences or not" is a confused approach to ethics, which feels more like a metaethical instead of an ethical disagreement.

However, happy to use either term if anyone feels strongly, or happy to learn that this kind of disagreement falls clearly into either "ethics" or "metaethics".

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-21T01:40:11.883Z · score: 4 (2 votes) · EA · GW
I used the word 'relegate', because that appears to be how promotions to the Frontpage on LessWrong work, and because I was under the impression the EA Forum had similar administration norms to LessWrong.

Also not how it is intended to work on LessWrong. There is some loss in average visibility (around 30%), but many important posts on LessWrong are personal blogposts. The distinction is more nuanced, and being left as a personal blogpost is definitely not primarily a signifier of quality.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-21T00:50:43.511Z · score: 2 (1 votes) · EA · GW

I was responding to this section, which immediately follows your quote:

While we think measures of emotional states are closer to an ideal measure of happiness, far fewer data of this type is available.

I think emotional states are a quite bad metric to optimize for, and that life satisfaction is a much better measure because it actually measures something closer to people's values being fulfilled. Valuing emotional states feels like a map-territory confusion in a way that I think Nate Soares tried to get at in his stamp collector post:

Ahh! No! Let's be very clear about this: the robot is predicting which outcomes would follow from which actions, and it's ranking them, and it's taking the actions that lead to the best outcomes. Actions are rated according to what they achieve. Actions do not themselves have intrinsic worth!
Do you see where these naïve philosophers went confused? They have postulated an agent which treats actions like ends, and tries to steer towards whatever action it most prefers — as if actions were ends unto themselves.
You can't explain why the agent takes an action by saying that it ranks actions according to whether or not taking them is good. That begs the question of which actions are good!
This agent rates actions as "good" if they lead to outcomes where the agent has lots of stamps in its inventory. Actions are rated according to what they achieve; they do not themselves have intrinsic worth.
The robot program doesn't contain reality, but it doesn't need to. It still gets to affect reality. If its model of the world is correlated with the world, and it takes actions that it predicts leads to more actual stamps, then it will tend to accumulate stamps.
It's not trying to steer the future towards places where it happens to have selected the most micro-stampy actions; it's just steering the future towards worlds where it predicts it will actually have more stamps.
Now, let me tell you my second story:
Once upon a time, a group of naïve philosophers encountered a group of human beings. The humans seemed to keep selecting the actions that gave them pleasure. Sometimes they ate good food, sometimes they had sex, sometimes they made money to spend on pleasurable things later, but always (for the first few weeks) they took actions that led to pleasure.
But then one day, one of the humans gave lots of money to a charity.
"How can this be?" the philosophers asked, "Humans are pleasure-maximizers!" They thought for a few minutes, and then said, "Ah, it must be that their pleasure from giving the money to charity outweighed the pleasure they would have gotten from spending the money."
Then a mother jumped in front of a car to save her child.
The naïve philosophers were stunned, until suddenly one of their number said "I get it! The immediate micro-pleasure of choosing that action must have outweighed —
People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.
People will tell you that, because we're trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.
But I have a message for you: You can, in fact, care about the outer world.
And you can steer it, too. If you want to.
Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-20T22:27:32.217Z · score: 11 (6 votes) · EA · GW

For whatever it's worth, my ethical intuitions suggest that optimizing for happiness is not a particularly sensible goal. I personally care relatively little about my self-reported happiness levels, and wouldn't be very excited about someone optimizing for them.

Kahneman has done some research on this, and if I remember correctly he publicly changed his mind a few years ago from his previous position in Thinking, Fast and Slow to a position that values life satisfaction a lot more than happiness (and life satisfaction tends to trade off against happiness in many situations).

This was the random article I remember reading about this. Take it with the usual grains of salt for popular science reporting. Here are some quotes (note that I disagree with the "reducing suffering" part as an alternative focus):

At about the same time as these studies were being conducted, the Gallup polling company (which has a relationship with Princeton) began surveying various indicators among the global population. Kahneman was appointed as a consultant to the project.
“I suggested including measures of happiness, as I understand it – happiness in real time. To these were added data from Bhutan, a country that measures its citizens’ happiness as an indicator of the government’s success. And gradually, what we know today as Gallup’s World Happiness Report developed. It has also been adopted by the UN and OECD countries, and is published as an annual report on the state of global happiness.
“A third development, which is very important in my view, was a series of lectures I gave at the London School of Economics in which I presented my findings about happiness. The audience included Prof. Richard Layard – a teacher at the school, a British economist and a member of the House of Lords – who was interested in the subject. Eventually, he wrote a book about the factors that influence happiness, which became a hit in Britain,” Kahneman said, referring to “Happiness: Lessons from a New Science.”
“Layard did important work on community issues, on improving mental health services – and his driving motivation was promoting happiness. He instilled the idea of happiness as a factor in the British government’s economic considerations.
“The involvement of economists like Layard and Deaton made this issue more respectable,” Kahneman added with a smile. “Psychologists aren’t listened to so much. But when economists get involved, everything becomes more serious, and research on happiness gradually caught the attention of policy-making organizations.
“At the same time,” said Kahneman, “a movement has also developed in psychology – positive psychology – that focuses on happiness and attributes great importance to internal questions like meaning. I’m less certain of that.
[...]
Kahneman studied happiness for over two decades, gave rousing lectures and, thanks to his status, contributed to putting the issue on the agenda of both countries and organizations, principally the UN and the OECD. Five years ago, though, he abandoned this line of research.
“I gradually became convinced that people don’t want to be happy,” he explained. “They want to be satisfied with their life.”
A bit stunned, I asked him to repeat that statement. “People don’t want to be happy the way I’ve defined the term – what I experience here and now. In my view, it’s much more important for them to be satisfied, to experience life satisfaction, from the perspective of ‘What I remember,’ of the story they tell about their lives. I furthered the development of tools for understanding and advancing an asset that I think is important but most people aren’t interested in.
“Meanwhile, awareness of happiness has progressed in the world, including annual happiness indexes. It seems to me that on this basis, what can confidently be advanced is a reduction of suffering. The question of whether society should intervene so that people will be happier is very controversial, but whether society should strive for people to suffer less – that’s widely accepted.

I don't fully agree with all of the above, but a lot of the gist seems correct and important.

Comment by habryka on Announcing the launch of the Happier Lives Institute · 2019-06-20T22:25:35.681Z · score: 2 (1 votes) · EA · GW

[Made this into a top-level comment]

Comment by habryka on Why the EA Forum? · 2019-06-20T19:34:28.491Z · score: 3 (2 votes) · EA · GW

Hacker News has downvotes, though they are locked behind a karma threshold. Overall I see more comments downvoted on HN than on LW or the EA Forum (you can identify them by their text being greyer and harder to read).

Comment by habryka on Why the EA Forum? · 2019-06-20T18:11:35.292Z · score: 2 (1 votes) · EA · GW

The problem is that if your post gets downvoted while posts are displayed in chronological order, you will often get even more downvotes (in part because chronological ordering makes people vote more harshly, since they want to directly discourage bad content, and also because your visibility doesn't decrease, which means more people have the opportunity to downvote).

Comment by habryka on Why the EA Forum? · 2019-06-20T08:07:58.684Z · score: 3 (2 votes) · EA · GW

Huh, that's particularly weird because I don't have that problem with LessWrong.com, which runs on the same codebase. So it must be something unique to the EA Forum situation.

Comment by habryka on You Should Write a Forum Bio · 2019-06-18T16:50:27.697Z · score: 2 (1 votes) · EA · GW

Hmm, good point. We should generally clean up that user edit page.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-06-11T20:10:02.926Z · score: 6 (2 votes) · EA · GW

(Note: I am currently more time-constrained than I had hoped to be when writing these responses, so the above was written a good bit faster and with less reflection than my other pieces of feedback. This means errors and miscommunication are more likely than usual. I apologize for that.)

I ended up writing some feedback to Jeffrey Ladish, which covered a lot of my thoughts on ALLFED. 

My response to Jeffrey

Building off of that comment, here are some additional thoughts: 

  • As I mentioned in the response linked above, I currently feel relatively hesitant about civilizational collapse scenarios, and so find the general cause area of most of ALLFED's work to be of comparatively lower importance than the other areas I tend to recommend grants in.
  • Most of ALLFED's work does not seem to help me resolve the confusions I listed in the response linked above, or provide much additional evidence for any of my cruxes, but instead seems to assume that the intersection of civilizational collapse and food shortages is the key path to optimize for. At this point, I would be much more excited about work that tries to analyze civilizational collapse much more broadly, instead of assuming such a specific path. 
  • I have some hesitations about the structure of ALLFED as an organization. I've had relatively bad experiences interacting with some parts of your team and have heard similar concerns from others. The team also appears to be partially remote, which I think is a major cost for research teams, and has its primary location in Alaska, where I expect it will be hard for you to attract talent and to engage with other researchers on this topic (some of these models are based on conversations I've had with Finan, who used to work at ALLFED but left because of it being located in Alaska).
  • I generally think ALLFED's work is of decent quality, helpful to many, and made with well-aligned intentions; I just don't find its core value proposition compelling enough to be excited about grants to it.

Long Term Future Fund and EA Meta Fund applications open until June 28th

2019-06-10T20:37:51.048Z · score: 55 (20 votes)
Comment by habryka on There's Lots More To Do · 2019-06-08T22:33:25.504Z · score: 17 (7 votes) · EA · GW

While I agree with a lot of the critiques in this comment, I don't think it is really engaging with the core point of Ben's post, which I think is actually an interesting one.

The question that Ben is trying to answer is "how large is the funding gap for interventions that can save lives for around $5000?". For that, the relevant question is not "how much money would it take to eliminate all communicable diseases?", but rather "how much money do we have to spend until the price of saving a life via preventing communicable diseases becomes significantly higher than $5k?". The answer to the second question is upper-bounded by the answer to the first, which is why Ben tries to answer the first one, but that only serves as an estimate of the $5k/life funding gap.

And I think he does have a reasonable point there: the funding gap for interventions at that level of cost-effectiveness seems to me to be much smaller than the funding available in the space, making the impact of a counterfactual donation likely a lot lower than the headline figure (though the game theory here is complicated and counterfactuals are a bit hard to evaluate, making this a non-obvious point).

My guess, though I have very high uncertainty bounds around all of this, is that the true number is closer to something in the range of $20k-$30k in donations per counterfactually saved life. I don't think this really invalidates a lot of the core EA principles, as Ben seems to think it does, but it does make me unhappy with some of the marketing around EA health interventions.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-06-05T02:05:44.326Z · score: 2 (1 votes) · EA · GW

I have a bunch of complicated thoughts here. Overall I have been quite happy with the reception to this, and think the outcomes of the conversations on the post have been quite good.

I am a bit more time-strapped than usual, so I will probably wait on writing a longer retrospective until I set aside a bunch of time to answer questions on the next set of writeups.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-29T02:35:30.451Z · score: 14 (4 votes) · EA · GW

Feedback that I sent to Jeffrey Ladish about his application:

Excerpts from the application

I would like to spend five months conducting a feasibility analysis for a new project that has the potential to be built into an organization. The goal of the project would be to increase civilizational resilience to collapse in the event of a major catastrophe -- that is, to preserve essential knowledge, skills, and social technology necessary for functional human civilization.

The concrete results of this work would include an argument for why or why not a project aimed at rebuilding after collapse would be feasible, and at what scale.

Several scholars and EAs have investigated this question before, so I plan to build off existing work to avoid reinventing the wheel. In particular, [Beckstead 2014](https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf) investigates whether bunkers or shelters might help civilization recover from a major catastrophe. He enumerates many scenarios in which shelters would *not* be helpful, but concludes with two scenarios worthy of deeper analysis: “global food crisis” and “social collapse”. I plan to focus on “social collapse”, noting that a global food crisis may also lead to social collapse.

I expect my feasibility investigation to cover the following questions:

- Impact: what would it take for such a project to actually impact the far future?

- Tractability: what (if any) scope and scale of project might be both feasible *and* useful?

- Neglectedness: what similar projects already exist?

Example questions:

Impact:

- How fragile is the global supply chain? For example, how might humans lose the ability to manufacture semiconductors?

- What old manufacturing technologies and skills (agricultural insights? steam engine-powered factories?) would be most essential to rebuilding key capacities?

- What social structures would facilitate both survival through major catastrophes and coordination through rebuilding efforts?

Neglectedness:

- What efforts exist to preserve knowledge into the future (seed banks, book archives)? Human lives (private & public bunkers, civil defense efforts)?

Tractability:

- What funding might be available for projects aimed at civilizational resilience?

- Are there skilled people who would commit to working on such a project? Would people be willing to relocate to a remote location if needed?

- What are the benefits of starting a non profit vs. other project structures?

(3)

I believe the best feedback for measuring the impact of this research will be to solicit personal feedback on the quality of the feasibility argument I produce. I would like to present my findings to Anders Sandberg, Carl Shulman, Nick Beckstead, & other experts.

If I can present a case for a civilizational resilience project which those experts find compelling, I would hope to launch a project with that goal. Conversely, if I can present a strong case that such a project would not be effective, my work could deter others from pursuing an ineffective project.

My thoughts

I feel broadly confused about the value of working on improving the recovery from civilizational collapse, but overall feel more hesitant than enthusiastic. I have so far not heard of a civilization collapse scenario that seems likely to me and in which we have concrete precautions we can take to increase the likelihood of recovery.

Since I initially read your application, I have had multiple in-person conversations with both you and Finan Adamson, who used to work at ALLFED, and you both have much better models of the considerations around civilizational collapse than I do. This has made me understand your models a lot more, but has so far not updated me much towards civilizational collapse being both likely and tractable. However, I have updated upwards on the value of looking into this cause area in more depth and writing up the considerations around it, since I think there is enough uncertainty and potential value in this domain that getting more clarity would be worth quite a bit.

I think at the moment, I would not be that enthusiastic about someone building a whole organization around efforts to improve recovery chances from civilizational collapse, but do think that there is potentially a lot of value in individual researchers making a better case for that kind of work and mapping out the problem space more.

I think my biggest cruxes in this space are something like the following:

  • Is there a high chance that the human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?
  • Can we build any reasonable models about what our bottlenecks will be for recovery after a significant global catastrophe? (This is likely dependent on an analysis of what specific catastrophes are most likely and what state they leave humanity in)
  • Are there major risks that have a chance of wiping out more than 90% of the population, but not all of it? My models of biorisk suggest it's quite hard to get to 90% mortality, and I think most nuclear winter scenarios also have less than a 90% food reduction impact.
  • Are there non-population-level dependent ways in which modern civilization is fragile that might cause widespread collapse and the end of scientific progress? If so, are there any ways to prepare for them?
  • Are there strong reasons to expect the existential risk profile of a recovered civilization to be significantly better than for our current civilization? (E.g. maybe a bad experience with nuclear weapons would make the world much more aware of the dangers of technology)

I think answering any mixture of these affirmatively could convince me that it is worth investing significantly more resources into this, and that it might make sense to divert resources from catastrophic (and existential) risk prevention to working on improved recovery from catastrophic events, which I think is the tradeoff I am facing with my recommendations.

I do think that a serious investigation into the question of recovery from catastrophic events is an important part of something like "covering all the bases" in efforts to improving the long-term-future. However, the field is currently still resource constrained enough that I don't think that is sufficient for me to recommend funding to it.

Overall, I think I am more positive on making a grant like this than when I first read the application, though not necessarily that much more. I have, however, updated positively on you in particular, and think that if we want someone to write up and perform research in this space, you are a decent candidate for it. This was partially a result of talking to you, reading some of your unpublished writing, and having some people I trust vouch for you, though I still haven't really investigated this whole area enough to be confident that the kind of research you are planning to do is really what is needed.

Comment by habryka on How to use the Forum · 2019-05-18T23:43:19.410Z · score: 7 (4 votes) · EA · GW

Yeah, it's definitely unchecked by default. We are currently working on an editor rework that should get rid of this annoyance. We currently need to allow users to switch to markdown to make it possible for mobile users to properly edit stuff, but that shouldn't be a problem anymore after we are done with the rework.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T06:13:04.703Z · score: 9 (3 votes) · EA · GW

On the question of whether we should have an iterative process: I do view this publishing of the LTF-responses as part of an iterative process. Given that we are planning to review applications every few months, you responding to what I wrote allows us to update on your responses for next round, which will be relatively soon.

Comment by habryka on What caused EA movement growth to slow down? · 2019-05-17T05:59:45.129Z · score: 4 (2 votes) · EA · GW

As someone who is quite familiar with what drives traffic to EA- and rationality-related websites: 2015 marks the end of Harry Potter and the Methods of Rationality, which (whatever you might think about it) was probably the single biggest recruitment device in at least the rationality community's history (though I think it was also a major driver of growth for the EA community). It is also the time Eliezer broadly stopped posting online, and he obviously had a very outsized effect on recruitment.

I also know that during 2015 (which is when I started working at CEA), CEA was investing very heavily in trying to grow the community, which included efforts to get people like Elon Musk to talk at EAG 2015, which I do think was also a major draw to the community. A lot of the staff responsible for that focus on growth left over the following years, and CEA stopped thinking as much in terms of growth (whether that was good or bad is complicated, though I mostly think that shift was good).

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T05:51:39.269Z · score: 8 (2 votes) · EA · GW
The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.

I want to make sure that there isn't any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund.

I don't currently know whether addressing the question of the target audience would have been super valuable for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would actually have been more important to address than my own.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T05:47:29.092Z · score: 8 (2 votes) · EA · GW

(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to)

As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.
* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.
* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.
See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.

Of those, I had read "Long-term trajectories of human civilization" and "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very-positive recommendation).

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same. I had a sense that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum).

I also generally found the arguments in them not particularly compelling (in particular, I found the arguments in "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" relatively weak, and thought that it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).

I highlighted the "A model for the probability of nuclear war" not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don't think that applies to any of the three papers you linked above.

I don't currently have a great operationalization of what I mean by "fundamental confusions around global catastrophic risks", so I am sorry for not being able to be more clear on this. One (admittedly rough) operationalization might be "research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space". It seems plausible to me that you are currently aiming to write some papers with a goal like this in mind, but I don't think most of GCRI's papers achieve that. "A model for the probability of nuclear war" did feel like a paper that might actually achieve that, though from what you said it might not actually have had that goal.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-17T04:59:00.653Z · score: 8 (2 votes) · EA · GW

Thanks for posting the response! Some short clarifications:

We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI's work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.

My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It just so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn't update too much on my specific feedback. My perspective might still be useful for understanding the experience of people closer to my level of expertise, of whom there are many, and I do think there is important truth to it (and it obviously helps me build better models of the policy space, which I do think is valuable).

It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.

I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is some shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-16T03:23:30.677Z · score: 16 (5 votes) · EA · GW

This is the (very slightly edited) feedback that I sent to GCRI based on their application (caveat that GCR-policy is not my expertise and I only had relatively weak opinions in the discussion around this grant, so this should definitely not be seen as representative of the broader opinion of the fund):

I was actually quite positive on this grant, so the primary commentary I can provide is a summary of what would have been sufficient to move me to be very excited about the grant.
Overall, I have to say that I was quite positively surprised after reading a bunch of GCRI's papers, which I had not done before (in particular the paper that lists and analyzes all the nuclear weapon close-calls).
I think the biggest thing that made me hesitant about strongly recommending GCRI, is that I don't have a great model of who GCRI is trying to reach. I am broadly not super excited about reaching out to policy makers at this stage of the GCR community's strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there. I do have some models of capacity-building that suggest some concrete actions, but those have more to do with building functional research institutions that are focused on recruiting top-level talent to think more about problems related to the long term future.
I noticed that while I ended up being quite positively surprised by the GCRI papers, I hadn't read any of them up to that point, and neither had any of the other fund members. This made me think that we are likely not the target audience of those papers. And while I did find them useful, I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.
I think the key thing I would need in order to be very excited about GCRI is to understand and be excited by the target group that GCRI is trying to communicate to. My current model suggests that GCRI is primarily trying to reach existing policy makers, which seems unlikely to contribute much to furthering the conceptual progress around global catastrophic risks.

Seth wrote a great response that I think he is open to posting to the forum.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-16T03:20:01.451Z · score: 4 (2 votes) · EA · GW

Sorry for the long delays on this. I am still planning to get back to you; there were just some other things that ended up taking up all of my LTF-Fund-allocated time, which are now resolved, so I should be able to write up my thoughts soon.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-05-01T01:19:53.964Z · score: 8 (4 votes) · EA · GW

I think if we had a vetting process that people could trust to reliably identify good people, even if those people had recently made a bunch of critical statements about high-status institutions or something in that reference class (or had their most recent project fail dramatically, etc.), then I think that might be fine.

But having such a vetting process, having it achieve a very low false-negative rate, and making it transparent that the process is that good all seem difficult enough to make this too costly.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-27T21:43:25.773Z · score: 16 (9 votes) · EA · GW

I am hesitant about this. To serve as a functional social safety net that allows people to take high-risk actions (including in the social domain, in the form of criticisms of high-status people or institutions), I think a high barrier to entry for the EA Hotel might drastically reduce the psychological safety it could provide to many people.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-26T04:27:41.126Z · score: 4 (2 votes) · EA · GW

Yes, that corresponds to point (1), not point (2).

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-26T02:10:24.325Z · score: 43 (15 votes) · EA · GW

This is the feedback that I sent to Greg about his EA Hotel application, published with his permission. (He also provided some good responses that I hope he will post as a reply.)

Thoughts on the EA Hotel:

The EA Hotel seems broadly pretty promising, though I do also have a good amount of concerns. First, the reasons why I am excited about the EA Hotel:

Providing a safety net: I think psychological safety matters a lot for people being able to take risks and have creative thoughts. Given that I think most of the value of the EA community comes from potentially pursuing high-risk projects and proposing new unconventional ideas, improving things on this dimension strikes me as pretty key for the success of the overall community.

I expect the EA Hotel has a chance to serve as a cheap, distributed safety net for a lot of people who are worried that if they start working on EA stuff, they will run out of money soon and then potentially end up having to take drastic actions. The EA Hotel can both significantly extend those people's runway and soften the costs of running out of money for anyone who is working on EA-related ideas.

Acting on historical interest: There has been a significant historical interest in creating an EA Hub in a location with much lower living expenses, and from a process perspective I think we should very strongly reward people who feel comfortable acting on that level of community interest. Even if the EA Hotel turns out to be a bad idea, it strikes me as important that community members can take risks like this and have at least their expenses reimbursed afterwards (even if it turns out that the idea doesn't work out when implemented), as long as they went about pursuing the project in broadly reasonable terms.

Building high-dedication cultures: I generally think that developing strong cultures of people who have high levels of dedication is a good way of multiplying the efforts of the people involved, and is generally something that should be encouraged. I think the EA Hotel has a chance to develop a strong high-dedication culture because moving to it requires a level of sacrifice (moving to Blackpool) that means only people above a pretty high dedication threshold will show up. I do also think this can backfire (see the later section on concerns).

I do however also have a set of concerns about the hotel. I think over the past few weeks as more things have been written about the hotel, I have started feeling more positive towards it, and would likely recommend a grant to the EA Hotel in the next LTF-Fund grant rounds, though I am not certain.

I think the EA Hotel is more likely to be net negative than most other projects I have recommended grants to, though I don't think it has a significant chance of being hugely negative.

Here are the concrete models around my concerns:

1. I think the people behind the EA Hotel were initially overeager to publicize the EA Hotel via broad media outreach in things like newspapers and other media outlets with broad reach. I think interaction with the media is well-modeled by a unilateralist's-curse-like scenario, in which many participants have the individual choice to create a media narrative, and whoever moves first frames a lot of the media conversation. In general, I think it is essential for organizations in the long-term future space to recognize this kind of dynamic and be hesitant to take unilateral action in cases like this.

I think the EA Hotel does not benefit much from media attention, and the community at large likely suffers from the EA Hotel being widely discussed in the media (not because it's weird, which is a dimension on which I think EA is broadly far too risk-averse, but because it communicates the presence of free resources that are there for the taking by anyone vaguely associated with the community, which tends to attract unaligned people and cause adversarial scenarios).

Note: Greg responded to this and I now think this point is mostly false, though I still think something in this space went wrong.

2. I think there is a significant chance of the culture of the EA Hotel becoming actively harmful for the people living there, and also of it sparking unnecessary conflict in the broader community. I think there are two reasons why I am more worried about this than for most other locations:

  • I expect the EA Hotel to attract a kind of person who is pretty young, highly dedicated and looking for some guidance on what to do with their life. I think this makes them a particularly easy and promising target for people who tend to abuse that kind of trust relationship and who are looking for social influence.
  • I expect the hotel to attract people who have a somewhat contrarian position in the community, for a variety of different reasons. I think some of it is the initial founding effect that I already observed, but another likely cause is that the hotel will likely be filled with highly dedicated people who are not being offered jobs or grants that would allow them to work from other locations, which I think can cause many of these people to feel disenfranchised from the community and feel a (potentially quite valid) sense of frustration.
    • I am not at all opposed to helping people who are dissatisfied with things in the community to coordinate on causing change, and usually think that's good. But I think locality matters a lot for dispute resolution, and I think it's plausible that the EA Hotel could form a geographically and memetically isolated group that is predisposed to conflict with the rest of the EA community, in a way that could result in a lot of negative-sum conflict.
  • Generally, high-dedication cultures are more likely to cause people to overcommit or take drastic actions that they later regret. I think this is usually worth the cost, but it compounds with some of the other factors I list here.

3. I don't have a sense that Greg wants to really take charge of the logistics of running the hotel, and I don't have a great candidate for someone else to run it, though it seems pretty plausible that we could find someone if we invest some time into looking.

Summary:

Overall, I think all of my concerns can be overcome, at which point I would be quite excited about supporting the hotel. It seems easy to change the way the hotel relates to the media, I think there are a variety of things one could do to avoid cultural problems, and I think we could find someone who can take charge of the logistics of running the hotel.

At the moment, I think I would be in favor of giving a grant that covers the runway of the hotel for the next year. (There is the further question of whether they should get enough money to buy the hotel next door, which is something I am much less certain about.)

Comment by habryka on Lecture Videos from Cambridge Conference on Catastrophic Risk · 2019-04-24T04:22:33.532Z · score: 5 (4 votes) · EA · GW

It's pretty rarely requested, so it's not super high on my priority list. My guess is it will happen whenever we end up doing our bigger editor upgrade, which we have planned for later in the year.

Long-Term Future Fund: April 2019 grant recommendations

2019-04-23T07:00:00.000Z · score: 136 (72 votes)
Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T17:58:58.984Z · score: 6 (3 votes) · EA · GW

Yep, I saw that. I didn't actually intend to criticize your use of the quiz, sorry if it came across that way. I just gave it a try and figured I would contribute some data.

(This doesn't mean I agree with how 80k communicates information. I haven't kept up at all with 80k's writing, so I don't have any strong opinions either way here)

Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T04:05:10.288Z · score: 14 (6 votes) · EA · GW

I got them on basically every setting that remotely applied to me.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-18T03:41:55.038Z · score: 6 (3 votes) · EA · GW

I think sadly pretty low, based on my current model of everyone's time constraints, and also CEA's logistical constraints.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-17T22:39:34.826Z · score: 21 (7 votes) · EA · GW

(This is just my personal perspective and does not aim to reflect the opinions of anyone else on the LTF-Fund)

I am planning to send more feedback on this to the EA Hotel people.

I have actually broadly come around to the EA Hotel being a good idea, but at the time we made the grant decision there was a lot less evidence and there were fewer writeups around, and it was those writeups by a variety of people that convinced me it is likely a good idea, with some caveats.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-15T23:18:21.550Z · score: 4 (2 votes) · EA · GW

Yeah, that's what I intended to say. "In the world where I come to the above opinion, I expect my crux will have been that whatever made CFAR historically work, is still working"

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T21:14:51.811Z · score: 3 (2 votes) · EA · GW

Will update to say "help facilitate". Thanks for the correction!

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T23:37:05.857Z · score: 4 (2 votes) · EA · GW

He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T22:09:01.648Z · score: 21 (10 votes) · EA · GW

I think that people should feel comfortable sharing their system-1 expressions, in a way that does not immediately imply judgement.

I am thinking of stuff like the non-violent communication patterns, where you structure your observation in the following steps:

1. List a set of objective observations

2. Report your experience upon making those observations

3. Share your personal interpretations of those experiences and what they imply about your model of the world

4. Make the requests that follow from those models

I think it's fine to stop part-way through this process, but that it's generally a good idea to not skip any steps. So I think it's fine to just list observations, and it's fine to just list observations and then report how you feel about those things, as long as you clearly indicate that this is your experience and doesn't necessarily involve judgement. But it's a bad idea to immediately skip to the request/judgement step.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T19:27:44.868Z · score: 6 (4 votes) · EA · GW

I will get back to you, but it will probably be a few days. It seems fairer to first send feedback to the people I said I would send private feedback to, and then come back to the public feedback requests.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T19:04:46.084Z · score: 38 (11 votes) · EA · GW

I don't get compensated, though I also don't think compensation would make much of a difference for me or anyone else on the fund (except maybe Alex).

Everyone on the fund is basically dedicating all of their resources towards EA stuff, and is generally giving up most of their salary potential to work in EA. I don't think it would make much sense for us to get more money, given that we are already de facto donating everything above a certain threshold (either literally, in the case of the two Matts, or indirectly, by taking a pay cut and working in EA).

I think if people give more money to the fund because they come to trust the decisions of the fund more, then that seems like it would incentivize more things like this. Also if people bring up strong arguments against any of the reasoning I explained above, then that is a great win, since I care a lot about our fund distributions getting better.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T18:45:51.497Z · score: 14 (10 votes) · EA · GW

I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think weirdness is overall a pretty bad predictor of impact, in both the positive and negative direction. I think it's a good emotion to pay attention to, because you can often learn valuable things from it, but I think it only sometimes gives rise to real arguments for or against an idea.

It is also very susceptible to framing effects. The comment above says "$39,000 to make unsuccessful youtube videos". That sure sounds naive and weird, but the whole argument relies on the word "unsuccessful", which is a pure framing device and fully unsubstantiated.

And even though I think weirdness is only a mediocre predictor of impact, I am quite confident that the degree to which a grant or a grantee is perceived as intuitively weird by broad societal standards is still by far the biggest predictor of whether your project can receive a grant from any major EA granting body (I don't think this is necessarily the fault of the granting bodies, but is instead a result of a variety of complicated social incentives that force their hand most of the time).

I think this has an incredibly negative effect on the ability of the Effective Altruism community to make progress on any of the big problems we care about, and I really don't think we want to push further in that direction.

I think you want to pay attention to whether you perceive something as weird, but I don't think that feeling should be among your top considerations when evaluating an idea or project, and I think right now it is usually the single biggest consideration in most discourse.

After chatting with you about this via PMs, I think you aren't necessarily making that mistake, since I think you do emphasize that there are many arguments that could convince you that something weird is still a good idea.

I think in particular it is important for "something being perceived as weird is definitely not sufficient reason to dismiss it as an effective intervention" to be common knowledge and part of public discourse, as well as "if someone is doing something that looks weird to me, without me having thought much about it or asked them much about their reasons for doing things, then that isn't much evidence that what they are doing is a bad idea".

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T03:30:56.274Z · score: 11 (7 votes) · EA · GW

The primary thing I expect him to do with this grant is to work together with John Salvatier on doing research on skill transfer between experts (which I am partially excited about because that's the kind of thing that I see a lot of world-scale model building and associated grant-making being bottlenecked on).

However, as I mentioned in the review, if he finds that he can't contribute to that as effectively as he thought, I want him to feel comfortable pursuing other research avenues. I don't currently have a short-list of what those would be, but would probably just talk with him about what research directions I would be excited about, if he decides not to collaborate with John. One of the research projects he suggested, related to studying historical social movements and some broader issues around societal coordination mechanisms, seemed decent.

I primarily know about the work he has so far produced with John Salvatier, and also know that he has demonstrated general competence in a variety of other projects, including making money managing a small independent hedge fund, running a research project for the Democracy Defense Fund, doing some research at Brown University, and participating in some forecasting tournaments and scoring well.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T03:00:01.423Z · score: 6 (3 votes) · EA · GW

Hmm, I guess it depends a bit on how you view this.

If you model this in terms of "total financial resources going to EA-aligned people", then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.

If you want to model it as "money controlled directly by EA institutions" then it's closer to your number.

I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T02:51:12.593Z · score: 4 (2 votes) · EA · GW

Ah, yes. The second one. Will update.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T02:50:36.200Z · score: 16 (11 votes) · EA · GW

Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it's unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn't, as I have argued below).

The outcome might be that some people start disliking HPMoR, but that doesn't seem super bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR on net benefits a lot more from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.

I have some vague feeling that there might be some more weird downstream effects of this, but I don't think I have any concrete models of how they might happen, and would be interested in hearing more of people's concerns.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T01:29:58.286Z · score: 10 (4 votes) · EA · GW

Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don't think I understand what kind of reputational risks you are worried about.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T00:14:27.588Z · score: 12 (7 votes) · EA · GW

Here is my rough Fermi estimate:

My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost to overhead, communications, technology (the EA Funds platform), and management.

Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.

To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
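
For anyone who wants to vary the assumptions, here is a minimal sketch of the arithmetic above in Python. All of the input numbers are the rough guesses from this comment (salary, tax rate, staff overhead, grant count), not measured figures:

```python
# Rough sketch of the EA Grants overhead Fermi estimate above.
# All inputs are the guesses from the comment, not measured values.

counterfactual_earnings = 150_000  # estimated forgone earnings of the full-time person
cea_salary = 60_000                # additional salary paid by CEA
tax_rate = 0.30                    # assumed tax rate on that salary
staff_equivalents = 1.5            # one full-time person plus ~0.5 lost to overhead/management
grants_per_year = 100              # rough upper bound on grants made per year

annual_overhead = (counterfactual_earnings + tax_rate * cea_salary) * staff_equivalents
average_per_grant = annual_overhead / grants_per_year
marginal_per_grant = 0.5 * average_per_grant  # guessed at ~50% of average overhead

print(f"Annual overhead: ${annual_overhead:,.0f}")        # ~$252,000
print(f"Average per grant: ${average_per_grant:,.0f}")    # ~$2,520
print(f"Marginal per grant: ${marginal_per_grant:,.0f}")  # ~$1,260
```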

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T00:08:23.949Z · score: 15 (4 votes) · EA · GW

Sorry for the delay, others seem to have given a lot of good responses in the meantime, but here is my current summary of those concerns:

1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

By word count, the HPMoR writeup is (I think) among the three longest writeups I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that lets you verify that.

The core arguments I provided in the writeup above seem sufficiently strong to me. They won't necessarily convince a completely independent observer, but for someone with context on community building and on general work done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.

I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, without constraining themselves to only account for evidence that is easily communicable to other people. They should then also invest significant resources into communicating whatever can be communicated about their reasons and intuitions, and actively seek out counterarguments and additional evidence that would change their mind.

2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:

  • I don't think the aim of this grant should be "to recruit IMO and EGMO winners into the EA community". I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term future, teaching them basic thinking tools, and giving them opportunities to talk to others who have similar interests.
    • I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read it, found it interesting on its own merits, and would themselves be glad to receive it as a gift. I don't think any of the other books you proposed would be received in the same way; I think they are much more likely to be received as advocacy material trying to recruit the students into some kind of in-group.
    • Jan's comment summarized the concerns I have here reasonably well.
  • As Misha said, this grant is possible because the IMO and EGMO organizers are excited about giving out HPMoRs as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that did not go through the organizers of these events).
  • As Ben Pace said, I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed: the very best people tend to have a massively outsized impact with their contributions. Improving the reasoning and judgement of some of the best people on the planet strikes me as quite valuable.
3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

Misha responded to this. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn't have been any books given out at IMO or EGMO. All the organizers did was ask whether they would be able to give out prizes, conditional on finding a sponsor for them. I don't see any problems with this.

4. Not necessarily that risky funds shouldn't be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.

My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the section "Why might you choose to not donate to this fund?":

First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.

This is the first reason we list for why someone might not want to donate to this fund. It doesn't necessarily translate directly into risky grants, but I think it does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track records.

From a communication perspective, one of the top reasons I invested so much time into this grant writeup was to be transparent about the kinds of interventions we are likely to fund, and to help donors decide whether they want to donate to this fund. At the very least, I will continue advocating for early-stage and potentially weird-looking grants as long as I am part of the LTF board, and donors should know that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund team that we update our fund page with that wording.

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T23:31:16.690Z · score: 17 (11 votes) · EA · GW

Seems good.

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

I am overall still quite positive on CFAR. I have significant concerns, but the total impact CFAR has had over the course of its existence strikes me as very large and easily worth the resources it has taken up so far.

I don't think it would be the correct choice for CFAR to take irreversible action right now just because they (correctly) decided not to run a fall fundraiser, and I still assign significant probability to CFAR actually being on the right track to continue having a large impact. My model here is mostly that whatever allowed CFAR to have its historical impact has not broken, and so will continue producing value of the same type.

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

I considered this for quite a while, but ultimately decided against it. I think grantmakers should generally be very hesitant to make earmarked or conditional grants to organizations without knowing in close detail how the organization operates. Some things that seem easy to change from the outside often turn out to be really hard to change for good reasons, and conditions also have the potential to create a kind of adversarial relationship in which the organization is incentivized to put in the minimum effort necessary to meet the conditions of the grant, which I think tends to make transparency a lot harder.

Overall, I much more strongly prefer to recommend unconditional grants with concrete suggestions for what changes would cause future unconditional grants to be made to the organization, while communicating clearly what kind of long-term performance metrics or considerations would cause me to change my mind.

I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, generally get a better sense of how CFAR operates and think about the big-picture effects that CFAR has on the long-term future and global catastrophic risk. I think I am likely to then either:

  • make recommendations for a set of changes with conditional funding,
  • decide that CFAR does not require further funding from the LTF,
  • or be convinced that CFAR's current plans make sense and that they should have sufficient resources to execute those plans.
Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T20:34:02.898Z · score: 66 (21 votes) · EA · GW

Here is a rough summary of the process; it's hard to explain spreadsheets in words, so this might end up sounding a bit confusing:

  • We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer), in which they were encouraged to assign a number from -5 to +5 to each application
  • Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the grant was a good or a bad idea
  • We then had a number of video-chat meetings in which we tried to go through all the grants that at least one person thought were a good idea, and we had pretty extensive discussions about those grants. During those meetings we also agreed on next actions for follow-ups (scheduling meetings with some of the potential grantees, reaching out to references, etc.), the results of which we would then discuss at the next all-hands meeting
  • Interspersed with the all-hands meetings I also had a lot of 1-on-1 meetings (with both other fund-members and grantees) in which I worked in detail through some of the grants with the other person, and hashed out deeper disagreements we had about some of the grants (like whether certain causes and approaches are likely to work at all, how much we should make grants to individuals, etc.)
  • As a result of these meetings there was significant updating of everyone's votes on each grant, with almost every grant we made having at least two relatively strong supporters and a total score above 3 in aggregate votes

However, some fund members weren't super happy with this process, and I also think it encouraged too much consensus-based decision-making: many of the grants with the highest vote scores were ones that everyone thought were vaguely a good idea, but that nobody was strongly excited about.

We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly at the discretion of the individual fund member, and we would add up grant allocations from the 45% budget until we ran out of our allocated budget.

Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.
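
For readers who find the spreadsheet description hard to follow, here is a minimal Python sketch of roughly how that second allocation method works. The function, data structures, and budget figure are hypothetical illustrations of the description above, not the fund's actual tooling or exact process.

```python
# Hypothetical sketch of the second allocation method: each fund member
# suggests grant allocations against a 15%-of-budget discretionary pot and a
# 45%-of-budget pot. Discretionary picks go through unless another fund member
# vetoes them; picks from the 45% pots are then added until the total
# available budget runs out. All names and numbers are illustrative only.

TOTAL_BUDGET = 900_000  # placeholder figure, not the fund's real budget

def allocate(discretionary_picks, shared_picks, vetoes):
    """discretionary_picks / shared_picks: lists of (grant_name, amount),
    pooled across fund members. vetoes: grant names vetoed by some member."""
    approved, spent = [], 0

    # 15% category: granted at the individual member's discretion,
    # absent a veto from another fund member.
    for grant, amount in discretionary_picks:
        if grant not in vetoes and spent + amount <= TOTAL_BUDGET:
            approved.append(grant)
            spent += amount

    # 45% category: keep adding suggested grants until the budget is exhausted.
    for grant, amount in shared_picks:
        if grant in vetoes or grant in approved or spent + amount > TOTAL_BUDGET:
            continue
        approved.append(grant)
        spent += amount

    return approved, spent
```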

Comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T20:14:54.305Z · score: 10 (3 votes) · EA · GW

I agree. Though I expect the ratio of funds distributed to staff to stay roughly the same, at least for a while, and probably to go up a bit.

I think older and larger organizations will have smaller funds-distributed-to-staff ratios, but I think that's mostly because coordinating people is hard and the marginal productivity of a hire goes down a lot after the initial founders, so you need to hire a lot more people to produce the same quality of output.

Major Donation: Long Term Future Fund Application Extended 1 Week

2019-02-16T23:28:45.666Z · score: 41 (19 votes)

EA Funds: Long-Term Future fund is open to applications until Feb. 7th

2019-01-17T20:25:29.163Z · score: 19 (13 votes)

Long Term Future Fund: November grant decisions

2018-12-02T00:26:50.849Z · score: 35 (29 votes)

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

2018-11-21T03:41:38.850Z · score: 21 (11 votes)