Long Term Future Fund: April 2019 grant decisions

post by Habryka · 2019-04-08T01:00:10.890Z · score: 131 (61 votes) · 195 comments


  Grant Recipients
  Grant Rationale
  Writeups by Helen Toner
    Alex Lintz ($17,900) 
  Writeups by Matt Wage
    Tessa Alexanian ($26,250) 
    Shahar Avin ($40,000) 
    Lucius Caviola ($50,000) 
    Ought ($50,000)
  Writeups by Alex Zhu
    Nikhil Kunapuli ($30,000) 
    Anand Srinivasan ($30,000) 
    David Girardo ($30,000) 
  Writeups by Oliver Habryka
    Mikhail Yagudin ($28,000) 
      From the application:
      My thoughts and reasoning
      What effects does reading HPMOR have on people?
      How good of a target group are Math Olympiad winners for these effects?
      Is the team competent enough to execute on their plan?
    Alex Turner ($30,000) 
      From the application:
      My thoughts and reasoning
      Potential concerns
    Orpheus Lummis ($10,000) 
      From the application:
      My thoughts and reasoning
    Tegan McCaslin ($30,000) 
      From the application:
      My thoughts and reasoning
    Anthony Aguirre ($70,000) 
      From the application:
      My thoughts and reasoning
    Lauren Lee ($20,000) 
      From the application:
      My thoughts and reasoning
    Ozzie Gooen ($70,000) 
      From the application:
      My thoughts and reasoning
    Johannes Heidecke ($25,000) 
      From the application:
      My thoughts and reasoning
    Vyacheslav Matyuhin ($50,000) 
      From the application:
      My thoughts and reasoning
    Jacob Lagerros ($27,000) 
      From the application:
      My thoughts and reasoning
    Connor Flexman ($20,000) 
    Eli Tyre ($30,000) 
    Robert Miles ($39,000) 
      From the application:
      My thoughts and reasoning
    MIRI ($50,000)
      My thoughts and reasoning
      Thoughts on funding gaps
    CFAR ($150,000)

Please note that the following grants are only recommendations, as all grants are still pending an internal due diligence process by CEA.

This post contains our allocation and some explanatory reasoning for our Q1 2019 grant round. We opened an application for grant requests earlier this year, which remained open for about one month, after which we received a large, unanticipated donation of about $715k. This caused us to reopen the application for another two weeks. We then used a mixture of independent voting and consensus discussion to arrive at our current grant allocation.

What is listed below is only a set of grant recommendations to CEA, who will run these by a set of due-diligence tests to ensure that they are compatible with their charitable objectives and that making these grants will be logistically feasible.

Grant Recipients

Each grant recipient is followed by the size of the grant and their one-sentence description of their project.

Total distributed: $923,150

Grant Rationale

Here we explain the purpose of each grant and summarize our reasoning behind recommending it. Each summary was written by the fund member who was most excited about recommending the relevant grant (subject to constraints on who had time available for writeups). The summaries differ a lot in length, depending on how much time each fund member had available to explain their reasoning.

Writeups by Helen Toner

Alex Lintz ($17,900)

A two-day, career-focused workshop to inform and connect European EAs interested in AI governance

Alex Lintz and some collaborators from EA Zürich proposed organizing a two-day workshop for EAs interested in AI governance careers, with the goals of giving participants background on the space, offering career advice, and building community. We agree with their assessment that this space is immature and hard to enter, and believe their suggested plan for the workshop looks like a promising way to help participants orient to careers in AI governance.

Writeups by Matt Wage

Tessa Alexanian ($26,250)

A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers

We are funding Tessa Alexanian to run a one day biosecurity summit, immediately following the SynBioBeta industry conference. We have also put Tessa in touch with some experienced people in the biosecurity space who we think can help make sure the event goes well.

Shahar Avin ($40,000)

Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers

We are funding Shahar Avin to help him hire an academic research assistant and for other miscellaneous research expenses. We think positively of Shahar’s past work (for example this report), and multiple people we trust recommended that we fund him.

Lucius Caviola ($50,000)

Conducting postdoctoral research at Harvard on the psychology of EA/long-termism

We are funding Lucius Caviola for a 2-year postdoc at Harvard working with Professor Joshua Greene. Lucius plans to study the psychology of effective altruism and long-termism, and an EA academic we trust had a positive impression of him. We are splitting the cost of this project with the EA Meta Fund because some of Caviola’s research (on effective altruism) is a better fit for the Meta Fund while some of his research (on long-termism) is a better fit for our fund.

Ought ($50,000)

We funded Ought in our last round of grants, and our reasoning for funding them in this round is largely the same. Additionally, we wanted to help Ought diversify its funding base because it currently receives almost all its funding from only two sources and is trying to change that.

Our comments from last round:

Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications. We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant. Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future.

Writeups by Alex Zhu

Nikhil Kunapuli ($30,000)

A study of safe exploration and robustness to distributional shift in biological complex systems

Nikhil Kunapuli is doing independent deconfusion research for AI safety. His approach is to develop better foundational understandings of various concepts in AI safety, like safe exploration and robustness to distributional shift, by exploring these concepts in complex systems science and theoretical biology, domains outside of machine learning for which these concepts are also applicable. To quote an illustrative passage from his application:

When an organism within an ecosystem develops a unique mutation, one of several things can happen. At the level of the organism, the mutation can either be neutral in terms of fitness, maladaptive and leading to reduced reproductive success and/or death, or adaptive. For an adaptive mutation, the upgraded fitness of the organism will change the fitness landscape for all other organisms within the ecosystem, and in response, the structure of the ecosystem will either be perturbed into a new attractor state or destabilized entirely, leading to ecosystem collapse. Remarkably, most mutations do not kill their hosts, and most mutations also do not lead to ecosystem collapse. This is actually surprising when one considers the staggering complexity present within a single genome (tens of thousands of genes deeply intertwined through genomic regulatory networks) as well as an ecosystem (billions of organisms occupying unique niches and constantly co-evolving). One would naïvely think that a system so complex must be highly sensitive to change, and yet these systems are actually surprisingly robust. Nature somehow figured out a way to create robust organisms that could respond to and function in a shifting environment, as well as how to build ecosystems in which organisms could be free to safely explore their adjacent possible new forms without killing all other species.

Nikhil spent a summer doing research for the New England Complex Systems Institute. He also spent 6 months as the cofounder and COO of an AI hardware startup, which he left because he decided that direct work on AI safety is more urgent and important.

I recommended that we fund Nikhil because I think Nikhil’s research directions are promising, and because I personally learn a lot about AI safety every time I talk with him. The quality of his work will be assessed by researchers at MIRI.

Anand Srinivasan ($30,000)

Formalizing perceptual complexity with application to safe intelligence amplification

Anand Srinivasan is doing independent deconfusion research for AI safety. His angle of attack is to develop a framework that will allow researchers to make provable claims about what specific AI systems can and cannot do, based on factors like their architectures and their training processes. For example, AlphaGo can “only have thoughts” about patterns on Go boards and lookaheads, which aren’t expressive enough to encode thoughts about malicious takeover.

AI researchers can build safe and extremely powerful AI systems by relying on intuitive judgments of their capabilities. However, these intuitions are non-rigorous and prone to error, especially since powerful optimization processes can generate solutions that are totally novel and unexpected to humans. Furthermore, competitive dynamics will incentivize rationalization about which AI systems are safe to deploy. Under fast takeoff assumptions, a single rogue AI system could lead to human extinction, making it particularly risky to rely exclusively on intuitive judgments about which AI systems are safe. Anand’s goal is to develop a framework that formalizes these intuitions well enough to permit future AI researchers to make provable claims about what future AI systems can and can’t internally represent.

Anand was the CTO of an enterprise software company that he cofounded with me, where he managed a six-person engineering team for two years. Upon leaving the company, he decided to refocus his efforts toward building safe AGI. Before dropping out of MIT, Anand worked on Ising models for fast image classification and fuzzy manifold learning (which was later independently published as a top paper at NIPS).

I recommended that we fund Anand because I think Anand’s research directions are promising, and I personally learn a lot about AI safety every time I talk with him. The quality of Anand’s work will be assessed by researchers at MIRI.

David Girardo ($30,000)

A research agenda rigorously connecting the internal and external views of value synthesis

David Girardo is doing independent deconfusion research for AI safety. His angle of attack is to elucidate the ontological primitives for representing hierarchical abstractions, drawing from his experience with type theory, category theory, differential geometry, and theoretical neuroscience.

I decided to fund David because I think David’s research directions are very promising, and because I personally learn a lot about AI safety every time I talk with him. Tsvi Benson-Tilsen, a MIRI researcher, has also recommended that David get funding. The quality of David’s work will be assessed by researchers at MIRI.

Writeups by Oliver Habryka

I have a broad sense that funders in EA tend to give little feedback to the organizations they fund, as well as to organizations they explicitly decided not to fund (usually due to time constraints). So in my writeups below I have tried to be as transparent as possible in explaining what actually caused me to believe each grant was a good idea and what my biggest hesitations were, and I have taken many opportunities to explain background models of mine that might help others better understand my future decisions in this space.

For some of the grants below, I think there exist more publicly defensible (or easier to understand) arguments for the grants that I recommended. However, I tried to explain the actual models that drove my decisions, which are often hard to compress into a few paragraphs of text, so I apologize in advance that some of the explanations below are almost certainly a bit hard to understand.

Note that when I’ve written about how I hope a grant will be spent, this was in aid of clarifying my reasoning and is in no way meant as a restriction on what the grant should be spent on. The only restriction is that it should be spent on the project they applied for in some fashion, plus any further legal restrictions that CEA requires.

Mikhail Yagudin ($28,000)

Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020

From the application:

EA Russia has the oral agreements with IMO [International Math Olympiad] 2020 (Saint Petersburg, Russia) & EGMO [European Girls’ Mathematical Olympiad] 2019 (Kyiv, Ukraine) organizers to give HPMORs [copies of Harry Potter and the Methods of Rationality] to the medalists of the competitions. We would also be able to add an EA / rationality leaflet made by CFAR (I contacted Timothy Telleen-Lawton on that matter).

My thoughts and reasoning

[Edit & clarification: The books will be given out by the organisers of the IMO and EGMO as prizes for the 650 people who got far enough to participate, all of whom are "medalists".]

My model for the impact of this grant roughly breaks down into three questions:

  1. What effects does reading HPMOR have on people?
  2. How good of a target group are Math Olympiad winners for these effects?
  3. Is the team competent enough to execute on their plan?

What effects does reading HPMOR have on people?

My models of the effects of HPMOR stem from my empirical observations and my inside view on rationality training.

How good of a target group are Math Olympiad winners for these effects?

I think that Math Olympiad winners are a very promising demographic within which to find individuals who can contribute to improving the long-term future. I believe Math Olympiads select strongly on IQ as well as (weakly) on conscientiousness and creativity, which are all strong positives. Participants are young and highly flexible; they have not yet made too many major life commitments (such as which university they will attend), and are in a position to use new information to systematically change their lives’ trajectories. I view handing them copies of an engaging book that helps teach scientific, practical and quantitative thinking as a highly asymmetric tool for helping them make good decisions about their lives and the long-term future of humanity.

I’ve also visited and participated in a variety of SPARC events, and found the culture there (which is likely to be at least somewhat representative of Math Olympiad culture) very healthy in a broad sense. Participants displayed high levels of altruism, a lot of willingness to help one another, and an impressive amount of ambition to improve their own thinking and affect the world in a positive way. These observations make me optimistic about efforts that build on that culture.

I think it’s important when interacting with minors, and attempting to improve (and thus change) their life trajectories, to make sure to engage with them in a safe way that is respectful of their autonomy and does not put social pressures on them in ways they may not yet have learned to cope with. In this situation, Mikhail is working with/through the institutions that run the IMO and EGMO, and I expect those institutions to (a) have lots of experience with safeguarding minors and (b) have norms in place to make sure that interactions with the students are positive.

Is the team competent enough to execute on their plan?

I don’t have a lot of information on the team, don’t know Mikhail, and have not received any major strong endorsement for him and his team, which makes this the weakest link in the argument. However, I know that they are coordinating both with SPARC (which also works to give books like HPMOR to similar populations) and the team behind the highly successful Russian printing of HPMOR, two teams who have executed this kind of project successfully in the past. So I felt comfortable recommending this grant, especially given its relatively limited downside.

Alex Turner ($30,000)

Building towards a “Limited Agent Foundations” thesis on mild optimization and corrigibility

From the application:

I am a third-year computer science PhD student funded by a graduate teaching assistantship; to dedicate more attention to alignment research, I am applying for one or more trimesters of funding (spring term starts April 1).


Last summer, I designed an approach to the “impact measurement” subproblem of AI safety: “what equation cleanly captures what it means for an agent to change its environment, and how do we implement it so that an impact-limited paperclip maximizer would only make a few thousand paperclips?”. I believe that my approach, Attainable Utility Preservation (AUP), goes a long way towards answering both questions robustly, concluding:

> By changing our perspective from “what effects on the world are ‘impactful’?” to “how can we stop agents from overfitting their environments?”, a natural, satisfying definition of impact falls out. From this, we construct an impact measure with a host of desirable properties […] AUP agents seem to exhibit qualitatively different behavior […]

Primarily, I aim both to output publishable material for my thesis and to think deeply about the corrigibility and mild optimization portions of MIRI’s machine learning research agenda. Although I’m excited by what AUP makes possible, I want to lay the groundwork of deep understanding for multiple alignment subproblems. I believe that this kind of clear understanding will make positive AI outcomes more likely.

My thoughts and reasoning

I’m excited about this because:

Potential concerns

These intuitions, however, are a bit in conflict with some of the concrete research that Alex has actually produced. My inside views on AI Alignment make me think that work on impact measures is very unlikely to result in much concrete progress on what I perceive to be core AI Alignment problems, and I have talked to a variety of other researchers in the field who share that assessment. I think it’s important that this grant not be viewed as an endorsement of the concrete research direction that Alex is pursuing, but only as an endorsement of the higher-level process that he has been using while doing that research.

As such, I think it was a necessary component of this grant that I have talked to other people in AI Alignment whose judgment I trust, who do seem excited about Alex’s work on impact measures. I think I would not have recommended this grant, or at least this large of a grant amount, without their endorsement. I think in that case I would have been worried about a risk of diverting attention from what I think are more promising approaches to AI Alignment, and a potential dilution of the field by introducing a set of (to me) somewhat dubious philosophical assumptions.

Overall, while I try my best to form concrete and detailed models of the AI Alignment research space, I don’t currently devote enough time to it to build detailed models that I trust enough to put very large weight on my own perspective in this particular case. Instead, I am mostly deferring to other researchers in this space that I do trust, a number of whom have given positive reviews of Alex’s work.

In aggregate, I have a sense that the way Alex went about working on AI Alignment is a great example for others to follow, I’d like to see him continue, and I am excited about the LTF Fund giving out more grants to others who try to follow a similar path.
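To make the impact-measure idea from the application above concrete, here is a minimal toy sketch of an AUP-style penalty: an action is penalized by how much it changes the agent's attainable utility for a set of auxiliary goals, relative to doing nothing. All names and numbers here are hypothetical illustrations, not Alex's actual implementation.

```python
# Toy sketch of an Attainable Utility Preservation (AUP)-style penalty.
# In the real method these would be learned Q-functions; here we use
# hand-written toy values over a single state "s0".

def aup_penalty(q_aux, state, action, noop="noop"):
    """Sum, over auxiliary goals, of how much this action shifts
    attainable utility relative to doing nothing."""
    return sum(abs(q[(state, action)] - q[(state, noop)]) for q in q_aux)

def shaped_reward(reward, q_aux, state, action, lam=1.0):
    # Primary reward minus the scaled attainable-utility change.
    return reward - lam * aup_penalty(q_aux, state, action)

# Two hypothetical auxiliary goals. "disable_off_switch" shifts attainable
# utility a lot under both, so it is heavily penalized even if its
# primary reward is high; "make_paperclip" barely changes anything.
q1 = {("s0", "noop"): 1.0, ("s0", "make_paperclip"): 1.1,
      ("s0", "disable_off_switch"): 5.0}
q2 = {("s0", "noop"): 2.0, ("s0", "make_paperclip"): 2.0,
      ("s0", "disable_off_switch"): 0.0}

print(shaped_reward(1.0, [q1, q2], "s0", "make_paperclip"))    # small penalty
print(shaped_reward(2.0, [q1, q2], "s0", "disable_off_switch"))  # large penalty
```

The qualitative point is that the penalty term discourages actions that drastically change what the agent is able to do, without needing to enumerate which side effects are "bad".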

Orpheus Lummis ($10,000)

Upskilling in contemporary AI techniques, deep RL and AI safety, before pursuing a ML PhD

From the application :

Notable planned subprojects:

My thoughts and reasoning

We funded Orpheus in our last grant round to run an AI Safety Unconference just after NeurIPS. We’ve gotten positive testimonials from the event, and I am overall happy about that grant.

I do think that of the grants I recommended this round, this is probably the one I feel least confident about. I don’t know Orpheus very well, and while I have received generally positive reviews of their work, I haven’t yet had the time to look into any of those reviews in detail, and haven’t seen clear evidence about the quality of their judgment. However, what I have seen seems pretty good, and if I had even a tiny bit more time to spend on evaluating this round’s grants, I would probably have spent it reaching out to Orpheus and talking with them more in person.

In general, I think time for self-study and reflection can be exceptionally important for people starting to work in AI Alignment. This is particularly true for people on a more conventional academic path, which can easily funnel them into working immediately on contemporary AI capabilities research; I generally think such work has negative value even for people concerned about safety (though I do have some uncertainty here). I think giving people working on more classical ML research the time and resources to explore the broader implications of their work for safety, if they are already interested in that, is a good use of resources.

I am also excited about building out the Montreal AI Alignment community, and having someone who both has the time and skills to organize events and can understand the technical safety work seems likely to have good effects.

This grant is also the smallest grant we are funding this round, making me more comfortable with a bit less due diligence than the other grants, especially since this grant seems unlikely to have any large negative consequences.

Tegan McCaslin ($30,000)

Conducting independent research into AI forecasting and strategy questions

From the application:

1) I’d like to independently pursue research projects relevant to AI forecasting and strategy, including (but not necessarily limited to) some of the following:

I am actively pursuing opportunities to work with or under more senior AI strategy researchers [..], so my research focus within AI strategy is likely to be influenced by who exactly I end up working with. Otherwise I expect to spend some short period of time at the start generating more research ideas and conducting pilot tests on the order of several hours into their tractability, then choosing which to pursue based on an importance/tractability/neglectedness framework.


2) There are relatively few researchers dedicated full-time to investigating AI strategy questions that are not immediately policy-relevant. However, there nonetheless exists room to contribute to the research on existential risks from AI with approaches that fit into neither technical AI safety nor AI policy/governance buckets.

My thoughts and reasoning

Tegan has been a member of the X-risk network for several years now, and recently left AI Impacts. She is now looking for work as a researcher. Two considerations made me want to recommend that the LTF Fund make a grant to her.

  1. It’s easier to relocate someone who has already demonstrated trust and skills than to find someone completely new.
    1. This is (roughly) advice given by Y Combinator to startups, and I think it’s relevant to the X-risk community. It’s cheaper for Tegan to move around within the community and find the place where she can do her best work than it would be for an outsider who has not already worked within the X-risk network. A similarly skilled individual who is not already part of the network would need to spend a few years understanding the community and demonstrating that they can be trusted. So I think it is a good idea to help Tegan explore other parts of the community to work in.
  2. It’s important to give good researchers runway while they find the right place.
    1. For many years, the X-risk community has been funding-bottlenecked, keeping salaries low. A lot of progress has been made on this front and I hope that we’re able to fix this. Unfortunately, the current situation means that when a hire does not work out, the individual often doesn’t have much runway while reorienting, updating on what didn’t work out, and subsequently trialing at other organizations.
    2. This moves them much more quickly into an emergency mode, where everything must be optimized for short-term income, rather than long-term updating, skill building, and research. As such, I think it is important for Tegan to have a comfortable amount of runway while doing solo research and trialing at various organizations in the community.

While I haven’t spent the time to look into Tegan’s research in any depth, the small amount I did read looked promising. The methodology of this post is quite exciting, and her work there and on other pieces seems very thorough and detailed.

That said, my brief assessment of Tegan’s work was not the reason why I recommended this grant, and if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI.

Overall, I think Tegan is in a good place to find a valuable role in our collective X-risk reduction project, and I’d like her to have the runway to find that role.

Anthony Aguirre ($70,000)

A major expansion of the Metaculus prediction platform and its community

From the application:

The funds would be used to expand the Metaculus prediction platform along with its community. Metaculus.com is a fully-functional prediction platform with ~10,000 registered users and >120,000 predictions made to date on more than 1,000 questions. The goals of Metaculus are:

There are two major high-priority expansions possible with funding in place. The first would be an integrated set of extensions to improve user interaction and information-sharing. This would include private messaging and notifications, private groups, a prediction “following” system to create micro-teams within individual questions, and various incentives and systems for information-sharing.

The second expansion would link questions into a network. Users would express links between questions, from very simple (“notify me regarding question Y when P(X) changes substantially”) to more complex (“Y happens only if X happens, but not conversely”, etc.). Information can also be gleaned from what users actually do. The strength and character of these relations can then generate different graphical models that can be explored interactively, with the ultimate goal of a crowd-sourced quantitative graphical model that could structure event relations and propagate new information through the network.

My thoughts and reasoning

For this grant, and also the grants to Ozzie Gooen and Jacob Lagerros, I did not have enough time to write up my general thoughts on forecasting platforms and communities. I hope to later write a post with my thoughts here. But for a short summary, see my thoughts on Ozzie Gooen’s grant.

I am generally excited about people building platforms for coordinating intellectual labor, particularly on topics that are highly relevant to the long-term future. I think Metaculus has been providing a valuable service for the past few years, both in improving our collective ability to forecast a large variety of important world events and in allowing people to train and demonstrate their forecasting skills, which I expect to become more relevant in the future.

I am broadly impressed with how cooperative and responsive the Metaculus team has been in helping organizations in the X-risk space get answers to important questions, and in providing software services to them (e.g. I know that they are helping Jacob Lagerros and Ben Goldhaber set up a private Metaculus instance focused on AI).

I don’t know Anthony well, and overall I am quite concerned that there is no full-time person on this project. My model is that projects like this tend to go a lot better if they have one core champion who has the resources to fully dedicate themselves to the project, and it currently doesn’t seem that Anthony is able to do that.

My current model is that Metaculus will struggle as a platform without a fully dedicated team or at least individual champion, though I have not done a thorough investigation of the Metaculus team and project, so I am not very confident of this. One of the major motivations for this grant is to ensure that Metaculus has enough resources to hire a potential new champion for the project (who ideally also has programming skills or UI design skills to allow them to directly work on the platform). That said, Metaculus should use the money as best they see fit.

I am also concerned about the overlap of Metaculus with the Good Judgment Project, and currently have a sense that it suffers from being in competition with it, while also having access to substantially fewer resources and people.

The requested grant amount was for $150k, but I am currently not confident enough in this grant to recommend filling the whole amount. If Metaculus finds an individual new champion for the project, I can imagine strongly recommending that it gets fully funded, if the new champion seems competent.
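The question-linking expansion described in the application can be sketched as a single propagation step: when the probability of an upstream question changes, downstream questions update through their conditional links via the law of total probability. This is purely illustrative; the names and structure here are my assumptions, not Metaculus's actual design.

```python
# Toy propagation through a network of linked prediction questions.
# Each link from upstream question X to downstream question Y stores
# (P(Y | X), P(Y | not X)); updating P(X) re-derives P(Y) by
# total probability: P(Y) = P(Y|X) * P(X) + P(Y|not X) * (1 - P(X)).

def propagate(p_x, links):
    """Given an updated P(X) and a dict {question: (p_y_given_x,
    p_y_given_not_x)}, return the updated downstream probabilities."""
    return {
        y: p_given_x * p_x + p_given_not_x * (1.0 - p_x)
        for y, (p_given_x, p_given_not_x) in links.items()
    }

# "Y happens only if X happens" would be the special case (p, 0.0).
links = {"Y": (0.9, 0.2)}

print(propagate(0.5, links))  # baseline estimate of P(Y)
print(propagate(0.8, links))  # X became more likely, so P(Y) rises
```

A real system would need to handle cycles, conflicting links, and user disagreement about the conditionals, which is where the crowd-sourced graphical-model ambitions come in.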

Lauren Lee ($20,000)

Working to prevent burnout and boost productivity within the EA and X-risk communities

From the application:

(1) After 2 years as a CFAR instructor/researcher, I’m currently in a 6-12 month phase of reorienting around my goals and plans. I’m requesting a grant to spend the coming year thinking about rationality and testing new projects.

(2) I want to help individuals and orgs in the x-risk community orient towards and achieve their goals.

(A) I want to train the skill of dependability, in myself and others.

This is the skill of a) following through on commitments and b) making prosocial / difficult choices in the face of fear and aversion. The skill of doing the correct thing, despite going against incentive gradients, seems to be the key to virtue.

One strategy I’ve used is to surround myself with people with shared values (CFAR, Bay Area) and trust the resulting incentive gradients. I now believe it is also critical to be the kind of person who can take correct action despite prevailing incentive structures.

Dependability is also related to thinking clearly. Your ability to make the right decision depends on your ability to hold and be with all possible realities, especially painful and aversive ones. Most people have blindspots that actively prevent this.

I have some leads on how to train this skill, and I’d like both time and money to test them.

(B) Thinking clearly about AI risk

Most people’s decisions in the Bay Area AI risk community seem model-free. They themselves don’t have models of why they’re doing what they’re doing; they’re relying on other people “with models” to tell them what to do and why. I’ve personally carried around such premises. I want to help people explore where their ‘placeholder premises’ are and create safety for looking at their true motivations, and then help them become more internally and externally aligned.

(C) Burnout

Speaking of “not getting very far.” My personal opinion is that most ex-CFAR employees left because of burnout; I’ve written what I’ve learned here, see top 2 comments: https://forum.effectivealtruism.org/posts/NDszJWMsdLCB4MNoy/burnout-what-is-it-and-how-to-treat-it#87ue5WzwaFDbGpcA7. I’m interested in working with orgs and individuals to prevent burnout proactively.

(3) Some possible measurable outputs / artifacts:

My thoughts and reasoning

Lauren worked as an instructor at CFAR for about 2 years, until Fall 2018. I review CFAR’s impact as an institution below; in general, I believe it has helped set a strong epistemic foundation for the community and been successful in recruitment and training. I have a great appreciation for everyone who helps them with their work.

Lauren is currently in a period of reflection and reorientation around her life and the problem of AGI, in part due to experiencing burnout in the months before she left CFAR. To my knowledge, CFAR has never been well-funded enough to offer high salaries to its employees, and I think it is valuable to ensure that people who work at EA orgs and burn out have the support to take time for self-care after quitting due to long-term stress. Ideally, this should be addressed by higher salaries that allow employees to build significant runway to deal with shocks like this, but the current equilibrium of salary levels in EA does not make that easy. Overall, I think it’s likely that staff at highly valuable EA orgs will continue burning out, and I don’t currently see preventing this entirely as an achievable target (though I am in favor of people working on solving the problem).

I do not know Lauren well enough to evaluate the quality of her work on the art of human rationality, but multiple people I trust have given positive reviews (e.g. see Alex Zhu above), so I am also interested to read her output on the subjects she is thinking about.

I think it’s very important that people who work on developing an understanding of human rationality take the time to add their knowledge into our collective understanding, so that others can benefit from and build on top of it. Lauren has begun to write up her thoughts on topics like burnout, intentions, dependability, circling, and curiosity, and her having the space to continue to write up her ideas seemed like a significant additional positive outcome of this grant.

I think she should probably aim to make whatever she does valuable enough that individuals and organizations in the community wish to pay her directly for her work. It’s unlikely that I would recommend renewing this grant for another 6-month period in the absence of a relatively exciting new research project or direction; if Lauren were to reapply, I would want a much stronger sense that the projects she was working on were producing lots of value before deciding to recommend funding her again.

In sum, this grant hopefully helps Lauren to recover from burning out, get the new rationality projects she is working on off the ground, potentially identify a good new niche for her to work in (alone or at an existing organization), and write up her ideas for the community.

Ozzie Gooen ($70,000)

Build infrastructure for the future of effective forecasting efforts

From the application:

What I will do

I applied a few months ago and was granted $20,000 (thanks!). My purpose for this money is similar to the previous round’s, but greater in scope. The previous funding has given me the security to be more ambitious, but I’ve realized that additional guarantees of funding would help significantly more. In particular, engineers can be costly, and it would be useful to secure additional funding in order to give possible hires security.

My main overall goal is to advance the use of predictive reasoning systems for purposes most useful for Effective Altruism. I think this is an area that could eventually make use of a good deal of talent, so I have come to see my work at this point as foundational.

This work is in a few different areas that I think could be valuable. I expect that after a while a few parts will emerge as the most important, but think it is good to experiment early when the most effective route is not yet clear.

I plan to use additional funds to scale my general research and development efforts. I expect that most of the money will be used on programming efforts.


Foretold is a forecasting application that handles full probability distributions. I have begun testing it with users and have been asked for quite a bit more functionality. I’ve also mapped out the features that I expect people will eventually want, and think there is a significant amount of work that would be quite useful.

One particular challenge is figuring out the best way to handle large numbers of questions (1,000+ active questions at a time). I believe this requires significant innovations in the user interface and backend architecture. I’ve made some wireframes and have experimented with different methods, and believe I have a pragmatic path forward, but will need to continue to iterate.

I’ve talked with members of multiple organizations at this point who would like to use Foretold once it has a specific set of features, and cannot currently use any existing system for their purposes. […]


Ken is a project to help organizations set up and work with structured data, in essence allowing them to have private versions of Wikidata. Part of the project is Ken.js, a library which I’m beginning to integrate with Foretold.

Expected Impact

The main aim of EA forecasting would be to better prioritize EA actions. I think that if we could have a powerful system set up, it could make us better at predicting the future, better at understanding which things are important, and better at coming to a consensus on challenging topics.


In the short term, I’m using heuristics like metrics regarding user activity and upvotes on LessWrong. I’m also getting feedback from many people in the EA research community. In the medium to long term, I hope to set up evaluation/estimation procedures for many projects and would include this one in that process.

My thoughts and reasoning

This grant is to support Ozzie Gooen in his efforts to build infrastructure for effective forecasting. Ozzie requested $70,000 to hire a software engineer to support his work on the prediction platform www.foretold.io.

Johannes Heidecke ($25,000)

Supporting aspiring researchers of AI alignment to boost themselves into productivity

From the application:

(1) We would like to apply for a grant to fund an upcoming camp in Madrid that we are organizing. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. Participants will work in groups on tightly-defined research projects in strategy and technical AI safety. Expert advisors from AI Safety/Strategy organizations will help refine proposals to be tractable and relevant. This allows for time-efficient use of advisors’ knowledge and research experience, and ensures that research is well-aligned with current priorities. More information: https://aisafetycamp.com/

(2) The field of AI alignment is talent-constrained, and while there is a significant number of young aspiring researchers who consider focussing their career on research on this topic, it is often very difficult for them to take the first steps and become productive with concrete and relevant projects. This is partially due to established researchers being time-constrained and not having time to supervise a large number of students. The goals of AISC are to help a relatively large number of high-talent people to take their first concrete steps in research on AI safety, connect them to collaborate, and efficiently use the capacities of experienced researchers to guide them on their path.

(3) We send out evaluation questionnaires directly after the camp and at regular intervals after it has passed. We measure impact on career decisions and collaborations, and keep track of concrete output produced by the teams, such as blog posts or published articles.

We have successfully organized two camps before and are in the preparation phase for the third camp taking place in April 2019 near Madrid. I was the main organizer for the second camp and am advising the core team of the current camp, as well as organizing funding.

An overview of previous research projects from the first 2 camps can be found here:



We have evaluated the feedback from participants of the first two camps in the following two documents:



My thoughts and reasoning

I’ve talked with various participants of past AI Safety camps and heard good things across the board. I also generally have a positive impression of the people involved, though I don’t know any of the organizers very well.

The material and testimonials that I’ve seen so far suggest that the camp successfully points participants towards a technical approach to AI Alignment, focusing on rigorous reasoning and clear explanations, which seems good to me.

I am not really sure whether I’ve observed significant positive outcomes of camps in past years, though this might just be because I am less connected to the European community these days.

I also have a sense that there is a lack of opportunities for people in Europe to productively work on AI Alignment related problems, so I am particularly interested in investing in infrastructure and events there. However, this also makes the grant higher-risk: the event and the people surrounding it might become the main hub for AI Alignment in Europe, and if their quality isn’t high enough, that could cause long-term problems for the AI Alignment community there.


I also coordinated with Nicole Ross from CEA’s EA Grants project, who had considered also making a grant to the camp. We decided it would be better for the LTF Fund team to make this grant, though we wanted to make sure that some of the concerns Nicole had with this grant were summarized in our announcement:

This seems to roughly mirror my concerns above.

I would want to engage with the organizers a fair bit more before recommending a renewal of this grant, but I am happy about the project as a space for Europeans to engage with alignment ideas and work on them for a week alongside other technically minded, engaged people.

Broadly, the effects of the camp seem very likely to be positive, while the (financial) cost of the camp seems small compared to the expected size of the impact. This makes me relatively confident that this grant is a good bet.

Vyacheslav Matyuhin ($50,000)

An offline community hub for rationalists and EAs

From the application:

Our team is working on an offline community hub for rationalists and EAs in Moscow called Kocherga (details on Kocherga are here).

We want to make sure it keeps existing and grows into the working model for building new flourishing local EA communities around the globe.

Our key assumptions are:

  1. There’s a gap between the “monthly meetup” EA communities and the larger (and significantly more productive/important) communities. That gap is hard to close for many reasons.
  2. Solving this issue systematically would add a lot of value to the global EA movement and, as a consequence, the long-term future of humanity.
  3. Closing the gap requires a lot of infrastructure, both organizational and technological.

So we work on building such an infrastructure. We also keep in mind the alignment and goodharting issues (building a big community of people who call themselves EAs but who don’t actually share EA virtues would be bad, obviously).


Concretely, we want to:

  1. Add 2 more people to our team.
  2. Implement our new community building strategy (which includes both organizational tasks, such as new events and processes for seeding new working groups, and technological tasks, such as implementing a website which allows people from the community to announce new private meetups or team up for coaching or mastermind groups).
  3. Improve our rationality workshops (in terms of scale and content quality). Workshops are important for attracting new community members, for keeping the high epistemic standards of the community and for making sure that community members can be as productive as possible.

To be able to do this, we need to cover our current expenses somehow until we become profitable on our own.

My thoughts and reasoning

The Russian rationality community is surprisingly big, which suggests both a certain level of competence from some of its core organizers and potential opportunities for more community building. The community has:

This grant is to the team that runs the Kocherga anti-cafe.

Their LessWrong write-up suggests:

I find myself having slightly conflicted feelings about the Russian rationality community trying to identify and integrate more with the EA community. I think a major predictor of how excited I have historically been about community building efforts has been a group’s emphasis on improving members’ judgement and thinking skills, as well as the degree to which it emphasizes high epistemic standards and careful thinking. I am quite excited about how Kocherga seems to have focused on those issues so far, and I am worried that this integration and change of identity will reduce that focus (as I think it has for some local and student groups that made a similar transition). That said, I think the Kocherga group has shown quite good judgement on this dimension (see here), which addresses many of my concerns, though I am still interested in thinking and talking about these issues further.

I’m somewhat concerned that I’m not aware of any major insights or unusually talented people from this community, but I expect the language barrier to be a big part of what is preventing me from hearing about those things. And I am somewhat confused about how to account for interesting ideas that don’t spread to the projects I care most about.

I think there are benefits to having an active Russian community that can take opportunities that are only available for people in Russia, or at least people who speak Russian. This particularly applies to policy-oriented work on AI alignment and other global catastrophic risks, which is also a domain that I feel confused about and have a hard time evaluating.

For a lot of the work that I do feel comfortable evaluating, I expect the vast majority of intellectual progress to be made in the English-speaking world, and as such, the question of how talent can flow from Russia to the existing communities working on the long-term future seems quite important. I hope this grant can facilitate a stronger connection between the rest of the world and the Russian community, to improve that talent and idea flow.

This grant seemed like a slightly better fit for the EA Meta fund. They decided not to fund it, so we made it instead, since it still seemed like a strong proposal to us.

What I have seen so far makes me confident that this grant is a good idea. However, before we make more grants like this, I would want to talk more to the organizers involved and generally get more information on the structure and culture of the Russian EA and rationality communities.

Jacob Lagerros ($27,000)

Building infrastructure to give x-risk researchers superforecasting ability with minimal overhead

From the application:

Build a private platform where AI safety and policy researchers have direct access to a base of superforecaster-equivalents, and where aspiring EAs with smaller opportunity costs but excellent calibration perform useful work.


I previously received two grants to work on this project: a half-time salary from EA Grants, and a grant for direct project expenses from BERI. Since then, I dropped out of a Master’s programme to work full-time on this, seeing that this was the only way I could really succeed at building something great. However, during that transition there were some logistical issues with other grantmakers (explained in more detail in the application), hence I applied to the LTF for funding for food, board, travel, and the runway to make more risk-neutral decisions and capture unexpected opportunities in the coming ~12 months of working on this.

My thoughts and reasoning

There were three main factors behind my recommending this grant:

  1. My object-level reasons for recommending this grant are quite similar to my reasons for recommending Ozzie Gooen’s and Anthony Aguirre’s.
  2. Jacob has been around the community for about 3 years. The output of his that I’ve seen has included (amongst other things) competently co-directing EAGxOxford 2016, and some thoughtful essays on LessWrong (e.g. 1, 2, 3, 4).
  3. Jacob’s work seems useful to me, and is being funded on the recommendation of the FHI Research Scholars Programme and the Berkeley Existential Risk Initiative. He is also collaborating with others I’m excited about (Metaculus and Ozzie Gooen).

However, I did not assess the grant in detail, as the only reason Jacob asked for a grant was due to logistical complications with other grantmakers. Since FHI and BERI have already investigated the project in more detail, I was happy to suggest we pick up the slack to ensure Jacob has the runway to pursue his work.

Connor Flexman ($20,000)

Perform independent research in collaboration with John Salvatier

I am recommending this grant with more hesitation than most of the other grants in this round. The reasons for hesitation are as follows:

However, despite these reservations, I think this grant is a good choice. The two primary reasons are:

  1. Connor himself has worked on a variety of research and community building projects, and both by my own assessment and that of other people I talked to, he has significant potential to become a strong generalist researcher, which I think is an axis on which a lot of important projects are bottlenecked.
  2. This grant was strongly recommended to me by John Salvatier, who is funded by an EA Grant and whose work I am generally excited about.

John did some very valuable community organizing while he lived in Seattle and is now working on developing techniques to facilitate skill transfer between experts in different domains. I think it is exceptionally hard to develop effective techniques for skill transfer, and more broadly techniques to improve people’s rationality and reasoning skills, but am sufficiently impressed with John’s thinking that I think he might be able to do it anyway (though I still have some reservations).

John is currently collaborating with Connor and requested funding to hire him to collaborate on his projects. After talking to Connor I decided it would be better to recommend a grant to Connor directly, encouraging him to continue working with John but also allowing him to switch towards other research projects if he finds he can’t contribute as productively to John’s research as he expects.

Overall, while I feel some hesitation about this grant, I think it’s very unlikely to have any significant negative consequences, and I assign significant probability to this grant helping Connor develop into an excellent generalist researcher, of a type on which I think EA is currently quite bottlenecked.

Eli Tyre ($30,000)

Broad project support for rationality and community building interventions

Eli has worked on a large variety of interesting and valuable projects over the last few years, many of them too small to have much payment infrastructure, resulting in him doing a lot of work without appropriate compensation. I think his work has been a prime example of picking low-hanging fruit by using local information and solving problems that aren’t worth solving at scale, and I want him to have resources to continue working in this space.

Concrete examples of projects he has worked on that I am excited about:

I think Eli has exceptional judgment, and the goal of this grant is to allow him to take actions with greater leverage by hiring contractors, paying other community members for services, and paying for other varied expenses associated with his projects.

Robert Miles ($39,000)

Producing video content on AI alignment

From the application:

My goals are:

  1. To communicate to intelligent and technically-minded young people that AI Safety:
    1. is full of hard, open, technical problems which are fascinating to think about
    2. is a real existing field of research, not scifi speculation
    3. is a growing field, which is hiring
  2. To help others in the field communicate and advocate better, by providing high quality, approachable explanations of AIS concepts that people can share, instead of explaining the ideas themselves, or sharing technical documents that people won’t read
  3. To motivate myself to read and internalise the papers and textbooks, and become a technical AIS researcher in future

My thoughts and reasoning

I think video is a valuable medium for explaining a variety of different concepts (for the best examples of this, see 3Blue1Brown, CGP Grey, and Khan Academy). While there are a lot of people working directly on improving the long-term future by writing explanatory content, Rob is the only person I know who has invested significantly in getting better at producing video content. I think this opens a unique set of opportunities for him.

The videos on his YouTube channel pick up an average of ~20k views. His videos on the official Computerphile channel often pick up more than 100k views, including for topics like logical uncertainty and corrigibility (incidentally, a term Rob came up with).

More things that make me optimistic about Rob’s broad approach:

Rob is the first skilled person in the X-risk community to work full-time on producing video content. As the most practiced video creator we have, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas).

Rob made a grant request during the last round, in which he explicitly requested funding for a collaboration with RAISE to produce videos for them. I currently don’t think that working with RAISE is the best use of Rob’s talent, and I’m skeptical of the product RAISE is currently trying to develop. I think it’s a better idea for Rob to focus his efforts on producing his own videos and supporting other organizations with his skills, though this grant doesn’t restrict him to working with any particular organization and I want him to feel free to continue working on RAISE if that is the project he thinks is currently most valuable.

Overall, Rob is developing a new and valuable skill within the X-risk community, and executing on it in a very competent and thoughtful way, making me pretty confident that this grant is a good idea.

MIRI ($50,000)

My thoughts and reasoning

In sum, I think MIRI is one of the most competent and skilled teams attempting to improve the long-term future, I have a lot of trust in their decision-making, and I’m strongly in favor of ensuring that they’re able to continue their work.

Thoughts on funding gaps

Despite all of this, I have not actually recommended a large grant to MIRI.

However, this is all complicated by a variety of countervailing considerations, such as the following three:

  1. Power law distributions of impact only really matter in this way if we can identify which interventions we expect to be in the right tail of impact, and I have a lot of trouble properly bounding my uncertainty here.
  2. If we are faced with significant uncertainty about cause areas, and we need organizations to have worked in an area for a long time before we can come to accurate estimates about its impact, then it’s a good idea to invest in a broad range of organizations in an attempt to get more information. This is related to common arguments around “explore/exploit tradeoffs”.
  3. Sometimes, making large amounts of funding available to one organization can have negative consequences for the broader ecosystem of a cause area. Also, giving an organization access to more funding than it can use productively may cause it to make too many hires or lose focus by trying to scale too quickly. Having more funding often also attracts adversarial actors and increases competitive stakes within an organization, making it a more likely target for attackers.

I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round.

CFAR ($150,000)

[Edit: It seems relevant to mention that LessWrong is currently receiving operational support from CFAR, in a way that makes me technically an employee of CFAR (similar to how ACE and 80K were/are part of CEA for a long time). However, LessWrong operates as a completely separate entity with its own fundraising and hiring procedures, and I don't feel any hesitation or pressure to critique CFAR openly because of that relation. I do find myself a tiny bit more hesitant to speak harshly of specific individuals, simply because I work only a floor away from the CFAR offices and that has some psychological effect on me. But the same was true for CEA while LessWrong was located in the CEA office for a few months, and for residents of my group house while LessWrong was located in its living room for most of the past two years, so I don't think this effect is particularly large.]

I think that CFAR’s intro workshops have historically had a lot of positive impact. I think they have done so via three pathways.

  1. Establishing epistemic norms: I think CFAR workshops are quite good at helping the EA and rationality community establish norms about what good discourse and good reasoning look like. As a concrete example of this, the concept of Double Crux has gotten traction in the EA and rationality communities, which has improved the way ideas and information spread throughout the community, how ideas get evaluated, and what kinds of projects get resources. More broadly, I think CFAR workshops have helped in establishing a set of common norms about what good reasoning and understanding look like, similar to the effect of the sequences on LessWrong.
    1. I think that it’s possible that the majority of the value of the EA and rationality communities comes from having that set of shared epistemic norms that allows them to reason collaboratively in a way that most other communities cannot (in the same way that what makes science work is a set of shared norms around what constitutes valid evidence and how new knowledge gets created).
    2. As an example of the importance of this: I think a lot of the initial arguments for why AI risk is a real concern were “weird” in a way that was not easily compatible with a naive empiricist worldview that I think is pretty common in the broader intellectual world.
      1. In particular, the arguments for AI risk are hard to test with experiments or empirical studies, but hold up from the perspective of logical and philosophical reasoning and are generated by a variety of good models of broader technological progress, game theory, and related areas of study. But for those arguments to find traction, they required a group of people with the relevant skills and habits of thought for generating, evaluating, and having extended intellectual discourse about these kinds of arguments.
  2. Training: A percentage of intro workshop participants (many of whom were already working on important problems within X-risk) have seen significant improvements in competence; as a result, they became substantially more effective in their work.
  3. Recruitment: CFAR has helped many people move from passive membership in the EA and rationality community to having strong social bonds in the X-risk network.

While I do think that CFAR has historically caused a significant amount of impact, I feel hesitant about this grant because I am unsure whether CFAR can continue to create the same amount of impact in the future. I have a few reasons for this:

However, there are some additional considerations that led me to recommending this grant.

In the last year, I had some concerns about the way CFAR communicated a lot of its insights, and I sensed an insufficient emphasis on a kind of robust and transparent reasoning that I don’t have a great name for. I don’t think the communication style I was advocating for is always the best way to make new discoveries, but it is very important for establishing broader community-wide epistemic norms, and it enables a kind of long-term intellectual progress that I think is necessary for solving the intellectual challenges we’ll need to overcome to avoid global catastrophic risks. My impression is that CFAR is likely to respond to last year’s events by improving their communication and reasoning style in this respect.

My overall read is that CFAR is performing a variety of valuable community functions and has a strong enough track record that I want to make sure that it can continue existing as an institution. I didn’t have enough time this grant round to understand how the future of CFAR will play out; the current grant amount seems sufficient to ensure that CFAR does not have to take any drastic action until our next grant round. By the next grant round, I plan to have spent more time learning and thinking about CFAR’s trajectory and future, and to have a more confident opinion about what the correct funding level for CFAR is.


Comments sorted by top scores.

comment by Ben_Kuhn · 2019-04-10T09:45:18.611Z · score: 111 (50 votes) · EA · GW

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.

If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:

  • Apply even more principle of charity than usual
  • Take time to phrase your question in the way that's easiest to answer
  • Apply some filter and don't ask unimportant questions
  • Use a tone that minimizes stress for the person you're questioning
comment by Michelle_Hutchinson · 2019-04-10T14:02:54.368Z · score: 42 (20 votes) · EA · GW

I strongly agree with this. EA funds seemed to have a tough time finding grant makers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grant makers faced. The current team seems to have impressively addressed the worries people had in terms of donating to smaller and more speculative projects, and providing detailed write-ups on them. I imagine that in depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and mean those serving on them are likely to step down sooner. That's not to say we shouldn't be discussing the grants - presumably it's useful for the committee to hear other people's views on the grants to get more information about them. But following Ben's suggestions seems crucial to EA funds continuing to be a useful way of donating into the future. In addition, to try to engage more in collaborative truthseeking rather than adversarial debate, we might try to:

  • Focus on constructive information / suggestions for future grants rather than going into depth on what's wrong with grants already given.
  • Spend at least as much time describing which grants you think are good and how, so that they can be built on, as on things you disagree with.
comment by Milan_Griffes · 2019-04-10T15:14:55.485Z · score: 24 (10 votes) · EA · GW


I think it's great that the Fund is trending towards more transparency & a broader set of grantees (cf. November 2018 grant report [EA · GW], cf. July 2018 concerns about the Fund [EA · GW]).

And I really appreciate the level of care & attention that Oli is putting towards this thread. I've found the discussion really helpful.

comment by Milan_Griffes · 2019-04-10T15:54:06.714Z · score: 17 (9 votes) · EA · GW

Relatedly, is Oli getting compensated for the work he's putting into the Long Term Future Fund?

Seems good to move towards a regime wherein:

  • The norm is to write up detailed, public grant reports
  • Community members ask a bunch of questions about the grant decisions
  • The norm is that a representative of the grant-making staff fields all of these questions, and is compensated for doing so
comment by Habryka · 2019-04-10T19:04:46.084Z · score: 37 (10 votes) · EA · GW

I don't get compensated, though I also don't think compensation would make much of a difference for me or anyone else on the fund (except maybe Alex).

Everyone on the fund is basically dedicating all of their resources to EA work, and is generally giving up most of their salary potential by working in EA. I don't think it would make much sense for us to get more money, given that we are already de facto donating everything above a certain threshold (either literally, in the case of the two Matts, or indirectly, by taking a pay cut to work in EA).

I think if people give more money to the fund because they come to trust the decisions of the fund more, then that seems like it would incentivize more things like this. Also if people bring up strong arguments against any of the reasoning I explained above, then that is a great win, since I care a lot about our fund distributions getting better.

comment by Milan_Griffes · 2019-04-10T19:21:13.785Z · score: 8 (6 votes) · EA · GW

Got it.

The reason compensation seems good is that it formalizes the duty of engaging with the community's discourse, which probably pushes us further towards the above regime.

Right now, the community is basically banking on you & other fund managers caring a lot about engaging with the community. This is great, and it's great that you do.

Layering on compensation seems like a way of bolstering this engagement. If someone is compensated to do this engagement, then there's increased incentive for them to do it. (Though there's probably some weirdness around Goodhart-ing here.)

cf. Role of ombudsperson in public governance

comment by Khorton · 2019-04-10T19:40:30.886Z · score: 7 (4 votes) · EA · GW

Compensation is also good in case you ever retire and someone else with different financial needs takes over (but it doesn't seem super important - there are other things you could solve first).

comment by Raemon · 2019-04-10T20:23:05.852Z · score: 4 (2 votes) · EA · GW

I think that makes sense, but in practice it's something better handled through their day jobs. (If they went the route of hiring someone for whom managing the fund was their actual day job, I'd agree that generally higher salaries would be good, for mostly the same reason they'd be good across the board in EA.)

comment by Stefan_Schubert · 2019-04-10T10:29:17.488Z · score: 17 (10 votes) · EA · GW

Agree with this, especially the comments about rudeness. This also means that I disagree with Oli's comment elsewhere in this thread:

that people should feel free to express any system-1 level reactions they have to these grants.

In line with what Ben says, I think people should apply a filter to their system-1 level reactions, and not express them whatever they are.

comment by Habryka · 2019-04-10T22:09:01.648Z · score: 16 (8 votes) · EA · GW

I think that people should feel comfortable sharing their system-1 reactions, in a way that does not immediately imply judgement.

I am thinking of stuff like the non-violent communication patterns, where you structure your observation in the following steps:

1. List a set of objective observations

2. Report your experience upon making those observations

3. Share your personal interpretations of those experiences and what they imply about your model of the world

4. Make any requests that follow from those models

I think it's fine to stop part-way through this process, but that it's generally a good idea to not skip any steps. So I think it's fine to just list observations, and it's fine to just list observations and then report how you feel about those things, as long as you clearly indicate that this is your experience and doesn't necessarily involve judgement. But it's a bad idea to immediately skip to the request/judgement step.

comment by Stefan_Schubert · 2019-04-10T23:05:20.308Z · score: 4 (5 votes) · EA · GW

OK, that is clarifying. Maybe your original comment could have been clearer, since this framing is quite different.

The issue that you raise in this comment is a big debate, and this is maybe not the place to discuss it in detail. In any case, as stated my view is that people should think carefully before they comment, and not run with their immediate feelings on sensitive topics.

comment by Davis_Kingsley · 2019-04-08T21:13:55.984Z · score: 74 (40 votes) · EA · GW

I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.

Will there be an update to this post with respect to which projects are actually funded following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.

comment by Habryka · 2019-04-08T21:21:06.416Z · score: 13 (10 votes) · EA · GW

I will make sure to update this post with any new information about whether CEA can actually make these grants. My current guess is that maybe 1-2 grants will not be logistically feasible, but the vast majority should have no problem.

comment by Elityre · 2019-04-11T01:08:29.228Z · score: 8 (7 votes) · EA · GW
I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka

Hear, hear.

I feel proud of the commitment to epistemic integrity that I see here.

comment by Peter_Hurford · 2019-04-08T21:16:28.327Z · score: 67 (29 votes) · EA · GW

Thanks Habryka for raising the bar on the amount of detail given in grant explanations.

comment by Khorton · 2019-04-08T22:33:39.405Z · score: 44 (31 votes) · EA · GW

I'm happy that people are pushing back on some of these grants, and even happier that Habryka is responding so graciously. However, I'm concerned that some comments are bordering on unhelpfully personal.

I'd suggest that, when criticising a particular project, commenters should try to explain the rule or policy that would help grantmakers avoid the same problem in the future. That should also help us avoid making things personal.

Examples I stole from other comments and reworded:

-"I'm skeptical of the grant to X because I think grantmakers should recuse themselves from granting to their friends." (I saw this criticism but don't actually know who it's referring to.)

-"I don't think any EA Funds should be given to printing books that haven't been professionally edited."

-"I think that people like Lauren should have funds available after they burn out, but I don't think the Long-Term Future Fund is the right source of post-burnout funds."

comment by Habryka · 2019-04-09T00:01:37.791Z · score: 23 (14 votes) · EA · GW

I agree with this, but also think that people should feel free to express any system-1 level reactions they have to these grants. In my experience it can often be quite hard to formalize a critique into a concrete, operationalized set of policy changes, even if the critique itself is good and valid, and I don't think I want to force all commenters to fully formalize their beliefs before they can express them here.

I do think the end goal of the conversation should be a set of policies that the LTF-Fund can implement.

comment by Raemon · 2019-04-09T19:33:12.685Z · score: 28 (16 votes) · EA · GW

I have a weird mix of feelings and guesses here.

I think it's good on the margin for people to be able to express opinions without needing to formalize them into recommendations for the reason stated here. I think the overall conversation happening here is very important.

I do still feel pretty sad looking at the comments here — some of the commenters seem to not have a model of what they're incentivizing.

They remind me of the stereotype of a parent whose kid has moved away and grown up and doesn't call very often. And periodically the kid does call, but the first thing they hear is the parent complaining "why don't you ever call me?", which makes the kid less likely to call home.

EA is vetting [EA · GW] constrained [EA · GW].

EA is network [EA · GW] constrained [EA · GW].

These are actual hard problems, which we're slowly addressing by building network infrastructure. The current system is not optimal or fair, but progress won't go faster by complaining about it.

It can potentially go faster via improvements in strategy and re-allocating resources, but each of those improvements comes with a tradeoff. You could hire more grantmakers full-time, but those grantmakers are generally working full-time on something else comparably important.

This writeup is unusually thorough, and Habryka has been unusually willing to engage with comments and complaints. But I think Habryka has higher-than-average willingness to deal with that.

When I imagine future people considering

a) whether to be a grantmaker,

b) whether to write up their reasons publicly

c) whether to engage with comments on those reasons

I predict that some of the comments on this thread will make all of those less likely (in escalating order). It also potentially makes grantees less likely to consent to public discussion of their evaluation, since it might get ridiculed in the comments.

Because EA is vetting constrained, I think public discussion of grant-reasoning is particularly important. It's one of the mechanisms that'll give people a sense of what projects will get funded and what goes into a grantmaking process, and get a lot of what's currently 'insider knowledge' more publicly accessible.

comment by toonalfrink · 2019-04-10T13:38:06.437Z · score: 7 (6 votes) · EA · GW

As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.

Negative feedback isn't inherently painful. This is only true if there is an alief that failure is not acceptable. Of course the truth is that failure is necessary for progress, and if you truly understand this, negative feedback feels good. Even if it's in bad faith.

Given that grantmakers are essentially at the steering wheel of EA, we can't afford for those people not to internalize this. They need to know all the criticism to make good decisions, so they should cherish it.

Of course, we can help them reach this state of mind by celebrating their willingness to open up to scrutiny, along with the scrutiny itself.

comment by Khorton · 2019-04-10T19:11:16.029Z · score: 16 (10 votes) · EA · GW

I think on a post with 100+ comments the quality of decisions is more likely to be bottlenecked by the quality of feedback than the quantity. Being able to explain why you think something is a bad idea usually results in higher quality feedback, which I think will result in better decisions than just getting a lot of quick intuition-based feedback.

comment by RyanCarey · 2019-04-08T13:45:27.053Z · score: 42 (21 votes) · EA · GW

This is a strong set of grants, much stronger than the EA community would've been able to assemble a couple of years ago, which is great to see.

When will you be accepting further applications and making more grants?

comment by Habryka · 2019-04-08T18:41:54.714Z · score: 14 (6 votes) · EA · GW

I don't know yet. My guess is in around two months.

comment by Risto_Uuk · 2019-04-08T08:56:52.388Z · score: 38 (20 votes) · EA · GW

You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications?

comment by Habryka · 2019-04-08T23:46:43.320Z · score: 34 (15 votes) · EA · GW

Hmm, I don't think I am super sure what a good answer to this would look like. Here are some common reasons for why I think a grant was not a good idea to recommend:

  • The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)
  • The mainline outcome of the grant was good, but there were potential negative consequences that the applicant did not consider or properly account for, and I did not feel like I could cause the applicant to understand the downside risk they have to account for without investing significant effort and time
  • The grant was only tenuously EA-related and seemed to have been submitted to a lot of funders relatively indiscriminately
  • I was unable to understand the goals, implementation or other details of the grant
  • I simply expected the proposed plan to not work, for a large variety of reasons. Here are some of the most frequent:
    • The grant was trying to achieve something highly ambitious while seeming to allocate very few resources to achieving that outcome
    • The grantee had a track record of work that I did not consider to be of sufficient quality to achieve what they set out to do
  • In some cases the applicant asked for less than our minimum grant amount of $10,000
comment by Peter_Hurford · 2019-04-09T05:34:49.121Z · score: 53 (27 votes) · EA · GW

Thanks for the transparent answers.

The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I had not available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

In some cases the applicant asked for less than our minimum grant amount of $10,000

This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.

comment by Habryka · 2019-04-09T17:39:11.993Z · score: 45 (11 votes) · EA · GW
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant.

I personally have never interacted directly with the grantees of about 6 of the 14 grants that I have written up, so it is not really about knowing the grantmakers in person. What does matter a lot are the second-degree connections I have to those people (and that someone on the team had for the large majority of applications), as well as whether the grantees had participated in some of the public discussions we've had over the past years and demonstrated good judgement (e.g. EA Forum & LessWrong discussions).

I don't think you should model the situation as relying on knowing a grantmaker in-person, but you should think that testimonials and referrals from people that the grantmakers trust matter a good amount. That trust can be built via a variety of indirect ways, some of which are about knowing them in person and having a trust relationship that has been built via personal contact, but a lot of the time that trust comes from the connecting person having made a variety of publicly visible good judgements.

As an example, one applicant came with a referral from Tyler Cowen. I have only interacted directly with Tyler once in an email chain around EA Global 2015, but he has written up a lot of valuable thoughts online and seems to have generally demonstrated broadly good judgement (including in the granting domain with his Emergent Ventures project). This made his endorsement factor positively into my assessment for that application. (Though because I don't know Tyler that well, I wasn't sure how easily he would give out referrals like this, which reduced the weight that referral had in my mind.)

The word "interact" above is meant in a very broad way, which includes second-degree social connections as well as online interactions and observing the grantee demonstrate good judgement in some public setting. In the absence of any of that, it's often very hard to get a good sense of the competence of an applicant.

comment by Habryka · 2019-04-09T17:19:34.114Z · score: 20 (9 votes) · EA · GW
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.

A rough Fermi estimate I made a few days ago suggests that each grant we make comes with about $2,000 of overhead from CEA in terms of labor cost, plus some other risks (this is my own number, not CEA's estimate). So given that overhead, it makes some amount of sense that it's hard to get $1k grants.

comment by Ben_Kuhn · 2019-04-09T23:28:15.675Z · score: 17 (9 votes) · EA · GW

Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?

comment by Habryka · 2019-04-10T00:14:27.588Z · score: 11 (6 votes) · EA · GW

Here is my rough Fermi estimate:

My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.

Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.

To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.

comment by Ben_Kuhn · 2019-04-10T13:27:42.442Z · score: 13 (7 votes) · EA · GW

If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.

This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't know about, but I'm curious if you (or someone from CEA) knows what they are?

[Not trying to imply that CEA is failing to optimize here or anything—I'm mostly curious plus have a professional interest in money transfer logistics—so feel free to ignore]
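[Editor's note: Ben's back-of-the-envelope check can be reproduced directly from the $252k/year and ~$1k-$2k marginal figures in the parent comment; illustrative arithmetic only.]

```python
# Reproducing the implied-hours check from the parent comment's estimates.
yearly_overhead = 252_000   # Habryka's Fermi estimate, per year
staff_equivalents = 1.5
hours_per_person_year = 2_000

hourly_rate = yearly_overhead / (staff_equivalents * hours_per_person_year)
print(hourly_rate)  # 84.0, i.e. about $85/hour

# Hours implied by $1k-$2k of marginal overhead per grant:
print(round(1_000 / hourly_rate), round(2_000 / hourly_rate))  # 12 24
```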

comment by Jonas Vollmer · 2019-04-10T14:00:50.043Z · score: 16 (10 votes) · EA · GW

I actually think the $10k grant threshold doesn't make a lot of sense even if we assume the details of this "opportunity cost" perspective are correct. Grants should fulfill the following criterion:

"Benefit of making the grant" ≥ "Financial cost of grant" + "CEA's opportunity cost from distributing a grant"

If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of "market efficiency" of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)

IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid "$10k" rule. Since EA Fund committees generally consist of smart people, I think they'd be able to understand and implement this well.
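[Editor's note: the criterion above can be written as a simple decision rule. This is a sketch; the $2k overhead and the benefit figures are the illustrative numbers from this comment.]

```python
def worth_granting(benefit: float, grant_amount: float, overhead: float = 2_000) -> bool:
    """Fund iff the benefit covers both the money granted and the
    fixed opportunity cost of processing the grant."""
    return benefit >= grant_amount + overhead

# The example above: a $5k grant whose benefit is worth $50k to the
# community clears the bar easily, despite being under the $10k minimum.
print(worth_granting(benefit=50_000, grant_amount=5_000))  # True
print(worth_granting(benefit=6_000, grant_amount=5_000))   # False
```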

comment by Michelle_Hutchinson · 2019-04-10T15:56:20.665Z · score: 11 (5 votes) · EA · GW

This sounds pretty sensible to me. On the other hand, if people are worried about it being harder for people who are less plugged into networks to get funding, you might not want an additional dimension on which these harder-to-evaluate grants could lose out compared to easier-to-evaluate ones (where the latter end up having a lower minimum threshold).

It also might create quite a bit of extra overhead for granters having to decide the opportunity cost case by case, which could reduce the number of grants they can make, or again push towards easier-to-evaluate grants.

comment by Jonas Vollmer · 2019-04-11T07:39:10.837Z · score: 4 (3 votes) · EA · GW

I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it's a concern and it updates me a bit towards this not being a good idea.

I wasn't suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they do that already to some extent by providing numerical ratings to grants (as Oliver explains here [EA · GW]). Also, being aware of the $10k rule already creates a small amount of work. Overall, I think the additional amount of work seems negligibly small.

ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.

comment by cole_haus · 2019-04-10T01:44:58.006Z · score: 5 (4 votes) · EA · GW

It's not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total "EA dollars" that the positions cost whereas your model seems to combine "EA dollars" (CEA costs) and "personal dollars" (their total salary).

comment by Habryka · 2019-04-10T03:00:01.423Z · score: 6 (3 votes) · EA · GW

Hmm, I guess it depends a bit on how you view this.

If you model this in terms of "total financial resources going to EA-aligned people", then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.

If you want to model it as "money controlled directly by EA institutions" then it's closer to your number.

I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.

comment by cole_haus · 2019-04-10T05:23:52.556Z · score: 1 (1 votes) · EA · GW

I don't particularly want to try to resolve the disagreement here, but I'd think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people [1]. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?

[1] I'd expect it to vary from person to person depending on their alignment, commitment, competence, etc.

comment by cole_haus · 2019-04-10T01:42:28.034Z · score: 5 (4 votes) · EA · GW

I think you have some math errors:

  • $150k * 1.5 + $60k = $285k rather than $295k
  • Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
comment by Habryka · 2019-04-10T02:51:12.593Z · score: 4 (2 votes) · EA · GW

Ah, yes. The second one. Will update.

comment by Jonas Vollmer · 2019-04-10T13:59:34.112Z · score: 5 (2 votes) · EA · GW

(moved this comment here [EA · GW])

comment by John_Maxwell_IV · 2019-04-09T07:10:17.447Z · score: 10 (14 votes) · EA · GW
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!) However, if I was a grantmaker, I can see why this arrangement would be preferable: Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

I suspect there's low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you'd want each person to be located in a different geographic EA hub. (Perhaps you'd want supernetworkers covering disparate online communities devoted to EA as well.)

This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of "Fund the EA Hotel", maybe the key intervention is "Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses." (BTW, the EA Hotel is actually a pretty good place to be if you're an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)

comment by Habryka · 2019-04-09T16:23:06.148Z · score: 36 (19 votes) · EA · GW
To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network.

That is incorrect. The current grant team was actually explicitly chosen on the basis of having non-overlapping networks. Besides me, nobody lives in the Bay Area (at least not full-time). Here is where I think everyone is living:

  • Matt Fallshaw: Australia (but also travels a lot)
  • Helen Toner: Georgetown (I think)
  • Alex Zhu: No current permanent living location, travels a lot, might live in Boulder starting a few weeks from now
  • Matt Wage: New York

I was also partially chosen because I used to live in Europe and still have pretty strong connections to a lot of European communities (plus my work on online communities makes my network less geographically centralized).

comment by John_Maxwell_IV · 2019-04-09T19:36:53.187Z · score: 5 (3 votes) · EA · GW

Good to know!

comment by RyanCarey · 2019-04-10T22:54:28.137Z · score: 4 (2 votes) · EA · GW

Isn't Matt in HK?

comment by Habryka · 2019-04-10T23:37:05.857Z · score: 4 (2 votes) · EA · GW

He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)

comment by Habryka · 2019-04-09T17:51:47.000Z · score: 28 (9 votes) · EA · GW
Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

At least for me this doesn't really resonate with how I am thinking about grantmaking. The broader EA/Rationality/LTF community is in significant chunks a professional network, and so I've worked with a lot of people on a lot of projects over the years. I've discussed cause prioritization questions on the EA Forum, worked with many people at CEA, tried to develop the art of human rationality on LessWrong, worked with people at CFAR, discussed many important big picture questions with people at FHI, etc.

The vast majority of my interactions with people do not come from parties, but come from settings where people are trying to solve some kind of problem, and seeing how others solve that problem is significant evidence about whether they can solve similar problems.

It's not that I hang out with lots of people at parties, make lots of friends and then that is my primary source for evaluating grant candidates. I basically don't really go to any parties (I actually tend to find them emotionally exhausting, and only go to parties if I have some concrete goal to achieve at one). Instead I work with a lot of people and try to solve problems with them and then that obviously gives me significant evidence about who is good at solving what kinds of problems.

I do find grant interviews more exhausting than other kinds of work, but I think that has to do with the directly adversarial setting in which the applicant is trying their best to seem competent and good, and I am trying my best to get an accurate judgement of their competence, and I think that dynamic usually makes that kind of interview a much worse source of evidence of someone's competence than having worked with them on some problem for a few hours (which is also why work-tests tend to be much better predictors of future job-performance than interview-performance).

comment by Jess_Whittlestone · 2019-04-09T14:00:36.555Z · score: 31 (25 votes) · EA · GW
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I had not available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA Funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know - being able to do this should be a requirement of the job, IMO. Seconding Peter's question below, I'd be keen to hear if there are any plans to make progress on this.

If you really don't have time to vet applicants, then maybe grant decisions should be made blind, purely on the basis of the quality of the proposal. Another option would be to have a more structured/systematic approach to vetting applicants themselves, which could be anonymous-ish: based on past achievements and some answers to questions that seem relevant and important.

comment by Habryka · 2019-04-09T17:01:56.624Z · score: 35 (12 votes) · EA · GW
but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know

To be clear, we did invest time into vetting applications from people we didn't know, we just obviously have limits to how much time we can invest. I expect this will be a limiting factor for any grant body.

My guess is that if you don't have any information besides the application info, and the plan requires a significant level of skill (as the vast majority of grants do), you have to invest at least an additional 5, often 10, hours of effort into reaching out to them, performing interviews, getting testimonials, analyzing their case, etc. If you don't do this, I expect the average grant to be net negative.

Our review period lasted about one month. At 100 applications, assuming that you create an anonymous review process, this would have resulted in around 250-500 hours of additional work, which would have made this the full-time job for 2-3 of the 5 people on the grant board, plus the already existing ~80 hours of overhead this grant round required from the board. You likely would have filtered out about 50 of them at an earlier stage, so you can maybe cut that in half, resulting in ~2 full-time staff for that review period.

I don't think that level of time-investment is possible for the EA Funds, and if you make it a requirement for being on an EA Fund board, the quality of your grant decisions will go down drastically because there are very few people who have a track record of good judgement in this domain, who are not also holding other full-time jobs. That level of commitment would not be compatible with holding another full-time job, especially not in a leadership position.

I do think that at our current grant volume, we should invest more resources into building infrastructure for vetting grant applications. I think it might make sense for us to hire a part-time staff member to help with evaluations and do background research as well as interviews for us. It's currently unclear to me how such a person would be managed and whether their salary would be worth the benefit, but it seems plausibly the correct choice.

comment by Jess_Whittlestone · 2019-04-09T19:14:22.357Z · score: 25 (19 votes) · EA · GW

Thanks for your detailed response Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to be going into these grant reviews.

I don't think it's unreasonable to suggest that it should require 2 people full-time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future. (though I recognise that this decision isn't your responsibility personally!) Maybe it is very difficult for CEA to find people with the relevant expertise who can do that job. But if that's the case, then I think there's a bigger problem (the job isn't being paid well enough, or being valued highly enough by the community), and maybe we should question the case for EA funds distributing so much money.

comment by Habryka · 2019-04-09T20:03:08.622Z · score: 33 (11 votes) · EA · GW

I strongly agree that I would like there to be more people who have the competencies and resources necessary to assess grants like this. With the Open Philanthropy Project having access to ~10 billion dollars, the case for needing more people with that expertise is pretty clear, and my current sense is that there is a broad consensus in EA that finding more people for those roles is among the top priorities, if not the top priority.

I think giving less money to EA Funds would not clearly improve this situation from this perspective at all, since most other granting bodies that exist in the EA space have an even higher (funds-distributed)/staff ratio than this.

The Open Philanthropy Project has about 15-20 people assessing grants, gives out at least $100 million a year, and likely aims to give closer to $1 billion a year given their reserves.

BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million of grants a year.

My guess is that GiveWell also has about 10 staff assessing grants full-time, making grants of about $20 million.

Given the current level of team-member involvement, the significant judgement component in evaluating grants (which lets the average LTF Fund team member act with higher leverage), and the time anyone involved in the LTF landscape has to invest anyway to build models and keep up to speed with recent developments, I actually think the LTF Fund team is able to make more comprehensive grant assessments per dollar granted than almost any other granting body in the space.

I do think that having more people who can assess grants and help distribute resources like this is key, and think that investing in training and recruiting those people should be one of the top priorities for the community at large.

comment by Milan_Griffes · 2019-04-09T20:09:37.033Z · score: 4 (2 votes) · EA · GW
BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million dollars of grants a year

Note that BERI has only existed for a little over 2 years, and their grant-making has been pretty lumpy, so I don't think they've yet reached any equilibrium grant-making rate (one which could be believably expressed in terms of $X dollars / year).

comment by Habryka · 2019-04-09T20:14:54.305Z · score: 10 (3 votes) · EA · GW

I agree. Though I think I expect the ratio of funds-distributed/staff to roughly stay the same, at least for a bit, and probably go up a bit.

I think older and larger organizations will have smaller funds-distributed/staff ratios, but I think that's mostly because coordinating people is hard and the marginal productivity of a hire goes down a lot after the initial founders, so you need to hire a lot more people to produce the same amount of output.

comment by Khorton · 2019-04-10T11:03:40.784Z · score: 26 (11 votes) · EA · GW

I would be in favour of this fund using ~5% of its money to pay for staff costs, including a permanent secretariat. The secretariat would probably decrease pressure on grantmakers a little, and improve grant/feedback quality a little, which makes the costs seem worth it. (I know you've already considered this and I want to encourage it!)

I imagine the secretariat would:

-Handle the admin of opening and advertising a funding round

-Respond to many questions on the Forum, Facebook, and by email, and direct more difficult questions to the correct person

-Coordinate the writing of Forum posts like this

-Take notes on what additional information grantmakers would like from applicants, contact applicants with follow-up questions, and suggest iterations of the application form

-(potentially) Manage handover to new grantmakers when current members step down

-(potentially) Sift through applications and remove those which are obviously inappropriate for the Long Term Future Fund

-(potentially) Provide a couple of lines of fairly generic but private feedback for applicants

comment by Evan_Gaensbauer · 2019-04-17T04:28:15.999Z · score: 5 (3 votes) · EA · GW

This strikes me as a great, concrete suggestion. As I tell a lot of people, great suggestions in EA only go somewhere if someone follows through on them. I would strongly encourage you to develop this suggestion into its own article on the EA Forum about how the EA Funds can be improved. Please let me know if you are interested in doing so, and I can help out. If you don't think you'll have time to develop this suggestion, please let me know, as I would be interested in doing that myself if you don't have the time.

comment by Evan_Gaensbauer · 2019-04-17T03:57:09.103Z · score: 2 (1 votes) · EA · GW

The way the management of the EA Funds is structured makes sense to me given the goals set for the EA Funds. So I think the scenario in which it would make sense to pay 2 people full-time for one month to evaluate EA Funds applications is one where 2 of the 4 volunteer fund managers took a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it'd be too difficult to accomplish effectively in the timeframe of a few months.

In general, one issue the EA Funds face that other granting bodies in EA don't is that the donations come from many different donors. This means that how much the EA Funds receive and distribute, and how it's distributed, is much more complicated than what CEA or a similar organization typically faces.

comment by Milan_Griffes · 2019-04-09T17:24:23.199Z · score: 17 (9 votes) · EA · GW

Thanks for the care & attention you're putting towards all of these replies!

I do think that at our current grant volume, we should invest more resources into building infrastructure for vetting grant applications.

Strong +1.

comment by Evan_Gaensbauer · 2019-04-17T03:47:27.296Z · score: 4 (2 votes) · EA · GW

One issue with this is the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn't a "job" in the most typical sense. Of course a lot of people think it should be treated like one though. When this came up in past discussions regarding how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran up against trade-offs with other priorities for the EA Funds, like not spending too much on overhead, or having the diversity of perspectives that comes with multiple volunteer fund managers.

comment by Jess_Whittlestone · 2019-04-09T19:20:56.698Z · score: 37 (15 votes) · EA · GW

I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were applicants interviewed? Were references collected? Were there general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.

comment by Habryka · 2019-04-09T20:34:02.898Z · score: 65 (20 votes) · EA · GW

Here is a rough summary of the process, it's hard to explain spreadsheets in words so this might end up sounding a bit confusing:

  • We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer) in which they would be encouraged to assign a number from -5 to +5 for each application
  • Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the grant was a good or a bad idea
  • We then had a number of video-chat meetings in which we tried to go through all the grants that had at least one person who thought the grant was a good idea, and had pretty extensive discussions about those grants. During those meetings we also agreed on next actions for follow-ups, scheduling meetings with some of the potential grantees, reaching out to references, etc., the results of which we would then discuss at the next all-hands meeting
  • Interspersed with the all-hands meetings I also had a lot of 1-on-1 meetings (with both other fund-members and grantees) in which I worked in detail through some of the grants with the other person, and hashed out deeper disagreements we had about some of the grants (like whether certain causes and approaches are likely to work at all, how much we should make grants to individuals, etc.)
  • As a result of these meetings there was significant updating of the votes everyone had on each grant, with almost every grant we made having at least two relatively strong supporters and having a total score of above 3 in aggregate votes

However, some fund members weren't super happy about this process and I also think that this process encouraged too much consensus-based decision making by making a lot of the grants with the highest vote scores grants that everyone thought were vaguely a good idea, but nobody was necessarily strongly excited about.

We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly on the discretion of the individual fund member, and we would add up grant allocations from the 45% budget until we ran out of our allocated budget.

Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.

comment by Evan_Gaensbauer · 2019-04-17T08:14:25.450Z · score: 34 (13 votes) · EA · GW

Summary: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling 'counterfactually unique' grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.

Based on that measure, 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. Having read all the comments, multiple concerns with a few specific grants came up, based on uncertainty or controversy in the estimated value of those grant recommendations. Even if we exclude those grants to make a 'conservative' estimate, 16 of 23 grants (~70%), worth $535,150 of $923,150 (~58% of the money to be disbursed), are counterfactually unique and fit a more conservative, risk-averse approach that would have ruled out the more uncertain or controversial successful applicants.

These numbers are an extremely significant improvement in the quality and quantity of unique opportunities for grantmaking the Long-Term Future Fund has made since a year ago. This grant report generally succeeds at achieving a goal of coordinating donations through the EA Funds to unique recipients who otherwise would have been overlooked for funding by individual donors and larger grantmakers. This report is also the most detailed of its kind, and creates an opportunity to create a detailed assessment of the Long-Term Future Fund's track record going forward. I hope the other EA Funds emulate and build on this approach.

General Assessment

In his 2018 AI Alignment Literature Review and Charity Comparison, Ben Hoskins had the following to say about changes in the management structure of the EA Funds.

I’m skeptical this will solve the underlying problem. Presumably they organically came across plenty of possible grants – if this was truly a ‘lower barrier to giving’ vehicle than OpenPhil they would have just made those grants. It is possible, however, that more managers will help them find more non-controversial ideas to fund.

To clarify, the purpose of the EA Funds has been to allow individual donors, who are relatively small compared to grantmakers like the Open Philanthropy Project (i.e., all donors in EA except other professional, private, non-profit grantmaking organizations), to fund higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund in this round of grantmaking, all but $250,000 went to kinds of projects or organizations outside those the Open Philanthropy Project tends to fund. To clarify, there isn't a rule or practice of the EA Funds not making Open-Phil-style grants. It's at the discretion of the fund managers to decide whether to recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects and organizations. In this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement at this time.

20 (~87% of total number) grant recommendations totalling $673,150 = ~73%

+ 3 (~13% of total number) grant recommendations totalling $250,000 = ~27%

= 23 grant recommendations (in total) totalling $923,150 = 100%
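As a sanity check, the breakdown above can be reproduced with a short calculation (the dollar figures are taken directly from this comment and the grant writeups):

```python
# Sanity-check of the grant breakdown above.
unique = 673_150       # 20 counterfactually unique grant recommendations
established = 250_000  # 3 established-org grants: MIRI $50k + CFAR $150k + Ought $50k
total = unique + established

assert total == 923_150
print(f"unique:      {unique / total:.0%}")       # ~73%
print(f"established: {established / total:.0%}")  # ~27%
```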

Since this is the most extensive round of grant recommendations from the Long-Term Future Fund to date under the EA Funds' new management structure, it is the best opportunity so far for evaluating the success of the changes made to how the EA Funds are managed. In this round of grantmaking, 87% of the grant recommendations, totalling 73% of the money to be disbursed, were for efforts that would otherwise have been missed by individual donors and larger grantmaking bodies.

In other words, the Long-Term Future (LTF) Fund is directly responsible for 20 of the 23 grant recommendations made (87%), totalling ~73% of the $923.15K granted, in unique grants that presumably would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund's current rate of efficiency in fulfilling the purposes it was set up for.

Criticisms and Conservative Estimates

Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants donors to the EA Funds very probably could not have identified by themselves. Yet another question is whether they would opt to donate to the grant recommendations that have just been made by the LTF fund managers. Part of the basis for the EA Funds thus far is trusting the fund managers' individual discretion, based on their years of expertise or professional experience working in the respective cause area. My above estimates assume all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a 'liberal' estimate.

I've at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence basis on which some grant recommendations were made wasn't sufficient to justify the grant, i.e., they were 'too risky.' If we exclude grant recommendations that are subject to multiple, unresolved concerns from the LTF Funds, we can make a 'conservative' estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund.

  • Concerns with 1 grant recommendation worth $28,000 to hand out printed copies of the fanfiction HPMoR to international math olympiad medalists.
  • Concerns with 2 grant recommendations worth $40,000 combined for individuals who are not currently pursuing one or more specific, concrete projects, but rather independent research or self-development. The concern is that these grants rest on the fund managers' personal confidence in the individuals, and even the writeups for the grant recommendations expressed concern about the uncertainty in the value of grants like these.
  • Concerns that, with multiple grants made to similar forecasting-based projects, there would be redundancy; in particular, concern with 1 grant recommendation worth $70,000 to the forecasting company Metaculus, which might be better suited to an equity investment in a startup than a grant from a non-profit foundation.

In total, these are 4 grants worth $138,000 with which multiple commenters have raised concerns, on the basis that the uncertainty around these grants means the recommendations don't seem justified. To clarify, I am not making an assumption about what the value of these grants is. All I would say about these particular grants is that they are unconventional, but insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up with grant recipients in the future and publish its takeaways from the grants.

Of the 20 recommendations for unique grant recipients worth $673,150, excluding these 4 recommendations worth $138,000 leaves 16 of 23 recommendations (~70% of the total number), worth $535,150 of $923,150 (~58% of the total value), uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this 'conservative' estimate are ruled out based on the uncertainty or lack of confidence expressed by commenters, not necessarily by the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients about which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the fund managers' best hopes for the value of each of them will be borne out is something to follow up on in the future.
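The conservative estimate follows from the same arithmetic (again using only figures stated in this comment):

```python
# Conservative estimate: drop the 4 contested grants from the 20 unique ones.
unique_total = 673_150
contested = 28_000 + 40_000 + 70_000  # HPMoR copies + two individual grants ($40k combined) + Metaculus
conservative = unique_total - contested
grand_total = 923_150

assert conservative == 535_150
print(f"{conservative / grand_total:.0%}")  # ~58% of all money granted
print(f"{16 / 23:.0%}")                     # ~70% of all recommendations
```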


While these numbers don't address suggestions for how the management of the Long-Term Future Fund could still be improved, overall I would say they show the Long-Term Future Fund has improved extremely significantly since last year at achieving a high rate of counterfactually unique grants to nascent or experimental projects that are typically missed in EA donations. I think with some suggested improvements, like hiring professional clerical assistance with managing the Fund, the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring their track record of success with these grants remains to be done, but this report provides a great foundation for starting to do so.

comment by John_Maxwell_IV · 2019-04-17T19:25:28.267Z · score: 4 (2 votes) · EA · GW
So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

You mean it functions like a venture capital fund or angel investor?

comment by Milan_Griffes · 2019-04-17T16:58:47.615Z · score: 2 (1 votes) · EA · GW

This is great! Thank you for the care & attention you put into creating this audit.

comment by Cullen_OKeefe · 2019-04-10T04:52:06.079Z · score: 32 (14 votes) · EA · GW

Regarding the donation to Lauren Lee:

To the extent that one thinks that funding the runways of burnt-out and/or transitioning EAs is a good idea to enable risk-neutral career decisions (which I do!), I'd note that funding (projects like) the EA Hotel seems like a promising way to do so. The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

comment by Cullen_OKeefe · 2019-04-10T05:19:50.565Z · score: 21 (10 votes) · EA · GW

This could also help free up a significant amount of donation money. My guess is that a central entity that could be (more) risk-neutral than individual EAs would be a more efficient insurer of EA runway needs than individual EAs. Many EAs will never use their runways, and this will mean, at best, significantly delayed donations, which is a high opportunity cost. If runway-saving EAs would otherwise donate (part of) their runways (which I would if I knew the EA community would provide one if needed), there could be net gains in EA cashflow due to the efficiency of a central insurer.

I'm not super confident in this, and I could be wrong for a lot of reasons. Obviously, runways aren't purely altruistic, so one shouldn't expect all runway money to go to donations. And it might be hard or undesirable for EA to provide certain kinds of runway due to, e.g., moral hazard. It might also be hard for EA as a community to provide runways with any reasonable assurance that the outcome will be altruistic (I take this to be one of the main objections to the EA Hotel). Still, I think the idea of insuring EA runway needs could be promising.

comment by toonalfrink · 2019-04-11T19:37:01.113Z · score: 6 (4 votes) · EA · GW

Am certainly open to considering this business model for the hotel.

comment by Milan_Griffes · 2019-04-15T20:35:48.389Z · score: 4 (2 votes) · EA · GW

This is interesting, though the moral hazard / free-riding consideration seems like a big problem.

comment by Cullen_OKeefe · 2019-04-16T03:42:29.014Z · score: 3 (2 votes) · EA · GW

I agree that moral hazard is a problem, but you could also imagine an excludable EA insurance scheme that reduced free-riding. E.g., pay $X/month and if you lose your job you can live here for up to a year.

But since the employed EA community is not as diversified as the whole market, employed EAs may be more liable to systemic shocks that render the insurer insolvent. But of course, there's reinsurance...

comment by toonalfrink · 2019-04-11T19:32:16.668Z · score: 7 (5 votes) · EA · GW

The hotel did apply.

The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

It's about $7500 per person per year.

comment by Milan_Griffes · 2019-04-09T05:21:00.581Z · score: 31 (12 votes) · EA · GW


I’ve gotten a sense that the staff isn’t interested in increasing the number of intro workshops, that the intro workshops don’t feel particularly exciting for the staff, and that most staff are less interested in improving the intro workshops than other parts of CFAR. This makes it less likely that those workshops will maintain their quality and impact, and I currently think that those workshops are likely one of the best ways for CFAR to have a large impact.
CFAR is struggling to attract top talent, partially because some of the best staff left, and partially due to a general sense of a lack of forward momentum for the organization. This is a bad sign, because I think CFAR in particular benefits from having highly talented individuals teach at their workshops and serve as a concrete example of the skills they’re trying to teach.

Why a large, unrestricted grant to CFAR, given these concerns? Would a smaller grant catalyze changes such that the organization becomes cash-flow positive?

By the next grant round, I plan to have spent more time learning and thinking about CFAR’s trajectory and future, and to have a more confident opinion about what the correct funding level for CFAR is.

What is going to happen between now & then that will help you learn enough to have a higher-credence view about CFAR?

Seems like a large, unrestricted grant permits further "business-as-usual" operations. Are "business-as-usual" operations the best state for driving your learning as a grant-maker?

comment by PeterMcCluskey · 2019-04-11T16:37:52.375Z · score: 14 (6 votes) · EA · GW

I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

I don't consider that to be a desirable goal for CFAR.

Habryka's analysis focuses on CFAR's track record. But CFAR's expected value comes mainly from possible results that aren't measured by that track record.

My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren't interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.

comment by Milan_Griffes · 2019-04-15T20:31:38.890Z · score: 2 (1 votes) · EA · GW
I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

Yes, that's roughly what I mean.

I'm gesturing towards "getting to a business structure where it's straightforward to go into survival mode, wherein CFAR maintains core staff & operations via workshop fees."

Seems like in that configuration, the org wouldn't be as buffeted by the travails of a 6-month or 12-month fundraising cycle.

I agree that being entirely supported by workshop fees wouldn't be a desirable goal-state for CFAR. But having a "survival mode" option at the ready for contingencies seems good.

comment by Habryka · 2019-04-09T19:10:23.105Z · score: 9 (3 votes) · EA · GW
Why a large, unrestricted grant to CFAR, given these concerns? Would a smaller grant catalyze changes such that the organization becomes cash-flow positive?

I have two interpretations of what your potential concerns here might be, so might be good to clarify first. Which of these two interpretations is closer to what you mean?

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

comment by Milan_Griffes · 2019-04-09T19:27:10.649Z · score: 3 (2 votes) · EA · GW

I'm curious about both (1) and (2), as they both seem like plausible alternatives that you may have considered.

comment by Habryka · 2019-04-09T23:31:16.690Z · score: 16 (10 votes) · EA · GW

Seems good.

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

I am overall still quite positive on CFAR. I have significant concerns, but the total impact CFAR had over the course of its existence strikes me as very large and easily worth the resources it has taken up so far.

I don't think it's the correct choice for CFAR to take irreversible action right now just because they (correctly) decided not to run a fall fundraiser, and I still assign significant probability to CFAR actually being on the right track to continue having a large impact. My model here is mostly that whatever allowed CFAR to have a historical impact did not break, and so will continue producing value of the same type.

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

I considered this for quite a while, but ultimately decided against it. I think grantmakers should generally be very hesitant to make earmarked or conditional grants to organizations, without knowing the way that organization operates in close detail. Some things that might seem easy to change from the outside often turn out to be really hard to change for good reasons, and this also has the potential to create a kind of adversarial relationship where the organization is incentivized to do the minimum amount of effort necessary to meet the conditions of the grant, which I think tends to make transparency a lot harder.

Overall, I much more strongly prefer to recommend unconditional grants with concrete suggestions for what changes would cause future unconditional grants to be made to the organization, while communicating clearly what kind of long-term performance metrics or considerations would cause me to change my mind.

I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, and generally get a better sense of how CFAR operates and what big-picture effects it has on the long-term future and global catastrophic risk. I think I am likely to then either:

  • make recommendations for a set of changes with conditional funding,
  • decide that CFAR does not require further funding from the LTF,
  • or be convinced that CFAR's current plans make sense and that they should have sufficient resources to execute those plans.

comment by Milan_Griffes · 2019-04-15T20:34:42.402Z · score: 7 (2 votes) · EA · GW

This is super helpful, thanks!

My model here is mostly that whatever allowed CFAR to have a historical impact did not break, and so will continue producing value of the same type.

Perhaps a crux here is whether whatever mechanism historically drove CFAR's impact has already broken or not. (Just flagging, doesn't seem important to resolve this now.)

comment by Habryka · 2019-04-15T23:18:21.550Z · score: 4 (2 votes) · EA · GW

Yeah, that's what I intended to say. "In the world where I come to the above opinion, I expect my crux will have been that whatever made CFAR historically work, is still working"

comment by andzuck · 2019-04-09T20:18:59.764Z · score: 29 (16 votes) · EA · GW

Was wondering if you could explain more about the reasoning for funding Connor Flexman. Right now, the write-up doesn't explain much and makes me curious what "independent research" means. I'd also be interested in learning what past projects Connor has worked on that led to this grant.

comment by Habryka · 2019-04-10T03:30:56.274Z · score: 11 (7 votes) · EA · GW

The primary thing I expect him to do with this grant is to work together with John Salvatier on doing research on skill transfer between experts (which I am partially excited about because that's the kind of thing that I see a lot of world-scale model building and associated grant-making being bottlenecked on).

However, as I mentioned in the review, if he finds that he can't contribute to that as effectively as he thought, I want him to feel comfortable pursuing other research avenues. I don't currently have a short-list of what those would be, but would probably just talk with him about what research directions I would be excited about, if he decides to not collaborate with John. One of the research projects he suggested was related to studying historical social movements and some broader issues around societal coordination mechanisms that seemed decent.

I primarily know about the work he has so far produced with John Salvatier, and also know that he has demonstrated general competence in a variety of other projects, including making money managing a small independent hedge fund, running a research project for the Democracy Defense Fund, doing some research at Brown University, and participating in some forecasting tournaments and scoring well.

comment by Igor Terzic · 2019-04-08T21:16:27.317Z · score: 28 (27 votes) · EA · GW

I'd like to challenge the downside estimate re: HPMoR distribution funding.

So I felt comfortable recommending this grant, especially given its relatively limited downside

I think that funding this project comes with potentially significant PR and reputational risk, especially considering the goals for the fund. It seems like it might be a much better fit for the Meta fund, rather than for the fund that aims to: "support organizations that work on improving long-term outcomes for humanity".

comment by Habryka · 2019-04-10T01:29:58.286Z · score: 10 (4 votes) · EA · GW

Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don't think I understand what kind of reputational risks you are worried about.

comment by cole_haus · 2019-04-10T01:55:37.185Z · score: 23 (10 votes) · EA · GW

I am not OP but as someone who also has (minor) concerns under this heading:

  • Some people judge HPMoR to be of little artistic merit/low aesthetic quality
  • Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)

If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.

Clearly, there are also many people who like HPMoR and don't have the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.

comment by Habryka · 2019-04-10T02:50:36.200Z · score: 16 (11 votes) · EA · GW

Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it's unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn't, as I have argued below).

The outcome might be that some people start disliking HPMoR, but that doesn't seem super bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR on net benefits a lot more from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.

I have some vague feeling that there might be some more weird downstream effects of this, but I don't think I have any concrete models of how they might happen, and would be interested in hearing more of people's concerns.

comment by kbog · 2019-04-12T07:27:01.789Z · score: 3 (4 votes) · EA · GW

Not the book giveaway itself, but posting grant information like this can be very bad PR.

comment by Khorton · 2019-04-12T07:33:56.000Z · score: 1 (1 votes) · EA · GW

I think I agree, but why do you think so?

comment by kbog · 2019-04-12T07:35:51.607Z · score: -8 (6 votes) · EA · GW

I've seen it happen. A grant like this should either not be made, or made in private. Regardless of how well people behave themselves on this forum.

comment by Habryka · 2019-04-08T21:37:19.226Z · score: 9 (3 votes) · EA · GW

(Responding to the second point about which fund is a better fit for this, will respond to the first point separately)

I am broadly confused about how to deal with the "which fund is a better fit?" question. Since it's hard to influence the long-term future, I expect a lot of good interventions to go via the path of first introducing people to the community, building institutions that can improve our decision-making, and generally building positive feedback loops and resources that we can deploy as soon as concrete opportunities show up.

My current guess is that we should check in with the Meta fund and their grants to make sure that we don't make overlapping grants and that we communicate any concerns, but that as soon as there is an application we think is worth it from the perspective of the long-term future that the Meta fund is not covering, we should feel comfortable filling it, independently of whether it looks a bit like EA-Meta. But I am open to changing my mind on this.

comment by Milan_Griffes · 2019-04-08T23:39:23.162Z · score: 3 (2 votes) · EA · GW

Could this be straightforwardly simplified by bracketing out far future meta work as within the remit of the Long Term Future Fund, and all other meta work (e.g. animal welfare institution-building, global development institution-building) as within the remit of the Meta Fund?

Not sure if that would cleave reality at the joints, but seems like it might.

comment by Habryka · 2019-04-08T23:51:01.515Z · score: 10 (5 votes) · EA · GW

I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.

I think a single granting body is likely to end up missing a large number of good opportunities, and general intuitions around hits-based giving make me think that encouraging independence here is better than splitting up every grant into only one domain (this does rely on those granting bodies being able to communicate clearly around downside risk, which I think we can achieve).

comment by rohinmshah · 2019-04-09T16:15:42.475Z · score: 9 (3 votes) · EA · GW

Is this different from having more people on a single granting body?

Possibly with more people on a single granting body, everyone talks to each other more and so can all get stuck thinking the same thing, whereas they would have come up with more / different considerations had they been separate. But this would suggest that granting bodies would benefit from splitting into halves, going over grants individually, and then merging at the end. Would you endorse that suggestion?

comment by Habryka · 2019-04-09T16:18:31.584Z · score: 9 (3 votes) · EA · GW

I don't think you want to go below three people for a granting body, to make sure that you can catch all the potential negative downsides of a grant. My guess is that if you have 6 or more people it would be better to split it into two independent grant teams.

comment by Peter_Hurford · 2019-04-09T05:36:27.927Z · score: 8 (5 votes) · EA · GW

I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.

Yes, this is a great idea to help reduce bias in grantmaking.

comment by jpaddison · 2019-04-08T14:40:22.218Z · score: 27 (16 votes) · EA · GW

Forgive me if you've written it up elsewhere, but do you have a plan for follow-ups? In particular what success looks like in each case.

Thanks for the detailed writeups and for investigating so many grants.

comment by Habryka · 2019-04-08T18:40:48.006Z · score: 13 (8 votes) · EA · GW

I would quite like us to do follow-ups, but the LTF-Fund is primarily time-constrained and solid follow-ups require a level of continuous engagement that I think currently would be quite costly for any of the current fund members.

I do think we might want to look into adding some additional structure to the fund where we maybe employ someone for half-time to follow up with grantees, perform research, help with the writeups, etc. But I haven't thought that through yet.

For now, I expect to perform follow-up evaluations when the same people re-apply for a new grant, in which case I will want to look in detail into how the past grants we gave them performed. I expect a lot of our grantees to reapply, so I do expect this to result in a good amount of coverage. This way there are also real stakes to the re-evaluation, which overall makes me think that I would be more likely to do a good job at them (as well as anyone else who might take them on).

comment by Elityre · 2019-04-11T00:52:04.294Z · score: 24 (12 votes) · EA · GW

A small correction:

Facilitating conversations between top people in AI alignment (I’ve in particular heard very good things about the 3-day conversation between Eric Drexler and Scott Garrabrant that Eli facilitated)

I do indeed facilitate conversations between high-level people in AI alignment. I have a standing offer to help with difficult conversations / intractable disagreements between people working on x-risk or other EA causes.

(I'm aiming to develop methods for resolving the most intractable disagreements in the space. The more direct experience I have trying my existing methods against hard, "real" conversations, the faster that development process can go. So, at least for the moment, it actively helps me when people request my facilitation. And also, a number of people, including Eric and Scott, have found it to be helpful for the immediate conversation.)

However, I co-facilitated that particular conversation between Eric and Scott. The other facilitators were Eliana Lorch, Anna Salamon, and Owen Cotton-Barratt.

comment by Habryka · 2019-04-11T21:14:51.811Z · score: 3 (2 votes) · EA · GW

Will update to say "help facilitate". Thanks for the correction!

comment by Moses · 2019-04-11T18:34:00.403Z · score: 3 (2 votes) · EA · GW

Is there any resource (eg blogpost) for people curious about what "facilitating conversations" involves?

comment by Elityre · 2019-04-12T15:55:00.726Z · score: 16 (10 votes) · EA · GW

At the moment, not really.

There's the classic Double Crux post. Also, here's a post I wrote, that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.

If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.

The good thing I'm aiming for in a conversation is the moment when "that absurd / confused thing that person X was saying" clicks into place, and it doesn't just seem reasonable, it seems like a natural way to think about the situation.

Another frame is, "Everything you need to do to make Double Crux actually work."

A quick list of things conversational facilitation, as I do it, involves:

  • Tracking the state of mind of the participants. Tracking what's at stake for each person.
  • Noticing when Double Illusion of Transparency, or talking past each other, is happening, and having the participants paraphrase or operationalize. Or, in the harder cases, getting each view myself and then acting as an intermediary.
  • Identifying Double Cruxes.
  • Helping the participants to track what's happening in the conversation and how this thread connects to the higher level goals. Cleaving to the query.
  • Keeping track of conversational threads, and of promising conversational tacks.
  • Drawing out and helping to clarify a person's inarticulate objections, when they don't buy an argument but can't say why.
  • Ontological translation: getting each participant's conceptual vocabulary to make natural sense to you, and then porting models and arguments back and forth between the differing conceptual vocabularies.

I don't know if that helps. (I have some unpublished drafts on these topics. Eventually they're to go on LessWrong, but I'm likely to publish rough versions on my musings and rough drafts blog, first.)

comment by Moses · 2019-04-12T16:24:35.017Z · score: 5 (4 votes) · EA · GW

Yes, that helps, thanks. "Mediating" might be a word which would convey the idea better.

comment by Elityre · 2019-04-11T01:03:48.296Z · score: 0 (2 votes) · EA · GW

[Are there ways to delete a comment? I started to write a comment here, and then added a bit to the top-level instead. Now I can't make this comment go away?]

comment by MorganLawless · 2019-04-08T20:36:18.083Z · score: 21 (33 votes) · EA · GW

Mr. Habryka,

I do not believe the $28,000 grant to buy copies of HPMOR meets the evidential standard demanded by effective altruism. “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” With all due respect, it seems to me that this grant feels right but lacks evidence and careful analysis.

The Effective Altruism Funds are "for maximizing the effectiveness of your donations" according to the homepage. This grant's claim that buying copies of HPMOR is among the most effective ways to donate $28,000 by way of improving the long-term future rightly demands a high standard of evidence.

You make two principal arguments in justifying the grant. First, the books will encourage the Math Olympiad winners to join the EA community. Second, the books will teach the Math Olympiad winners important reasoning skills.

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer's TED Talk, "The why and how of effective altruism", is more effective than HPMOR in encouraging effective altruism. It is also free!

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.

I have no idea who Mikhail Yagudin is so have no reason to suspect anything untoward, but the fact that you do not know him or his team augments this grant’s problems, as you are aware.

I understand that the EA Funds are thought of as vehicles to fund higher risk and more uncertain causes. In the words of James Snowden and Elie Hassenfeld, “some donors give to this fund because they want to signal support for GiveWell making grants which are more difficult to justify and rely on more subjective judgment calls, but have the potential for greater impact than our top charities.” They were referring to GiveWell and the Global Health and Development Fund, but I think you would agree that this appetite for riskier donations applies to the other funds, including this Long Term Future Fund.

However, higher risk and uncertainty does not mean no evidentiary standards at all. In fact, uncertain grants such as this one should be accompanied with an abundance of strong intuitive reasoning if there is no empirical evidence to draw from. The reasoning outlined in the forum post does not meet the standard in my view for the reasons I gave in the prior paragraphs.

More broadly, I think this grant would hurt the EA community. Returning to the quote I began with, “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” If I were a newcomer to the EA community and I saw this grant and the associated rationale, I would be utterly disenchanted by the entire movement. I would rightly doubt that this is among the most effective ways to spend $28,000 to improve the long term future and notice the absence of “evidence and careful analysis”. If effective altruism does not demand greater rigor than other charities, then there is no reason for a newcomer to join the effective altruism movement.

So what should be done?

  1. This grant should be directed elsewhere. EA Russia can find other funding to meet its oral promise that should not have been given without already having funding.

  2. EA Funds cannot both be a vehicle for riskier donations as well as the go-to recommendation for effective donations, as is stated in the Introduction to Effective Altruism. This flies in the face of transparency for what a newcomer would expect when donating. This is not the fault of this grant but the grant is emblematic of this broader problem. I also want to reiterate that I think this grant still does not meet the evidentiary standard, even when it is considered under the view of EA Funds as a vehicle for riskier donations.

Sincerely, Morgan Lawless

comment by Misha_Yagudin · 2019-04-08T22:22:19.473Z · score: 36 (13 votes) · EA · GW

Dear Morgan,

In this comment I want to address the following paragraph (#3).

I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.

I think that it is a miscommunication on my side.

EA Russia has the oral agreements with [the organizers of math olympiads]...

We contacted organizers of math olympiads and asked them whether they would like to have HPMoRs as a prize (conditioned on us finding a sponsor). We didn't promise anything to them, and they do not expect anything from us. Also, I would like to say that we hadn't approached them as EAs (as I am mindful of the reputational risks).

comment by Misha_Yagudin · 2019-04-08T22:55:51.100Z · score: 22 (10 votes) · EA · GW

Dear Morgan,

In this comment I want to address the following paragraph (related to #2).

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer's TED Talk, "The why and how of effective altruism", is more effective than HPMOR in encouraging effective altruism. It is also free!

a. While I agree that the books you've mentioned are more directly related to EA than HPMoR, I think it would not be possible to give them as a prize. I think the fact that the organizers whom we contacted had read HPMoR significantly contributed to the possibility of giving anything at all.

b. I share your concern about HPMoR not being EA enough. We hope to mitigate this via a leaflet + SPARC/ESPR.

comment by Ben Pace · 2019-04-09T19:41:44.322Z · score: 20 (8 votes) · EA · GW

I think this comment suggests there's a wide inferential gap here. Let me see if I can help bridge it a little.

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I feel fairly strongly that this goal is still important. I think that the most valuable resource that the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it's not because he has the reasoning skills of a typical math olympiad winner. There are many levels of skill, and Nick Bostrom's is much higher[1].

It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society's massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they'd done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.

In general I think someone's ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA [2]. I don't think that any of the EA materials you mention helps people gain this skill. But I think for some people, HPMOR does.

I'm focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective here, when I look over the grants this feels to me like one of the 'safest bets'. I am interested to know whether this perspective makes the grant's intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.


[1] I am not sure exactly how widespread this knowledge is. Let me just say that it’s not Bostrom’s political skills that got him where he is. When the future-head-of-IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.

[2] Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again I think the first part is more important to get people to do the useful work you've not already figured out how to incentivise - I don't think we've figured it all out yet.

comment by Habryka · 2019-04-08T20:55:01.200Z · score: 17 (8 votes) · EA · GW

Thanks for your long critique! I will try to respond to as much of it as I can.

As I see it, there are four separate claims in your comment, each of which warrants a separate response:

1. The Long-Term Future Fund should make all of its giving based on a high standard of externally transparent evidence

2. Receiving HPMoRs is unlikely to cause the math olympiad participants to start working on the long-term future, or engage with the existing EA community

3. EA Russia has made an oral promise of delivering HPMoRs without having secured external funding first

4. If the Long-Term Future Fund is making grants that are this risky, they should not be advertised as the go-to vehicle for donations

I will start responding to some of them now, but please let me know if the above summary of your claims seems wrong.

comment by Igor Terzic · 2019-04-08T21:05:51.656Z · score: 17 (8 votes) · EA · GW

I don't think that 2) really captures the objection the way I read it. It seems that, on the margin, there are much more cost-effective ways of engaging math olympiad participants, and that the content distributed could be much more directly EA/AI-related at lower cost than distributing 2,000 pages of hard-copy HPMoR.

comment by Jan_Kulveit · 2019-04-08T23:16:52.995Z · score: 42 (13 votes) · EA · GW

I don't think anyone should be trying to persuade IMO participants to join the EA community, and I also don't think giving them "much more directly EA content" is a good idea.

I would prefer Math Olympiad winners to think long-term, think better, and think independently, than to "join the EA community". HPMoR seems ok because it is not a book trying to convince you to join a community, but mostly a book about how to think, and a good read.

(If the readers eventually become EAs after reasoning independently, that's likely good; if they, for example, come to the conclusion that there are major flaws in EA and it's better to engage with the movement critically, that's also good.)

comment by Habryka · 2019-04-08T23:26:20.852Z · score: 7 (2 votes) · EA · GW

Agree with this.

I do think there is value in showing them that there exists a community that cares a lot about the long-term future, and some value in them collaborating with that community instead of going off and doing their own thing, but the first priority should be to help them think better, and to think about the long term at all.

I think none of the other proposed books achieve this very well.

comment by MorganLawless · 2019-04-08T21:05:06.720Z · score: 16 (9 votes) · EA · GW

Hello, first of all, thank you for engaging with my critique. I have some clarifications for your summary of my claims.

  1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

  2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

  3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

  4. Not necessarily that risky funds shouldn't be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.

comment by Habryka · 2019-04-10T00:08:23.949Z · score: 15 (4 votes) · EA · GW

Sorry for the delay, others seem to have given a lot of good responses in the meantime, but here is my current summary of those concerns:

1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

By word-count the HPMOR writeup is (I think) among the three longest writeups that I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify that.

The core arguments that I provided in the writeup above seem sufficiently strong to me, not necessarily strong enough to convince a completely independent observer, but for someone with context about community building and the general work done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.

I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, while not constraining themselves to only account for evidence that is easily communicable to other people. They then should also invest significant resources into communicating whatever can be communicated about their reasons and intuitions and actively seek out counterarguments and additional evidence that would change their mind.

2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:

  • I don't think the aim of this grant should be "to recruit IMO and EGMO winners into the EA community". I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term future, teaching them basic thinking tools, and giving them opportunities to talk to others who have similar interests.
    • I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read the book and found it interesting on its own merits, and would be glad to receive it as a gift. I don't think any of the other books you proposed would be received in the same way; I think they would be much more likely to be received as advocacy material trying to recruit the students into some kind of in-group.
    • Jan's [EA · GW] comment [EA · GW] summarized the concerns I have here reasonably well.
  • As Misha said [EA · GW], this grant is possible because the IMO and EGMO organizers are excited about giving out HPMoRs as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that did not go through the organizers of these events).
  • As Ben Pace said [EA · GW], I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed: the absolute best people tend to have a massively outsized impact with their contributions. Improving the reasoning and judgement ability of some of the best people on the planet strikes me as quite valuable.
3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

Misha responded to this [EA · GW]. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn't be any books given out at IMO or EGMO. All the organizers did was ask whether they would be able to give out prizes, conditional on finding someone to sponsor them. I don't see any problems with this.

4. Not necessarily that risky funds shouldn't be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.

My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the section "Why might you choose to not donate to this fund?":

First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.

This is the first and top reason we list for why someone might not want to donate to this fund. It doesn't necessarily translate directly into risky grants, but I think it does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track records.

From a communication perspective, one of the top reasons why I invested so much time into this grant writeup is to be transparent about the kinds of interventions we are likely to fund, and to help donors decide whether they want to donate to this fund. At the least, I will continue advocating for early-stage and potentially weird-looking grants as long as I am part of the LTF board, and donors should know that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund team that we update our fund page with that wording.

comment by MorganLawless · 2019-04-10T17:51:36.841Z · score: 3 (2 votes) · EA · GW

Thanks for the response. I don’t have the time to draft a reply this week but I’ll get back to you next week.

comment by tcheasdfjkl · 2019-04-08T05:28:35.253Z · score: 17 (15 votes) · EA · GW

"Mikhail Yagudin ($28,000): Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020"

Why does this cost so much?

comment by Habryka · 2019-04-08T05:36:44.176Z · score: 11 (8 votes) · EA · GW

It's a pretty large number of books, from the application:

Giving HPMoRs out would allow EA or Rationalist communities to establish initial contact with about 650 gifted students (~200 for EGMO and ~450 for IMO)
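For reference, the per-unit cost implied by these numbers can be sanity-checked with a quick calculation (this assumes, as a simplification, that the full $28k grant goes toward the ~650 book sets):

```python
# Hypothetical sanity check: divide the grant total by the number of
# students mentioned in the application (~200 EGMO + ~450 IMO).
grant_total = 28_000        # USD
students = 200 + 450        # ~650 gifted students
per_unit = grant_total / students
print(round(per_unit, 2))   # ~43.08 USD per book set
```

This matches the roughly $43/unit figure discussed in the replies below.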
comment by matthew.vandermerwe · 2019-04-08T13:31:35.982Z · score: 16 (14 votes) · EA · GW

$43/unit is still quite high - could you elaborate a bit more?

comment by Misha_Yagudin · 2019-04-08T16:50:27.120Z · score: 22 (11 votes) · EA · GW

Hi Matthew,

1. $43/unit is an upper bound. While submitting the application, I was uncertain about the price of on-demand printing. My current best guess is that EGMO book sets will cost $34–40. I expect printing costs for IMO to be lower (economies of scale).

2. HPMOR is quite long (~2007 pages according to Goodreads). Each EGMO book set consists of 4 hardcover books.

3. There is an opportunity to trade off money for prestige by printing only the first few chapters.

comment by matthew.vandermerwe · 2019-04-09T09:31:47.747Z · score: 23 (10 votes) · EA · GW

Thanks for clarifying, that seems reasonable.

FWIW I share the view that sending all 4 volumes might not be optimal. I think I'd find it a nuisance to receive such a large/heavy item (~3 litres/~2kg by my estimate) unsolicited.

comment by RyanCarey · 2019-04-08T17:17:53.418Z · score: 16 (10 votes) · EA · GW

It's a bit surprising to me that you'd want to send all four volumes.

comment by alexlintz · 2019-04-09T07:26:13.153Z · score: 18 (10 votes) · EA · GW

Yeah, I tend to agree that sending the whole thing is unnecessary. The printed version of the first 17 chapters distributed at CFAR workshops (I think; I haven't actually been to one) is enough to get people engaged enough to move to the online medium. I'm guessing sending just that small-looking book will make people more likely to read it, as seeing a 2,000-page book would definitely be intimidating enough to stop many from actually starting.

I do tend to think giving the print version is useful as it incurs some sort of reciprocity which should incentivize reading it.

comment by Habryka · 2019-04-08T18:36:23.312Z · score: 12 (9 votes) · EA · GW

I think it's worth trying. My model is that making a good first impression with a top IMO performer is easily worth $50+, and I think the logistics play out such that you pay about $30 extra to make a significantly better impression than by handing out a small 16-chapter booklet, which seems worth it.

comment by RobBensinger · 2019-04-09T06:21:44.792Z · score: 41 (20 votes) · EA · GW

Money-wise this strikes me as a fine thing to try. I'm a little worried that sending people the entire book set might cause some people to not read it who would have read a booklet, because they're intimidated by the size of the thing.

Psychologically, people generally need more buy-in to decide "I'll read the first few chapters of this 1800-page multi-volume book and see what I think" than to decide "I'll read the first few chapters of this 200-page book that has five sequels and see what I think", and even if the intended framing is the latter one, sending all 1800 pages at once might cause some people to shift to the former frame.

One thing that can help with this is to split HPMoR up into six volumes rather than four, corresponding to the book boundaries Eliezer proposed (though it seems fine to me if they're titled 'HPMoR Vol. 1' etc.). Then the first volume or two will be shorter, and feel more manageable. Then perhaps just send the first 3 (or 2?) volumes, and include a note saying something like 'If you like these books, shoot us an email at [email] and we'll ship you the second half of the story, also available on hpmor.com.'

This further carves up the reading into manageable subtasks in a physical, perceptual way. It does carry the risk that some people might stop when they get through the initial volumes. It might be a benefit in its own right to cause email conversations to happen, though, since a back-and-forth can lead to other useful things happening.

comment by Habryka · 2019-04-09T17:12:51.391Z · score: 13 (7 votes) · EA · GW

The thing that makes me more optimistic here is that the organizers of IMO and EGMO themselves have read HPMoR, and that the books are (as far as I understand it) handed out as part of the prize package of IMO and EGMO.

I think this makes it more natural to award a large significant-seeming prize, and also comes with a strong encouragement to actually give the books a try.

My model is that only awarding the first book would feel a lot less significant, and my current model of human psychology suggests that while some people will indeed feel intimidated by the length of the book, the combined effect of being given a much smaller-seeming gift, plus the inconvenience of having to send an email or fill out a form or go to a website to continue reading, is larger than the effect of the book's size being overwhelming.

The other thing that full physical copies enable is book-lending. I printed a full copy of HPMoR a few years ago and have lent it out to at least 5 people, maybe one of whom would have read the book if I had just sent them a link or lent them the first few chapters (I have given out the small booklets and generally had less success with them than with lending out parts of my full printed set).

However, I am not super confident of this, and the tradeoff strikes me as relatively close. Yesterday I also had a longer conversation about this on the EA Corner Discord, and after chatting with me for a while a lot of people seemed to think that giving out the whole book was the better idea, though it did take a while, which is some evidence of inferential distance.

comment by RobBensinger · 2019-04-09T19:26:58.637Z · score: 3 (2 votes) · EA · GW

That all makes sense. In principle I like the idea of trying both options at some point, in case one turns out to be obviously better. I do think that splitting things up into 6 books rather than 4 is better, costs allowing, so that the first chunk of effort feels smaller.

comment by Habryka · 2019-04-09T20:06:26.680Z · score: 11 (5 votes) · EA · GW

I do agree with that, and this also establishes a canonical way of breaking the books up into parts. @Misha: Do you think that's an option?

comment by Misha_Yagudin · 2019-04-11T17:40:05.977Z · score: 11 (5 votes) · EA · GW

Oliver, Rob, and others, thank you for your thoughts.
1. I don't think that experimenting with the variants is an option for EGMO [severe time constraints].
2. For IMO we have more than enough time, and I will incorporate the feedback and considerations into my decision-making.

comment by BryonyMC · 2019-04-21T19:19:34.359Z · score: 1 (1 votes) · EA · GW

Food for thought, just in thinking about how to maximize the value of experimenting with distribution: an alternative approach would be to print the first book, distribute it at the math olympiads, and invest the rest of the money into converting HPMOR into a podcast/audiobook that can be shared more widely, along with a "next steps" resource to guide readers. If distributing the books fails (depending on your definition of distribution being a "success"), you avoid sinking $28k into books sitting on shelves at home, and you now have a widely available podcast (accessible for free or for a small donation) that can increase HPMOR's reach over time. (FYI, the funds raised through small donations for access could be used to sponsor future printings for youth competitions.)

A podcast or a revamped online version becomes a renewable resource, whereas once those books are distributed, they (and the money) are gone. For those interested, the model that comes to mind is Harry Potter and the Sacred Text. Using Harry Potter to convey certain ideas or messages is not uncommon, given its global reach. HPST is using it for different reasons, obviously, but how they are distributing the idea might be worth pursuing with HPMOR too. The HP Alliance is another group using HP to convey a message (their focus is on political and social activism). HPMOR could have greater long-term value if there were alternative methods for accessing it beyond a 2,000-page series.

comment by Ben Pace · 2019-04-21T21:02:42.605Z · score: 11 (4 votes) · EA · GW

A high quality podcast has been made (for free, by the excellent fanbase). It’s at www.hpmorpodcast.com.

comment by BryonyMC · 2019-04-21T22:33:19.971Z · score: 3 (2 votes) · EA · GW

This is great, thank you! Surprised I haven't stumbled across this before... Even better that it's already an available resource; it seems worth sharing with the IMO students and other relevant groups (which was the essence of my suggestion above).

comment by Denkenberger · 2019-04-08T06:42:43.142Z · score: 7 (5 votes) · EA · GW

And why so much focus on math rather than science/engineering?

comment by Habryka · 2019-04-08T07:08:15.069Z · score: 7 (6 votes) · EA · GW

I've considered grants to give books out at engineering-focused competitions (the same group the current grant goes to also asked whether we would be interested in giving out books to other competition communities), but I currently think the value of math olympiads is likely to be the highest, for the following reasons:

1. There are positive feedback loops in having other institutions in place to serve as a point of contact for people who end up being inspired by the books. For math olympiad winners we have SPARC and ESPR as well as a broader existing network of people engaged with the math olympiad community. This is less the case for other competitions.

2. My sense is that of the olympiad and competition communities, the math olympiad community is the largest and tends to attract the best people.

3. I think mathematics skill is a more direct predictor of general intelligence than other skills, and also seems more relevant to some of the problems that I am most concerned about solving, like technical problems around AI Alignment.

I am thinking about recommending grants to additionally give books to be handed out at other competitions, but I think we should wait and see how these grants play out before we invest more resources into giving out books in this way.

comment by Misha_Yagudin · 2019-04-08T17:12:24.665Z · score: 16 (11 votes) · EA · GW

A bit of a tangent to #3: it seems to me that solving AI Alignment requires breakthroughs, and the demographic we are targeting is potentially very well equipped to produce them.

According to “Invisible Geniuses: Could the Knowledge Frontier Advance Faster?” (Agarwal & Gaule 2018), IMO gold medalists are 50x more likely to win a Fields Medal than PhD graduates of US top-10 math programs. (h/t Gwern)

comment by Jonas Vollmer · 2019-04-08T14:39:15.747Z · score: 10 (6 votes) · EA · GW

On #3, this goes in a similar direction.

comment by Milan_Griffes · 2019-04-09T00:47:55.193Z · score: 15 (5 votes) · EA · GW
Overall, I think it's likely that staff at highly valuable EA orgs will continue burning out, and I don't currently see it as an achievable target to not have this happen (though I am in favor of people working on solving the problem).

Very curious to read more about your view on this at some point (perhaps would be best as a standalone post).

From my present vantage, if it's likely that staff at EA orgs will continue burning out in a nonstochastic way, working to address that seems incredibly leveraged.

Broadly, poor mental health & burnout seem quite tractable. See:

And perhaps there are tractable things that can be changed about the organizational & social cultures in which employees of these orgs exist.

comment by Habryka · 2019-04-09T02:33:46.422Z · score: 18 (8 votes) · EA · GW

I agree that I might want to write a top-level post about this at some point. Here is a super rough version of my current model:

To do things as difficult as the ones EAs are trying to do, you usually need someone to throw basically everything they have behind it, similar to my model of early-stage startups. At the same time, your success rates won't be super high, because the problems we are trying to solve are often of massive scale, often lack concrete feedback loops, and don't have many proven solutions.

And even if you succeed to some degree, it's unlikely that you will be rewarded with an amount of status or resources comparable to what you would get for building a successful startup. My model is that EA org success tends to look weird and not really translate into wealth or status in the broader world. This puts a large cognitive strain on you, in particular given the tendency toward high scrupulosity in the community, by introducing cognitive dissonance between your personal benefit and your moral ideals.

This is combined with an environment that is starved of management capacity, and so has very little room to give people feedback on their plans and actions.

Overall I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long-run I don't expect that we can do much better than startup founders, at least for a lot of the people who join early-stage organizations.

comment by Milan_Griffes · 2019-04-09T05:46:43.323Z · score: 6 (5 votes) · EA · GW

Thanks for this.

Overall I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long-run I don't expect that we can do much better than startup founders, at least for a lot of the people who join early-stage organizations.

There's more to say here, but for now I'll just note that everything in the model above this paragraph is compatible with a world where burnout & mental health are very tractable & very leveraged (and also compatible with a world where they aren't):

  • "throwing everything you have towards the problem" – nudge work norms, group memes, and group myths toward more longterm thinking (e.g. Gwern's interest in Long Content and the Long Now)
  • "massive scale problems" – put more effort towards chunking the problems into easy-to-operationalize chunks
  • "lack of concrete feedback loops" – build more concrete feedback loops, and/or build work methodologies that don't rely on concrete feedback loops (e.g. Wiles' proof of Fermat's Last Theorem)
  • "lack of proven solutions" – prove out solutions, and study what has worked for longterm-thinking cultures in the past. (Some longterm-thinking cultures: China, the Catholic Church, most of Mahayana Buddhism, Judaism)
  • "high-scrupulosity culture" – nudge the culture towards a lower-neuroticism equilibrium
  • "starved on management capacity" – study what has worked for great managers & great institutions in the past, distill lessons from that, then build a culture that trains up strong managers internally and/or attracts great managers from the broader world

Also there's the more general strategy of learning about cultures where burnout isn't a problem (of which there are many), and figuring out what can be brought from those cultures to EA.

comment by oliverbramford · 2019-04-10T14:28:27.610Z · score: 14 (7 votes) · EA · GW

Would you be able to provide any further information regarding the reasons for not recommending the proposal I submitted for an 'X-Risk Project Database'? Ask: $12,375 for user research, setup, and feature development over 6 months.

Project summary:

Create a database of x-risk professionals and their work, starting with existing AI safety/x-risk projects at leading orgs, to improve coordination within the field.

The x-risk field and subfields are globally distributed and growing rapidly, yet x-risk professionals still have no simple way to find out about each other’s current work and capabilities. This results in missed opportunities for prioritisation, feedback and collaboration, thus retarding progress. To improve visibility and coordination within the x-risk field, and to expedite exceptional work, we will create a searchable database of leading x-risk professionals, organisations and their current work.

Application details

p.s. applause for the extensive explanations of grant recommendations!!

comment by cstx · 2019-04-10T21:20:14.456Z · score: 9 (6 votes) · EA · GW

This database from Issa Rice seems relevant to your proposal: https://aiwatch.issarice.com

comment by Habryka · 2019-04-10T19:27:44.868Z · score: 6 (4 votes) · EA · GW

I will get back to you, but it will probably be a few days. It seems fairer to first send feedback to the people I said I would send private feedback to, and then come back to the public feedback requests.

comment by Milan_Griffes · 2019-04-08T23:19:59.302Z · score: 12 (8 votes) · EA · GW

Many of these grants seem to fall under the remit of the EA Meta Fund.

Could you expand more about how Long Term Future Fund grant-making is differentiated from Meta Fund grant-making?

comment by Habryka · 2019-04-08T23:32:40.076Z · score: 8 (5 votes) · EA · GW

I made a short comment here [EA · GW] about this, though obviously there is more to be said on this topic.

comment by baleparalysis · 2019-04-08T20:10:12.293Z · score: 12 (37 votes) · EA · GW

Skeptical about the cost effectiveness of several of these.

Ought - 50k. "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Is that really your aim now, being a grant dispenser for random AI companies? What happened to saving lives?

"Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." If they have enough money and this is a token grant, why is it 50k? Why not reduce to 15-20k and spend the rest on something else?

Metaculus - 70k, Ozzie Gooen - 70k, Jacob Lagerros - 27k. These are small companies that need funding; why are you acting as grant-givers here rather than as special interest investors?

Robert Miles, video content on AI alignment - 39k. Isn't this something you guys and/or MIRI should be doing, and could do quickly, for a lot less money, without having to trust that someone else will do it well enough?

Fanfiction handouts - 28k. What's the cost breakdown here? And do you really think this will make you be taken more seriously? If you want to embrace this fanfic as a major propaganda tool, it certainly makes sense to get it thoroughly edited, especially before doing an expensive print run.

CFAR - 150k(!). If they're relying on grants like this to survive, you should absolutely insist that they downsize their staff. This funding definitely shouldn't be unrestricted.

Connor Flexman - 20k. "Techniques to facilitate skill transfer between experts in different domains" is very vague, as is "significant probability that this grant can help Connor develop into an excellent generalist researcher". I would define this grant much more concretely before giving it.

Lauren Lee - 20k. This is ridiculous, I'm sure she's a great person but please don't use the gift you received to provide sinecures to people "in the community".

Nikhil Kunapuli's research - 30k and Lucius Caviola's postdoc - 50k. I know you guys probably want to go in a think-tanky direction but I'm still skeptical.

The large gift you received should be used to expand the influence of EA as an entity, not as a one-off. I think you should reconsider grants vs investment when dealing with small companies, the CFAR grant also concerns me, and of course in general I support de-emphasizing AI risk in favor of actual charity.

comment by Peter_Hurford · 2019-04-08T21:15:25.084Z · score: 53 (27 votes) · EA · GW

This comment strikes me as quite uncharitable, but asks really good questions that I do think would be good to see more detail on.

comment by Habryka · 2019-04-08T23:56:53.902Z · score: 15 (7 votes) · EA · GW

I would be interested in other people creating new top-level comments with individual concerns or questions. I think I have difficulty responding to this top-level comment, and expect that other people stating their questions independently will overall result in better discussion.

comment by aarongertler · 2019-04-08T22:38:19.513Z · score: 33 (10 votes) · EA · GW
The large gift you received should be used to expand the influence of EA as an entity, not as a one-off [...] and of course in general I support de-emphasizing AI risk in favor of actual charity.

While I'm not involved in EA Funds donation processing or grantmaking decisions, I'd guess that anyone making a large gift to the Far Future Fund does, in fact, support emphasizing AI risk, and considers funding this branch of scientific research to be "actual charity".

It could make sense for people with certain worldviews to recommend that people not donate to the fund for many reasons, but this particular criticism seems odd in context, since supporting AI risk work is one of the fund's explicit purposes.


I work for CEA, but these views are my own.

comment by baleparalysis · 2019-04-08T23:29:03.777Z · score: -1 (2 votes) · EA · GW

If the donation was specifically earmarked for AI risk, that aside isn't relevant, but most of the comment still applies. Otherwise, AI risk is certainly not the only long-term problem.

comment by Habryka · 2019-04-08T23:35:53.955Z · score: 3 (2 votes) · EA · GW

I was not informed of any earmarking, so I don't think there were any stipulations around that donation.

comment by RyanCarey · 2019-04-08T20:34:25.852Z · score: 32 (17 votes) · EA · GW

It would be really useful if this was split up into separate comments that could be upvoted/downvoted separately.

comment by Milan_Griffes · 2019-04-08T20:45:38.166Z · score: 9 (6 votes) · EA · GW

+1. I have pretty different thoughts about many of the points you raise.

comment by Jan_Kulveit · 2019-04-09T00:02:29.425Z · score: 7 (7 votes) · EA · GW

I don't think the karma/voting system should be given that much attention or should be used as highly visible feedback on project funding.

comment by Habryka · 2019-04-09T00:10:26.239Z · score: 21 (8 votes) · EA · GW

I do think that it would help independently of that by allowing more focused discussion on individual issues.

comment by Milan_Griffes · 2019-04-09T00:29:56.702Z · score: 5 (3 votes) · EA · GW

{Made this a top-level comment at Oli's request.}

comment by Habryka · 2019-04-09T01:40:14.017Z · score: 4 (3 votes) · EA · GW

(Will reply to this if you make it a top-level comment, like the others)

comment by Milan_Griffes · 2019-04-09T05:22:14.079Z · score: 4 (2 votes) · EA · GW

K, it's now top-level.

comment by Jan_Kulveit · 2019-04-09T00:24:20.137Z · score: 5 (4 votes) · EA · GW

To clarify: I agree with the benefits of splitting the discussion threads for readability, but I was unenthusiastic about voting being the motivation.

comment by Milan_Griffes · 2019-04-09T00:22:09.095Z · score: 3 (2 votes) · EA · GW

Ought: why provide $50,000 to Ought rather than ~$15,000, given that they're not funding constrained?

comment by Habryka · 2019-04-09T00:30:00.337Z · score: 3 (2 votes) · EA · GW

(Top-level seems better, but will reply here anyway)

The Ought grant was one of the grants I was least involved in, so I can't speak super much to the motivation behind that one. I think you will want to get Matt Wage's thoughts on that.

comment by Milan_Griffes · 2019-04-09T01:39:37.449Z · score: 2 (1 votes) · EA · GW

Cool, do you know if he's reading & reacting to this thread?

comment by Habryka · 2019-04-09T02:35:03.594Z · score: 3 (2 votes) · EA · GW

Don't know. My guess is he will probably read it, but I don't know whether he will have the time to respond to comments.

comment by aarongertler · 2019-04-08T22:41:56.136Z · score: 30 (19 votes) · EA · GW
Robert Miles, video content on AI alignment - 39k. Isn't this something you guys and/or MIRI should be doing, and could do quickly, for a lot less money, without having to trust that someone else will do it well enough?

Creating good video scripts is a rare skill. So is being able to explain things on a video in a way many viewers find compelling. And a large audience of active viewers is a rare resource (one Miles already has through his previous work).

I share some of your questions and concerns about other grants here, but in this case, I think it makes a lot of sense to outsource this tricky task, which most organizations do badly, to someone with a track record of doing it well.


I work for CEA, but these views are my own.

comment by Ozzie Gooen (oagr) · 2019-04-09T22:04:40.191Z · score: 42 (17 votes) · EA · GW

I honestly think this was one of the more obvious ones on the list. 39k for one full year of work is a bit of a steal, especially for someone who already has the mathematical background, video production skills, and audience. I imagine if CEA were to try to recreate that it would have a pretty hard time, plus the recruitment would be quite a challenge.

comment by Cullen_OKeefe · 2019-04-10T21:06:20.267Z · score: 24 (12 votes) · EA · GW

I second this analysis and agree that this was a great grant. I was considering donating to Miles' Patreon but was glad to see the Fund step in to do so instead. It's more tax-efficient to do it that way. Miles is a credible, entertaining, informative source on AI Safety and could be a real asset to beginners in the field. I've introduced people to AIS using his videos.

comment by Ozzie Gooen (oagr) · 2019-04-09T22:09:40.099Z · score: 24 (6 votes) · EA · GW

"Metaculus - 70k, Ozzie Gooen - 70k, Jacob Lagerros - 27k. These are small companies that need funding; why are you acting as grant-givers here rather than as special interest investors?"

I'm not sure why you think all of these are companies. Metaculus is a company, but the other two aren't.

Personally, I think it would be pretty neat if this group (or a similar one) were to later set up the legal infrastructure to properly invest in groups where that would make sense. But this would take quite a bit of time (both in fixed and marginal costs), and with only a few such groups per year (one, in this case, I believe), it is probably not worth it.

comment by Milan_Griffes · 2019-04-09T00:36:18.292Z · score: 10 (5 votes) · EA · GW

Ozzie's grant: How is Foretold differentiated from Ought's Mosaic? From a quick look, they appear to be attacking a similar problem-space.

Does Ozzie have a go-to-market strategy? Seems like a lot of what he's doing would be very profitable & desired by many companies, if executed well.

Relatedly, why not take an equity stake in Ozzie's project, rather than structure this as a donation?

comment by Habryka · 2019-04-09T02:23:06.911Z · score: 11 (6 votes) · EA · GW

Ozzie was the main developer behind the initial version of Mosaic, so I do expect some of the overlap to be Ozzie's influence.

I don't think I want Ozzie to commit at this point to being a for-profit entity with equity to be given out. It might turn out that the technology he is developing is best built on a non-profit basis. It also seems legally quite confusing/difficult for the LTF Fund to own a stake in someone else's organization (I don't even know whether that's compatible with being a 501(c)(3)).

I expect Ozzie to be better placed to talk about his own go-to-market strategy instead of me guessing at Ozzie's intentions. I obviously have my own models of what I expect Ozzie to do, but in this case it seems better for Ozzie to answer that question.

comment by Milan_Griffes · 2019-04-09T05:24:41.988Z · score: 2 (1 votes) · EA · GW
Ozzie was the main developer behind the initial version of Mosaic, so I do expect some of the overlap to be Ozzie's influence.

Right, I'm wondering about the bull case for simultaneously funding the two projects.

comment by Habryka · 2019-04-09T19:16:04.489Z · score: 4 (2 votes) · EA · GW

Hmm, since I was relatively uninvolved with the Ought grant I have some difficulty giving a concrete answer to that. From an all things considered view (given that Matt was interested in funding it) I think both grants are likely worth funding, and I expect the two organizations to coordinate in a way to mostly avoid unnecessary competition and duplicate effort.

comment by Milan_Griffes · 2019-04-09T19:28:47.903Z · score: 2 (1 votes) · EA · GW
I expect the two organizations to coordinate in a way to mostly avoid unnecessary competition and duplicate effort.

Curious to hear more about what that will look like, though probably Ozzie's better positioned to reply.

comment by Ozzie Gooen (oagr) · 2019-04-09T21:58:42.073Z · score: 42 (11 votes) · EA · GW

Happy to chime in here.

I've previously worked at Ought for a few months and helped them make Mosaic. I've been talking a decent amount with different Ought team members. We share a broad interest in how to break down reasoning, but are executing on it in very different ways. Mosaic works by breaking down a very large space of problems into tiny text subproblems. I'm working on a prediction application, which works by having people predict probabilities of future events and separately share information about their thinking. I think that essentially no one who sees both applications would consider them equivalent.

I'm doing very similar work as part of my research at FHI. The plan is not to attempt to become a business or monetize in the foreseeable future. I've been around the startup scene a lot, and have come to better understand the limitations that come with funding work commercially. In almost all cases, from what I can tell, experimental and charitable ambitions get pushed aside in order to optimize for profits. I considered this with Guesstimate: originally I thought I could make a business that would also be useful to EAs, but later realized that would be exceptionally difficult. Most realistic business strategies looked like domain-specific tools, for instance a real-estate-specific distribution application, which would sell a lot more but be quite useless for EA causes.

In this case, my first and main priority is to experiment/innovate in the space. I think that doing this in the research setting at this point will be the best way to ensure that the work stays focussed on the long-term benefits.

Hypothetically, if in a few years we wind up with something that was optimized for EA uses, but happens to be easily monetizable with low effort, then that could be a useful way of helping to fund things. However, I really don't want to commit to a specific path at this stage, which is really early on.

comment by Milan_Griffes · 2019-04-09T22:51:07.891Z · score: 4 (2 votes) · EA · GW

Awesome, thanks for jumping in!

Most realistic business strategies for sales looked like domain-specific tools, for instance, a real-estate-specific distribution application, which would sell a lot more but be quite useless for EA causes.

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

comment by Raemon · 2019-04-09T22:58:01.056Z · score: 6 (3 votes) · EA · GW
What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

Is there a particular reason to assume that'd be a good idea?

comment by Milan_Griffes · 2019-04-09T23:05:26.644Z · score: 4 (2 votes) · EA · GW
comment by Raemon · 2019-04-10T02:35:22.401Z · score: 8 (2 votes) · EA · GW

I'm familiar with good things coming out of those places, but not sure why they're the appropriate lens in this case.

Popping back to this:

What do you think about building a company around e.g. the real-estate-specific app, and then housing altruistic work in a "special projects" or "research" arm of that company?

This makes more sense to me when you actually have a company large enough to theoretically have multiple arms. AFAICT there are no arms here, there are just like 1-3 people working on a thing. And I'd expect getting to the point where you could have that requires at least 5-10 years of work.

What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?

comment by Milan_Griffes · 2019-04-10T02:45:33.517Z · score: 5 (4 votes) · EA · GW
What's the good thing that happens if Ozzie first builds a profitable company and only later works in a research arm of that company, that wouldn't happen if he just became "the research arm of that company" right now?
  • More social capital & prestige
  • More robust revenue situation
  • More freedom to act opportunistically to support other projects that you care about (as a small-scale angel funder; as a mentor)
  • (probably) More learning about how organizations work from setting up an org with diverse stakeholders

Case study: Matt Fallshaw & how Bellroy enabled TrikeApps, which supported a lot of good stuff (e.g. LessWrong 1.0). But Bellroy just sells (nice) wallets.

comment by Milan_Griffes · 2019-04-10T02:49:53.689Z · score: 2 (1 votes) · EA · GW

Also to clarify: I'm imagining Ozzie + co-founders build a company, and Ozzie dedicates a fair bit of his capacity to research all along the way.

comment by Raemon · 2019-04-10T03:19:37.166Z · score: 10 (3 votes) · EA · GW

Part of my thinking here is that this would be a mistake: focus and attention are some of the most valuable things, and splitting your focus is generally not good.

comment by Milan_Griffes · 2019-04-10T04:08:02.480Z · score: 7 (4 votes) · EA · GW

This seems highly person-dependent: definitely true for some people, definitely not true for others.

Also, effective administrators & executives tend to multitask heavily, e.g. Robert Moses.

comment by Ozzie Gooen (oagr) · 2019-04-14T14:11:09.873Z · score: 11 (5 votes) · EA · GW

I feel pretty flattered to be even vaguely categorized with all of those folks, but I think it's pretty unlikely to work out that well (it almost always is). If I were pretty sure (>30%) I could make a company as large as Apple/Twitter/Tesla/YC, I'd be pretty happy to go that route.

I've chatted with hundreds of entrepreneurs and tried this, arguably, twice before. That said, if it later looks like going the more direct business route would be better for total expected value, I'd definitely be open to changing course.

comment by Ozzie Gooen (oagr) · 2019-04-14T14:18:26.878Z · score: 5 (3 votes) · EA · GW

Another thing to note: I'm optimizing on a time-horizon of around 10-30 years. Making a business first could easily take 6-20 years.

comment by Milan_Griffes · 2019-04-14T16:38:12.897Z · score: 3 (2 votes) · EA · GW
I'm optimizing on a time-horizon of around 10-30 years.

Does this flow from your AGI timeline estimate?

comment by Ozzie Gooen (oagr) · 2019-04-17T17:05:08.750Z · score: 6 (3 votes) · EA · GW

Basically, though it's a bit extra short when weighted for what we can change. Transformative narrow AI or other transformative technologies could also apply.

comment by Milan_Griffes · 2019-04-08T21:16:38.946Z · score: 10 (9 votes) · EA · GW

Could you also publish a list of runners-up (i.e. applicants who were closely considered but didn't make the cut)?

I think that'd be helpful as the community thinks through the decision-making process here.

comment by Habryka · 2019-04-08T21:23:17.788Z · score: 26 (15 votes) · EA · GW

I currently don't feel comfortable publishing who applied and did not receive a grant, without first checking in with the applicants. I can imagine that in future rounds there would be some checkbox that applicants can check to indicate that they feel comfortable with their application being shared publicly even if they do not receive a grant.

comment by Milan_Griffes · 2019-04-08T23:13:26.151Z · score: 4 (3 votes) · EA · GW

Got it.

Do you plan to check with the applicants from this round? Seems quick to do, and could surface a lot of helpful information.

comment by Habryka · 2019-04-08T23:27:04.082Z · score: 7 (4 votes) · EA · GW

I have told all applicants that I would be interested in giving public feedback on their application, and will do so if they comment on this thread.

comment by Milan_Griffes · 2019-04-09T00:14:06.869Z · score: 4 (3 votes) · EA · GW

Huh, I submitted two applications but didn't see your note re: public feedback. Perhaps you missed me?

comment by Habryka · 2019-04-09T00:19:53.787Z · score: 7 (5 votes) · EA · GW

I sent you a different email which indicated that I was already planning on sending you feedback directly within the next two weeks. The email which will include that feedback will then also include a request to share it publicly.

There was a small group of people (~7) for whom I had a sense that direct feedback would be particularly valuable, and you were part of that group. I sent them a different email indicating that I would give them additional feedback in any case; it was difficult to also fit in a sentence encouraging them to ask for feedback publicly, since I had already told them I would send feedback directly.

comment by Milan_Griffes · 2019-04-09T00:20:46.291Z · score: 3 (2 votes) · EA · GW

Got it, thanks.

comment by Cullen_OKeefe · 2019-04-10T21:01:29.801Z · score: 5 (5 votes) · EA · GW

I'd like to also echo others' comments thanking the team for responding and engaging with questioning of these decisions.

A question I have as a consistent donor to the fund: under which circumstances, if any, would the team consider regranting to, e.g., the EA Meta Fund? Under some circumstances (e.g., very few good LTF-specific funding opportunities but many good meta/EA community funding opportunities), couldn't that fund do more good for the long-term future than projects more classically appropriate to the LTF Fund?* Or would you always consider meta causes as potential recipients of the LTF Fund, and therefore see no value in regranting, since the Meta Fund would not be in a better position than you to meet such requests?

I ask because, though I still think these grants have merit, I can also imagine a future in which donations to the Meta Fund would do more for the long-term future than donations to the LTF Fund. But I imagine the LTF Fund could be better positioned than I am to make that judgment, and I would prefer it to do so in my stead. If the LTF Fund would not consider regranting to the next-best fund, then I would have to scrutinize grants more closely to see which fund is creating more value for the long-term future, which defeats the purpose of the LTF Fund.

*The same might be said of the other Funds too, but Meta seems like the next best for the LTF specifically IMO.

comment by Milan_Griffes · 2019-04-09T01:31:08.994Z · score: 5 (3 votes) · EA · GW

Did the EA Hotel apply?

If so, are they open to the reasoning about why they didn't get a grant being made public?

comment by Habryka · 2019-04-09T02:34:32.663Z · score: 20 (8 votes) · EA · GW

I don't feel comfortable disclosing who has applied and who hasn't applied without the relevant person's permission.

comment by Greg_Colbourn · 2019-04-09T09:02:49.143Z · score: 13 (9 votes) · EA · GW

We applied. Judging by the email I received, I think we are also part of the small group of ~7 mentioned here [EA · GW]. Awaiting the follow up email.

comment by anoneaagain · 2019-04-08T21:43:51.803Z · score: -12 (38 votes) · EA · GW

$28,000 to print hardback copies of fanfiction. $20,000 to someone who was feeling "burnt out" so they can learn to ride a bike (an actual measure of success from an adult in a grant application!) and be unemployed. $30,000 to someone's friend on the basis that they are good at "Facilitating conversations". $39,000 to make unsuccessful youtube videos. And these are "a strong set of grants" according to the top upvoted post. Wow.

comment by Davis_Kingsley · 2019-04-10T17:27:43.154Z · score: 29 (22 votes) · EA · GW

I think this comment, while quite rude, does get at something valuable. There's an argument that goes "hmm, the outside view says this is absurd, we should be really sure of our inside view before proceeding" and I think that's sometimes a bit of a neglected perspective in rationalist/EA spaces.

I happen to know that the inside view on HPMoR bringing people into the community is very strong, and that the inside view on Eli Tyre doing good and important work is also very strong. I'm less familiar with the details behind the other grants that anoneaagain highlighted, but I do think that being aware and recognizing the... unorthodoxy of these proposals is important, even if the inside view does end up overriding that.

comment by Habryka · 2019-04-10T18:45:51.497Z · score: 14 (10 votes) · EA · GW

I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think weirdness is overall a pretty bad predictor of impact, both in the positive and negative direction. I think it's a good emotion to pay attention to, because often you can learn valuable things from it, but I think it only sometimes tends to give rise to real arguments in favor or against an idea.

It is also very susceptible to framing effects. The comment above says "$39,000 to make unsuccessful youtube videos". That sure sounds naive and weird, but the whole argument relies on the word "unsuccessful" which is a pure framing device and fully unsubstantiated.

And, even though I think weirdness is only a mediocre predictor of impact, I am quite confident that the degree to which a grant or a grantee is perceived as intuitively weird by broad societal standards is still by far the biggest predictor of whether your project can receive a grant from any major EA granting body (I don't think this is necessarily the fault of the granting bodies, but is instead a result of a variety of complicated social incentives that force their hand most of the time).

I think this has an incredibly negative effect on the ability of the Effective Altruism community to make progress on any of the big problems we care about, and I really don't think we want to push further in that direction.

I think you want to pay attention to whether you perceive something as weird, but I don't think that feeling should be among your top considerations when evaluating an idea or project, and I think right now it is usually the single biggest consideration in most discourse.

After chatting with you about this via PMs, I think you aren't necessarily making that mistake, since I think you do emphasize that there are many arguments that could convince you that something weird is still a good idea.

I think in particular it is important that "something being perceived as weird is definitely not sufficient reason to dismiss it as an effective intervention" to be common knowledge and part of public discourse. As well as "if someone is doing something that looks weird to me, without me having thought much about it or asked them much about their reasons for doing things, then that isn't super much evidence about what they are doing being a bad idea".

comment by Halffull · 2019-04-10T19:09:15.545Z · score: 13 (9 votes) · EA · GW
I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think there's two things going on here.

The first is that weirdness and outside view are often deeply correlated, although not the same thing. In many ways the feeling of weirdness is a Schelling fence. It protects people from sociopaths, cults, and other things that are bad ideas even when they can't quite articulate in words why.

I think you're right that the best interventions will often be weird, so in this case it's a Schelling fence that you have to ignore if you want to make any progress from an inside view... but it's still worth noting that the weirdness is there, and that it's good data.

The second thing going on is that many EA institutions seem to have adopted the neoliberal strategy of gaining high status, infiltrating academia, and using that to advance EA values. From this perspective, it's very important to avoid an aura of weirdness for the movement as a whole, even if any given individual weird intervention might have high impact. This is hard to talk about because being too loud about the strategy makes it less effective, which means that sometimes people have to say things like "outside view" when what they really mean is "you're threatening our long-term strategy, but we can't talk about it." Although obviously in this particular case the positive impact on this strategy outweighs the potential negative impact of the weirdness aura.

I feel comfortable stating this because it's a random EA forum post and I'm not in a position of power at an EA org, but were I in that position, I'd feel much less comfortable posting this.

comment by Khorton · 2019-04-08T21:49:34.281Z · score: 23 (18 votes) · EA · GW

Downvoted for an unnecessarily unkind tone.

comment by RobBensinger · 2019-04-10T20:03:09.925Z · score: 49 (22 votes) · EA · GW

The main thing that pinged me about anoneaagain's comment was that it's saying things that aren't true, and saying them in ways that aren't epistemically cooperative, more so than that it's merely unkind. If you're going to assert 'this person's youtube videos are unsuccessful', you should say what you mean by that and why you think it. If the thing you're responding to is a long, skimmable 75-page post, you should make sure your readers didn't miss the fact that the person you're alluding to is a Computerphile contributor whose videos there tend to get hundreds of thousands of views, and you should say something about why that's not relevant to your success metric (or to the broader goals LTFF should be focusing on).

Wink-and-nudge, connotation-based argument makes it hard to figure out what argument's being made, which makes it hard to have a back-and-forth. If we strip aside the connotation, it's harder to see what's laughable about ideas like "it can be useful to send people books" or "it can be useful to send people books that aren't textbooks, essay collections, or works of original fiction". Likewise, it doesn't seem silly to me for people with disabilities to work on EA projects, or to suggest that disability treatment could be relevant to some EA projects. But I have no idea where to go from there in responding to anoneaagain, because the comment's argument structure is hidden.

comment by Evan_Gaensbauer · 2019-04-17T07:05:54.197Z · score: 3 (2 votes) · EA · GW

If you don't mind me asking, what goal did you intend to accomplish with this comment?