Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T17:58:58.984Z · score: 6 (3 votes) · EA · GW

Yep, I saw that. I didn't actually intend to criticize your use of the quiz, sorry if it came across that way. I just gave it a try and figured I would contribute some data.

(This doesn't mean I agree with how 80k communicates information. I haven't kept up at all with 80k's writing, so I don't have any strong opinions either way here.)

Comment by habryka on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T04:05:10.288Z · score: 14 (6 votes) · EA · GW

I got them on basically every setting that remotely applied to me.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-18T03:41:55.038Z · score: 6 (3 votes) · EA · GW

I think sadly pretty low, based on my current model of everyone's time constraints, as well as CEA's logistical constraints.

Comment by habryka on EA Hotel fundraiser 4: concrete outputs after 10 months · 2019-04-17T22:39:34.826Z · score: 20 (6 votes) · EA · GW

(This is just my personal perspective and does not aim to reflect the opinions of anyone else on the LTF-Fund)

I am planning to send more feedback on this to the EA Hotel people.

I have actually broadly come around to the EA Hotel being a good idea. At the time we made the grant decision, however, there was a lot less evidence and there were far fewer writeups around, and it was those writeups by a variety of people that convinced me it is likely a good idea, with some caveats.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-15T23:18:21.550Z · score: 4 (2 votes) · EA · GW

Yeah, that's what I intended to say. "In the world where I come to the above opinion, I expect my crux will have been that whatever made CFAR historically work, is still working"

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-11T21:14:51.811Z · score: 3 (2 votes) · EA · GW

Will update to say "help facilitate". Thanks for the correction!

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T23:37:05.857Z · score: 4 (2 votes) · EA · GW

He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T22:09:01.648Z · score: 16 (8 votes) · EA · GW

I think that people should feel comfortable sharing their system-1 expressions, in a way that does not immediately imply judgement.

I am thinking of patterns like Nonviolent Communication, where you structure your observation in the following steps:

1. List a set of objective observations

2. Report your experience upon making those observations

3. Share your personal interpretations of those experiences and what they imply about your model of the world

4. Make the requests that follow from those models

I think it's fine to stop part-way through this process, but that it's generally a good idea to not skip any steps. So I think it's fine to just list observations, and it's fine to just list observations and then report how you feel about those things, as long as you clearly indicate that this is your experience and doesn't necessarily involve judgement. But it's a bad idea to immediately skip to the request/judgement step.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T19:27:44.868Z · score: 6 (4 votes) · EA · GW

I will get back to you, but it will probably be a few days. It seems fairer to first send feedback to the people I said I would send private feedback to, and then come back to the public feedback requests.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T19:04:46.084Z · score: 37 (10 votes) · EA · GW

I don't get compensated, though I also don't think compensation would make much of a difference for me or anyone else on the fund (except maybe Alex).

Everyone on the fund is basically dedicating all of their resources towards EA stuff, and is generally giving up most of their salary potential by working in EA. I don't think it would make much sense for us to get more money, given that we are already de-facto donating everything above a certain threshold (either literally in the case of the two Matts, or indirectly by taking a paycut and working in EA).

I think if people give more money to the fund because they come to trust the decisions of the fund more, then that seems like it would incentivize more things like this. Also if people bring up strong arguments against any of the reasoning I explained above, then that is a great win, since I care a lot about our fund distributions getting better.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T18:45:51.497Z · score: 14 (10 votes) · EA · GW

I think there is something going on in this comment that I wouldn't put in the category of "outside view". Instead I would put it in the category of "perceiving something as intuitively weird, and reacting to it".

I think weirdness is overall a pretty bad predictor of impact, in both the positive and the negative direction. I think it's a good emotion to pay attention to, because often you can learn valuable things from it, but I think it only sometimes gives rise to real arguments for or against an idea.

It is also very susceptible to framing effects. The comment above says "$39,000 to make unsuccessful youtube videos". That sure sounds naive and weird, but the whole argument relies on the word "unsuccessful" which is a pure framing device and fully unsubstantiated.

And, even though I think weirdness is only a mediocre predictor of impact, I am quite confident that the degree to which a grant or a grantee is perceived as intuitively weird by broad societal standards is still by far the biggest predictor of whether your project can receive a grant from any major EA granting body. (I don't think this is necessarily the fault of the granting bodies; it is instead a result of a variety of complicated social incentives that force their hand most of the time.)

I think this has an incredibly negative effect on the ability of the Effective Altruism community to make progress on any of the big problems we care about, and I really don't think we want to push further in that direction.

I think you want to pay attention to whether you perceive something as weird, but I don't think that feeling should be among your top considerations when evaluating an idea or project, and I think right now it is usually the single biggest consideration in most discourse.

After chatting with you about this via PMs, I think you aren't necessarily making that mistake, since I think you do emphasize that there are many arguments that could convince you that something weird is still a good idea.

I think it is particularly important for "something being perceived as weird is definitely not sufficient reason to dismiss it as an effective intervention" to be common knowledge and part of public discourse. The same goes for "if someone is doing something that looks weird to me, without me having thought much about it or asked them about their reasons for doing it, then that isn't much evidence that what they are doing is a bad idea".

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T03:30:56.274Z · score: 11 (7 votes) · EA · GW

The primary thing I expect him to do with this grant is to work together with John Salvatier on doing research on skill transfer between experts (which I am partially excited about because that's the kind of thing that I see a lot of world-scale model building and associated grant-making being bottlenecked on).

However, as I mentioned in the review, if he finds that he can't contribute to that as effectively as he thought, I want him to feel comfortable pursuing other research avenues. I don't currently have a short-list of what those would be, but would probably just talk with him about what research directions I would be excited about, if he decides to not collaborate with John. One of the research projects he suggested was related to studying historical social movements and some broader issues around societal coordination mechanisms that seemed decent.

I primarily know about the work he has so far produced with John Salvatier, and also know that he has demonstrated general competence in a variety of other projects, including making money managing a small independent hedge fund, running a research project for the Democracy Defense Fund, doing some research at Brown University, and scoring well in some forecasting tournaments.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T03:00:01.423Z · score: 6 (3 votes) · EA · GW

Hmm, I guess it depends a bit on how you view this.

If you model this in terms of "total financial resources going to EA-aligned people", then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.

If you want to model it as "money controlled directly by EA institutions" then it's closer to your number.

I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T02:51:12.593Z · score: 4 (2 votes) · EA · GW

Ah, yes. The second one. Will update.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T02:50:36.200Z · score: 16 (11 votes) · EA · GW

Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it's unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn't, as I have argued below).

The outcome might be that some people start disliking HPMoR, but that doesn't seem super bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR benefits a lot more on net from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.

I have some vague feeling that there might be some more weird downstream effects of this, but I don't think I have any concrete models of how they might happen, and would be interested in hearing more of people's concerns.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T01:29:58.286Z · score: 10 (4 votes) · EA · GW

Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don't think I understand what kind of reputational risks you are worried about.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T00:14:27.588Z · score: 11 (6 votes) · EA · GW

Here is my rough fermi:

My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person's time going to overhead, communications, technology (the EA Funds platform) and management.

Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made less than 100 grants a year, so a total of about $2k - $3k per grant in overhead seems reasonable.

To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
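The fermi above can be written out as a short sketch. All of the inputs are the rough guesses from this comment (counterfactual salary, tax rate, staff count, grant volume), not actual CEA figures:

```python
# Sketch of the overhead fermi above. Every input is a rough guess
# from the comment, not an actual CEA figure.
counterfactual_salary = 150_000  # estimated counterfactual earnings per person
cea_salary = 60_000              # salary actually paid by CEA
tax_rate = 0.30                  # assumed tax rate on the CEA salary
staff = 1.5                      # ~1 full-time person + 0.5 in overhead/management

# Total yearly loss of money going to EA-aligned people
total_overhead = (counterfactual_salary + tax_rate * cea_salary) * staff  # $252k

grants_per_year = 100  # upper bound; EA Funds makes fewer grants than this
avg_overhead_per_grant = total_overhead / grants_per_year  # ~$2.5k per grant

# Marginal overhead guessed at ~50% of average overhead
marginal_overhead = 0.5 * avg_overhead_per_grant  # ~$1.3k per grant
```

Since the grant count is an upper bound, the $2k-$3k per-grant figure is if anything an underestimate of average overhead.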

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T00:08:23.949Z · score: 15 (4 votes) · EA · GW

Sorry for the delay, others seem to have given a lot of good responses in the meantime, but here is my current summary of those concerns:

1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

By word count, the HPMoR writeup is (I think) among the three longest writeups that I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify that.

The core arguments that I provided in the writeup above seem sufficiently strong to me. They won't necessarily convince a completely independent observer, but for someone with context about community building and general work done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.

I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, while not constraining themselves to only account for evidence that is easily communicable to other people. They then should also invest significant resources into communicating whatever can be communicated about their reasons and intuitions and actively seek out counterarguments and additional evidence that would change their mind.

2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:

  • I don't think the aim of this grant should be "to recruit IMO and EGMO winners into the EA community". I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term-future, teach them about basic thinking tools and give them opportunities to talk to others who have similar interests.
    • I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read the book and found it interesting on its own merit, and would be glad to receive it as a gift. I don't think any of the other books you proposed would be received in the same way, and I think they are much more likely to be received as advocacy material that is trying to recruit the students into some kind of in-group.
    • Jan's comment summarized the concerns I have here reasonably well.
  • As Misha said, this grant is possible because the IMO and EGMO organizers are excited about giving out copies of HPMoR as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that did not go through the organizers of these events).
  • As Ben Pace said, I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed and the absolutely best people tend to have a massively outsized impact with their contributions. Improving the reasoning and judgement ability of some of the best people on the planet strikes me as quite valuable.
3. I'm not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?

Misha responded to this. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn't be any books given out at IMO or EGMO. All the organizers did was ask whether they would be able to give out prizes, conditional on them finding someone to sponsor them. I don't see any problems with this.

4. Not necessarily that risky funds shouldn't be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.

My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the section "Why might you choose to not donate to this fund?"

First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.

This is the first and top reason we list why someone might not want to donate to this fund. This doesn't necessarily directly translate into risky grants, but I think does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track-records.

From a communication perspective, one of the top reasons why I invested so much time into this grant writeup is to be transparent about what kinds of interventions we are likely to fund, and to help donors decide whether they want to donate to this fund. I will, at least, continue advocating for early-stage and potentially weird-looking grants for as long as I am part of the LTF board, and donors should know that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund-team that we update our fund-page with that wording.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T23:31:16.690Z · score: 16 (10 votes) · EA · GW

Seems good.

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

I am overall still quite positive on CFAR. I have significant concerns, but the total impact CFAR had over the course of its existence strikes me as very large and easily worth the resources it has taken up so far.

I don't think it would be the correct choice for CFAR to have to take irreversible action right now just because they correctly decided to not run a fall fundraiser, and I still assign significant probability to CFAR actually being on the right track to continue having a large impact. My model here is mostly that whatever allowed CFAR to have a historical impact did not break, and so will continue producing value of the same type.

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

I considered this for quite a while, but ultimately decided against it. I think grantmakers should generally be very hesitant to make earmarked or conditional grants to organizations, without knowing the way that organization operates in close detail. Some things that might seem easy to change from the outside often turn out to be really hard to change for good reasons, and this also has the potential to create a kind of adversarial relationship where the organization is incentivized to do the minimum amount of effort necessary to meet the conditions of the grant, which I think tends to make transparency a lot harder.

Overall, I much more strongly prefer to recommend unconditional grants with concrete suggestions for what changes would cause future unconditional grants to be made to the organization, while communicating clearly what kind of long-term performance metrics or considerations would cause me to change my mind.

I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, generally get a better sense of how CFAR operates and think about the big-picture effects that CFAR has on the long-term future and global catastrophic risk. I think I am likely to then either:

  • make recommendations for a set of changes with conditional funding,
  • decide that CFAR does not require further funding from the LTF,
  • or be convinced that CFAR's current plans make sense and that they should have sufficient resources to execute those plans.
Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T20:34:02.898Z · score: 65 (20 votes) · EA · GW

Here is a rough summary of the process, it's hard to explain spreadsheets in words so this might end up sounding a bit confusing:

  • We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer) in which they would be encouraged to assign a number from -5 to +5 for each application
  • Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the grant was a good or a bad idea
  • We then had a number of video-chat meetings in which we tried to go through all the grants that at least one person thought were a good idea, and had pretty extensive discussions about those grants. During those meetings we also agreed on next actions for follow-ups (scheduling meetings with some of the potential grantees, reaching out to references, etc.), the results of which we would then discuss at the next all-hands meeting
  • Interspersed with the all-hands meetings I also had a lot of 1-on-1 meetings (with both other fund-members and grantees) in which I worked in detail through some of the grants with the other person, and hashed out deeper disagreements we had about some of the grants (like whether certain causes and approaches are likely to work at all, how much we should make grants to individuals, etc.)
  • As a result of these meetings there was significant updating of the votes everyone had on each grant, with almost every grant we made having at least two relatively strong supporters and having a total score of above 3 in aggregate votes

However, some fund members weren't super happy with this process, and I also think it encouraged too much consensus-based decision making: many of the grants with the highest vote scores were ones that everyone thought were vaguely a good idea, but that nobody was strongly excited about.

We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly on the discretion of the individual fund member, and we would add up grant allocations from the 45% budget until we ran out of our allocated budget.

Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.
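A minimal sketch of the veto-and-cutoff mechanic in the second allocation method, as I understand it from the description above. All names and data structures here are my own illustration, not the fund's actual tooling, and I collapse the per-member 15% and 45% budgets into two shared lists for simplicity:

```python
# Illustrative sketch of the second allocation process described above.
# Assumed structure (my guess, not the fund's actual spreadsheet logic):
# each pick is a (member, grantee, amount) tuple; discretionary picks
# (the "15%" tranche) go through absent a veto, and pooled picks (the
# "45%" tranche) are added up until the total budget runs out.

def allocate(total_budget, discretionary_picks, pooled_picks, vetoes):
    """Return (funded grantees, total spent) under the sketched rules."""
    funded = []
    spent = 0.0

    # Discretionary tranche: individual discretion, subject only to vetoes
    for member, grantee, amount in discretionary_picks:
        if grantee not in vetoes and spent + amount <= total_budget:
            funded.append(grantee)
            spent += amount

    # Pooled tranche: add allocations until the budget is exhausted
    for member, grantee, amount in pooled_picks:
        if grantee not in vetoes and grantee not in funded and spent + amount <= total_budget:
            funded.append(grantee)
            spent += amount

    return funded, spent
```

For example, with a budget of 100, a discretionary pick of 10 and pooled picks of 40 and 60, the first two picks are funded and the third misses the cutoff. The real process additionally tracked each member's own 15%/45% budget fractions; the sketch only shows the veto-plus-cutoff mechanic.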

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T20:14:54.305Z · score: 10 (3 votes) · EA · GW

I agree. Though I think I expect the ratio of funds-distributed/staff to roughly stay the same, at least for a bit, and probably go up a bit.

I think older and larger organizations will have smaller funds-distributed/staff ratios, but I think that's mostly because coordinating people is hard and the marginal productivity of a hire goes down a lot after the initial founders, so you need to hire a lot more people to produce the same quality of output.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T20:06:26.680Z · score: 11 (5 votes) · EA · GW

I do agree with that, and this also establishes a canonical way of breaking the books up into parts. @Misha: Do you think that's an option?

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T20:03:08.622Z · score: 33 (11 votes) · EA · GW

I strongly agree that I would like there to be more people who have the competencies and resources necessary to assess grants like this. With the Open Philanthropy Project having access to ~10 billion dollars, the case for needing more people with that expertise is pretty clear, and my current sense is that there is a broad consensus in EA that finding more people for those roles is among the top priorities, if not the top priority.

I think giving less money to EA Funds would not clearly improve this situation from this perspective at all, since most other granting bodies that exist in the EA space have an even higher (funds-distributed)/staff ratio than this.

The Open Philanthropy Project has about 15-20 people assessing grants, gives out at least $100 million a year, and likely aims to give closer to $1 billion a year given their reserves.

BERI has maybe 2 people working full-time on grant assessment, and my current guess is that they give out about $5 million in grants a year.

My guess is that GiveWell also has about 10 staff assessing grants full-time, making about $20 million in grants a year.
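Put together, the rough figures above imply the following funds-per-staff ratios. The staff counts and grant totals are this comment's estimates, not official data, and I take the Open Phil staff count as the midpoint of the 15-20 range:

```python
# Rough funds-distributed-per-staff ratios implied by the figures above.
# All numbers are the comment's estimates, not official data.
orgs = {
    "Open Phil": (100e6, 17.5),  # midpoint of the 15-20 staff range
    "BERI": (5e6, 2),
    "GiveWell": (20e6, 10),
}
ratios = {name: granted / staff for name, (granted, staff) in orgs.items()}
# Open Phil ~ $5.7M granted per staff member, BERI $2.5M, GiveWell $2M
```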

At the current level of team-member involvement, I actually think that the LTF-Fund team is able to make more comprehensive grant assessments per dollar granted than almost any other granting body in the space. This is because there is a significant judgement component to evaluating grants, which allows the average LTF-Fund team member to act with higher leverage, and because anyone involved in the LTF landscape already has to invest the time to build models and keep up to speed with recent developments.

I do think that having more people who can assess grants and help distribute resources like this is key, and think that investing in training and recruiting those people should be one of the top priorities for the community at large.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T19:16:04.489Z · score: 4 (2 votes) · EA · GW

Hmm, since I was relatively uninvolved with the Ought grant I have some difficulty giving a concrete answer to that. From an all things considered view (given that Matt was interested in funding it) I think both grants are likely worth funding, and I expect the two organizations to coordinate in a way to mostly avoid unnecessary competition and duplicate effort.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T19:10:23.105Z · score: 9 (3 votes) · EA · GW
Why a large, unrestricted grant to CFAR, given these concerns? Would a smaller grant catalyze changes such that the organization becomes cash-flow positive?

I have two interpretations of what your potential concerns here might be, so might be good to clarify first. Which of these two interpretations is closer to what you mean?

1. "Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future"

2. "Why not give CFAR a grant that is conditional on some kind of change in the organization?"

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T17:51:47.000Z · score: 28 (9 votes) · EA · GW
Evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

At least for me this doesn't really resonate with how I am thinking about grantmaking. The broader EA/Rationality/LTF community is in significant chunks a professional network, and so I've worked with a lot of people on a lot of projects over the years. I've discussed cause prioritization questions on the EA Forum, worked with many people at CEA, tried to develop the art of human rationality on LessWrong, worked with people at CFAR, discussed many important big picture questions with people at FHI, etc.

The vast majority of my interactions with people do not come from parties, but come from settings where people are trying to solve some kind of problem, and seeing how others solve that problem is significant evidence about whether they can solve similar problems.

It's not that I hang out with lots of people at parties, make lots of friends and then that is my primary source for evaluating grant candidates. I basically don't really go to any parties (I actually tend to find them emotionally exhausting, and only go to parties if I have some concrete goal to achieve at one). Instead I work with a lot of people and try to solve problems with them and then that obviously gives me significant evidence about who is good at solving what kinds of problems.

I do find grant interviews more exhausting than other kinds of work, but I think that has to do with the directly adversarial setting: the applicant is trying their best to seem competent and good, and I am trying my best to get an accurate judgement of their competence. I think that dynamic usually makes that kind of interview a much worse source of evidence about someone's competence than having worked with them on some problem for a few hours (which is also why work-tests tend to be much better predictors of future job performance than interview performance).

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T17:39:11.993Z · score: 45 (11 votes) · EA · GW
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant.

I personally have never interacted directly with the grantees of about 6 of the 14 grants that I have written up, so it is not really about knowing the grantmakers in person. What does matter a lot are the second-degree connections I have to those people (or that someone on the team had, for the large majority of applications), as well as whether the grantees had participated in some of the public discussions we've had over the past years and demonstrated good judgement (e.g. EA Forum & LessWrong discussions).

I don't think you should model the situation as relying on knowing a grantmaker in-person, but you should think that testimonials and referrals from people that the grantmakers trust matter a good amount. That trust can be built via a variety of indirect ways, some of which are about knowing them in person and having a trust relationship that has been built via personal contact, but a lot of the time that trust comes from the connecting person having made a variety of publicly visible good judgements.

As an example, one applicant came with a referral from Tyler Cowen. I have only interacted directly with Tyler once in an email chain around EA Global 2015, but he has written up a lot of valuable thoughts online and seems to have generally demonstrated broadly good judgement (including in the granting domain with his Emergent Ventures project). This made his endorsement factor positively into my assessment for that application. (Though because I don't know Tyler that well, I wasn't sure how easily he would give out referrals like this, which reduced the weight that referral had in my mind)

The word interact above is meant in a very broad way, which includes second degree social connections as well as online interactions and observing the grantee to have demonstrated good judgement in some public setting. In the absence of any of that, it's often very hard to get a good sense of the competence of an applicant.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T17:19:34.114Z · score: 20 (9 votes) · EA · GW
This also strikes me as unfortunate and may lead to inefficiently inflated grant requests in the future, though I guess I can understand why the logistics behind this may require it. It feels intuitively weird though that it is easier to get $10K than it is to get $1K.

A rough fermi estimate I made a few days ago suggests that each grant we make comes with about $2,000 of overhead from CEA, in terms of labor cost plus some other risks (this is my own number, not CEA's estimate). So given that overhead, it makes some amount of sense that it's hard to get $1k grants.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T17:12:51.391Z · score: 13 (7 votes) · EA · GW

The thing that makes me more optimistic here is that the organizers of IMO and EGMO themselves have read HPMoR, and that the books are (as far as I understand it) handed out as part of the prize-package of IMO and EGMO.

I think this makes it more natural to award a large significant-seeming prize, and also comes with a strong encouragement to actually give the books a try.

My model is that only awarding the first book would feel a lot less significant. My current model of human psychology also suggests that while some people will feel intimidated by the length of the book, the combined effect of being given a much smaller-seeming gift, plus the inconvenience of having to send an email, fill out a form, or go to a website to continue reading, is larger than the effect of the size of the book being overwhelming.

The other thing that having full physical copies enables is book-lending. I printed a full copy of HPMoR a few years ago and have lent it out to at least 5 people, maybe one of whom would have read the book if I had just sent them a link or lent them the first few chapters (I have given out the small booklets and generally had less success with that than with lending out my whole printed book series).

However, I am not super confident of this, and the tradeoff strikes me as relatively close. Yesterday I also had a longer conversation about this on the EA-Corner Discord, and after chatting with me for a while a lot of people seemed to think that giving out the whole book was the better idea, though it did take a while, which is some evidence of inferential distance.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T17:01:56.624Z · score: 35 (12 votes) · EA · GW
but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know

To be clear, we did invest time into vetting applications from people we didn't know, we just obviously have limits to how much time we can invest. I expect this will be a limiting factor for any grant body.

My guess is that if you don't have any information besides the application itself, and the plan requires a significant level of skill (as the vast majority of grants do), you have to invest an additional 5-10 hours of effort into reaching out to applicants, performing interviews, getting testimonials, analyzing their case, etc. If you don't do this, I expect the average grant to be net negative.

Our review period lasted about one month. At 100 applications, assuming that you create an anonymous review process, this would have resulted in around 250-500 hours of additional work, which would have made it a full-time job for 2-3 of the 5 people on the grant board, on top of the ~80 hours of overhead this grant round already required from the board. You likely would have filtered out about 50 applications at an earlier stage, so you can maybe cut that in half, resulting in ~2 full-time staff for that review period.
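For what it's worth, the workload arithmetic above can be sketched as a quick fermi calculation. The inputs are this comment's rough figures; the reading that only the ~50 applications surviving an early filter need the full 5-10 hours of vetting is my own assumption, as is the ~160-hour person-month:

```python
# Rough fermi check of the review-workload estimate described above.
# All inputs are the comment's approximate figures, not measured data.

applications = 100
filtered_early = 50                  # applications screened out at an early stage
hours_per_app = (5, 10)              # extra vetting hours per remaining applicant
existing_overhead_hours = 80         # overhead the round already required
fulltime_month_hours = 160           # ~1 person-month during the 1-month review

deep_vetted = applications - filtered_early
extra_hours = (deep_vetted * hours_per_app[0],
               deep_vetted * hours_per_app[1])           # (250, 500)

total_hours = (extra_hours[0] + existing_overhead_hours,
               extra_hours[1] + existing_overhead_hours)
staff_needed = (total_hours[0] / fulltime_month_hours,
                total_hours[1] / fulltime_month_hours)   # roughly 2 to 3.6 people
print(extra_hours, staff_needed)
```

Under these assumptions the round lands at roughly 2-3 people's full-time attention for the month, which matches the 2-3-board-members figure in the comment.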

I don't think that level of time-investment is possible for the EA Funds, and if you make it a requirement for being on an EA Fund board, the quality of your grant decisions will go down drastically because there are very few people who have a track record of good judgement in this domain, who are not also holding other full-time jobs. That level of commitment would not be compatible with holding another full-time job, especially not in a leadership position.

I do think that at our current grant volume we should invest more resources into building infrastructure for vetting grant applications. It might make sense for us to hire a part-time staffer to help with evaluations and do background research as well as interviews for us. It's currently unclear to me how such a person would be managed and whether their salary would be worth the benefit, but it seems plausibly the correct choice.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T16:23:06.148Z · score: 36 (19 votes) · EA · GW
To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network.

That is incorrect. The current grant team was actually explicitly chosen on the basis of having non-overlapping networks. Besides me, nobody lives in the Bay Area (at least full-time). Here is where I think everyone is living:

  • Matt Fallshaw: Australia (but also travels a lot)
  • Helen Toner: Georgetown (I think)
  • Alex Zhu: No current permanent living location, travels a lot, might live in Boulder starting a few weeks from now
  • Matt Wage: New York

I was also partially chosen because I used to live in Europe and still have pretty strong connections to a lot of European communities (plus my work on online communities makes my network less geographically centralized).

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T16:18:31.584Z · score: 9 (3 votes) · EA · GW

I don't think you want to go below three people for a granting body, to make sure that you can catch all the potential downsides of a grant. My guess is that with 6 or more people it would be better to split into two independent grant teams.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T02:35:03.594Z · score: 3 (2 votes) · EA · GW

Don't know. My guess is he will probably read it, but I don't know whether he will have the time to respond to comments.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T02:34:32.663Z · score: 20 (8 votes) · EA · GW

I don't feel comfortable disclosing who has applied and who hasn't applied without the relevant person's permission.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T02:33:46.422Z · score: 18 (8 votes) · EA · GW

I agree that I might want to write a top-level post about this at some point. Here is a super rough version of my current model:

To do things as difficult as what EAs are trying to do, you usually need someone to throw basically everything they have behind it, similar to my model of early-stage startups. At the same time, your success rate won't be super high, because the problems we are trying to solve are often of massive scale, often lack concrete feedback loops, and don't have many proven solutions.

And even if you succeed to some degree, it's unlikely that you will be rewarded with as much status or as many resources as you would get from building a successful startup. My model is that EA org success tends to look weird and not really translate into wealth or status in the broader world. This puts a large cognitive strain on you, in particular given the tendency for high scrupulosity in the community, by introducing cognitive dissonance between your personal benefit and your moral ideals.

This is combined with an environment that is starved of management capacity, and so has very little room to give people feedback on their plans and actions.

Overall I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long-run I don't expect that we can do much better than startup founders, at least for a lot of the people who join early-stage organizations.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T02:23:06.911Z · score: 11 (6 votes) · EA · GW

Ozzie was the main developer behind the initial version of Mosaic, so I do expect some of the overlap to be Ozzie's influence.

I don't think I want Ozzie to commit at this point to being a for-profit entity with equity to be given out. It might turn out that the technology he is developing is best built on a non-profit basis. It also seems legally quite confusing/difficult to have the LTF-Fund own a stake in someone else's organization (I don't even know whether that's compatible with being a 501(c)(3)).

I expect Ozzie is better placed to talk about his own go-to-market strategy than I am to guess at his intentions. I obviously have my own models of what I expect Ozzie to do, but in this case it seems better for Ozzie to answer that question.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T01:40:14.017Z · score: 4 (3 votes) · EA · GW

(Will reply to this if you make it a top-level comment, like the others)

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T00:30:00.337Z · score: 3 (2 votes) · EA · GW

(Top-level seems better, but will reply here anyway)

The Ought grant was one of the grants I was least involved in, so I can't speak super much to the motivation behind that one. I think you will want to get Matt Wage's thoughts on that.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T00:19:53.787Z · score: 7 (5 votes) · EA · GW

I sent you a different email which indicated that I was already planning on sending you feedback directly within the next two weeks. The email which will include that feedback will then also include a request to share it publicly.

There was a small group of people (~7) for whom I had a sense that direct feedback would be particularly valuable, and you were part of that group, so I sent them a different email indicating that I was going to give them additional feedback in any case. It was difficult to fit in a sentence that also encouraged them to ask for feedback publicly, since I had already told them I would send them feedback.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T00:10:26.239Z · score: 21 (8 votes) · EA · GW

I do think that it would help independently of that by allowing more focused discussion on individual issues.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T00:01:37.791Z · score: 23 (14 votes) · EA · GW

I agree with this, but I also think that people should feel free to express any system-1-level reactions they have to these grants. In my experience it can often be quite hard to formalize a critique into a concrete, operationalized set of policy changes, even if the critique itself is good and valid, and I don't want to force all commenters to fully formalize their beliefs before they can express them here.

I do think the end goal of the conversation should be a set of policies that the LTF-Fund can implement.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:56:53.902Z · score: 15 (7 votes) · EA · GW

I would be interested in other people creating new top-level comments with individual concerns or questions. I think I have difficulty responding to this top-level comment, and expect that other people stating their questions independently will overall result in better discussion.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:51:01.515Z · score: 10 (5 votes) · EA · GW

I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.

I think a single granting body is likely to end up missing a large number of good opportunities, and general intuitions around hits-based giving make me think that encouraging independence here is better than assigning each domain exclusively to one granting body (this does rely on those granting bodies being able to communicate clearly around downside risk, which I think we can achieve).

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:46:43.320Z · score: 34 (15 votes) · EA · GW

Hmm, I don't think I am super sure what a good answer to this would look like. Here are some common reasons for why I think a grant was not a good idea to recommend:

  • The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why the grants above skew towards people the granting team had some past interaction with)
  • The mainline outcome of the grant was good, but there were potential negative consequences that the applicant did not consider or properly account for, and I did not feel like I could cause the applicant to understand the downside risk they have to account for without investing significant effort and time
  • The grant was only tenuously EA-related and the application seemed to have been submitted to a lot of funders relatively indiscriminately
  • I was unable to understand the goals, implementation or other details of the grant
  • I simply expected the proposed plan to not work, for a large variety of reasons. Here are some of the most frequent:
    • The grant was trying to achieve something highly ambitious while seeming to allocate very little resources to achieving that outcome
    • The grantee had a track record of work that I did not consider to be of sufficient quality to achieve what they set out to do
  • In some cases the applicant asked for less than our minimum grant amount of $10,000

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:35:53.955Z · score: 3 (2 votes) · EA · GW

I was not informed of any earmarking, so I don't think there were any stipulations around that donation.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:32:40.076Z · score: 8 (5 votes) · EA · GW

I made a short comment here about this, though obviously there is more to be said on this topic.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:27:04.082Z · score: 7 (4 votes) · EA · GW

I have told all applicants that I would be interested in giving public feedback on their application, and will do so if they comment on this thread.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T23:26:20.852Z · score: 7 (2 votes) · EA · GW

Agree with this.

I do think there is value in showing them that there exists a community that cares a lot about the long-term-future, and do think there is some value in them collaborating with that community instead of going off and doing their own thing, but the first priority should be to help them think better and about the long-term at all.

I think none of the other proposed books achieve this very well.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T21:37:19.226Z · score: 9 (3 votes) · EA · GW

(Responding to the second point about which fund is a better fit for this, will respond to the first point separately)

I am broadly confused about how to deal with the "which fund is a better fit?" question. Since it's hard to influence the long-term future directly, I expect a lot of good interventions to go via the path of first introducing people to the community, building institutions that can improve our decision-making, and generally building positive feedback loops and resources that we can deploy as soon as concrete opportunities show up.

My current guess is that we should check in with the Meta Fund about their grants to make sure that we don't make overlapping grants and that we communicate any concerns, but that as soon as there is an application we think is worthwhile from the perspective of the long-term future and that the Meta Fund is not covering, we should feel comfortable funding it, independently of whether it looks a bit like EA-Meta. But I am open to changing my mind on this.

Comment by habryka on Long Term Future Fund: April 2019 grant decisions · 2019-04-08T21:23:17.788Z · score: 26 (15 votes) · EA · GW

I currently don't feel comfortable publishing who applied and did not receive a grant without first checking in with the applicants. I can imagine that in future rounds there would be some checkbox that applicants can check to indicate that they feel comfortable with their application being shared publicly even if they do not receive a grant.

Long Term Future Fund: April 2019 grant decisions

2019-04-08T01:00:10.890Z · score: 131 (61 votes)

Major Donation: Long Term Future Fund Application Extended 1 Week

2019-02-16T23:28:45.666Z · score: 41 (19 votes)

EA Funds: Long-Term Future fund is open to applications until Feb. 7th

2019-01-17T20:25:29.163Z · score: 19 (13 votes)

Long Term Future Fund: November grant decisions

2018-12-02T00:26:50.849Z · score: 35 (29 votes)

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

2018-11-21T03:41:38.850Z · score: 21 (11 votes)