Posts

Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z · score: 40 (19 votes)
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z · score: 16 (4 votes)
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z · score: 8 (2 votes)
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z · score: 28 (11 votes)
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z · score: 46 (20 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z · score: 13 (3 votes)
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z · score: 79 (35 votes)
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z · score: 11 (6 votes)
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z · score: 29 (10 votes)
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z · score: 52 (20 votes)
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z · score: 60 (23 votes)
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z · score: 143 (75 votes)
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z · score: 41 (19 votes)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z · score: 19 (13 votes)
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z · score: 35 (29 votes)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z · score: 21 (11 votes)

Comments

Comment by habryka on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T18:49:45.889Z · score: 7 (4 votes) · EA · GW

This also seems right to me. We roughly try to distribute all the money we have in a given year (with some flexibility between rounds), and aren't planning to hold large reserves. So based on our decisions alone, we couldn't ramp up our grantmaking just because better opportunities arise.

However, I can imagine donations to us increasing if better opportunities arise, so I do expect there to be at least some effect.

Comment by habryka on Linch's Shortform · 2020-02-25T04:40:36.160Z · score: 6 (3 votes) · EA · GW
11. I gave significant amounts of money to the Long-Term Future Fund (which funded Catalyst), so I'm glad Catalyst turned out well. It's really hard to forecast the counterfactual success of long-reach plans like this one, but naively it looks like this seems like the right approach to help build out the pipeline for biosecurity.

I am glad to hear that! I sadly didn't end up having the time to go, but I've been excited about the project for a while.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-14T22:42:45.194Z · score: 6 (3 votes) · EA · GW
though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics and on average get results back from them that are better than what they could have done internally. The differential cryptanalysis situation is a key example of that. IBM could instead have been contracted by some other group and developed the technology for them, which means that the NSA had basically no lead in cryptography over IBM.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-14T18:30:05.134Z · score: 5 (3 votes) · EA · GW

Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.

The elliptic curve one doesn't strike me at all as a case where the NSA had a big lead. You are probably referring to this backdoor:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

This backdoor was basically immediately identified by security researchers the year it was embedded in the standard. As you can read in the Wikipedia article:

Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.

I can't really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES that were used for defense against the differential cryptanalysis technique. Which I do agree is probably the single strongest example we have of an NSA lead, though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

To be clear, a 30 (!) year lead seems absolutely impossible to me. A 3 year broad lead seems maybe plausible to me, with a 10 year lead in some very narrow specific subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subset of the field that they are investing heavily in).

I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression that I've gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they primarily installed not by technological advantage but by political maneuvering, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-13T22:37:25.331Z · score: 8 (5 votes) · EA · GW
past leaks and cases of "catching up" by public researchers that they are roughly 30 years ahead of publicly disclosed cryptography research

I have never heard this and would be extremely surprised by this. Like, willing to take a 15:1 bet on this, at least. Probably more.

Do you have a source for this?

Comment by habryka on How do you feel about the main EA facebook group? · 2020-02-13T00:45:55.714Z · score: 4 (2 votes) · EA · GW

Do you have the same feeling about comments on the EA Forum?

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T17:51:38.967Z · score: 2 (1 votes) · EA · GW
Separately, you mentioned OpenPhil's policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.

This sounds a bit weird to me, given that the above is erring quite far in the direction of disclosure.

The specific dimension of the Open Phil policy that I think has strong arguments going for it is to be hesitant with recusals. I really want to continue to be very open about our conflicts of interest, and wouldn't currently advocate for emulating Open Phil's policy on the disclosure dimension.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T17:48:58.025Z · score: 4 (2 votes) · EA · GW
I didn't see any discussion of recusal because the fund member is employed or receives funds from the potential grantee?

Yes, that should be covered by the CEA fund policy we are extending. Here are the relevant sections:

Own organization: any organization that a team member
- is currently employed by
- volunteers for
- was employed by at any time in the last 12 months
- reasonably expects to become employed by in the foreseeable future
- does not work for, but that employs a close relative or intimate partner
- is on the board of, or otherwise plays a substantially similar advisory role for
- has a substantial financial interest in

And:

- A team member may not propose a grant to their own organization
- A team member must recuse themselves from making decisions on grants to their own organizations (except where they advocate against granting to their own organization)
- A team member must recuse themselves from advocating for their own organization if another team member has proposed such a grant
- A team member may provide relevant information about their own organization in a neutral way (typically in response to questions from the team’s other members).

Which covers basically that whole space.

Note that that policy is still in draft form and not yet fully approved (and there are still some incomplete sentences in it), so we might want to adjust our policy above depending on changes in the CEA fund general policy.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T21:58:19.306Z · score: 6 (4 votes) · EA · GW

Responding on a more object-level:

As an obviously extreme analogy, suppose that someone applying for a job decides to include information about their sexual history on their CV.

I think this depends a lot on the exact job, and the nature of the sexual history. If you are a registered sex-offender, and are open about this on your CV, then that will overall make a much better impression than if I find that out from doing independent research later on, since that is information that (depending on the role and the exact context) might be really highly relevant for the job.

Obviously, including potentially embarrassing information in a CV without it having much purpose is a bad idea; it mostly signals various forms of social obliviousness, and distracts from the actually important parts of your CV, which pertain to your professional experience and the factors that will likely determine how well you will do at your job.

But I'm inclined to agree with Howie that the extra clarity you get from moving beyond 'high-level' categories probably isn't all that decision-relevant.

So, I do think this is probably where our actual disagreement lies. Of the most concrete conflicts of interest that have given rise to abuses of power I have observed, both within the EA community and in other communities, more than 50% were the result of romantic relationships, and were basically completely unaddressed by the high-level COI policies that the relevant institutions had in place. Most of these are in weird grey-areas of confidentiality, but I would be happy to talk to you about the details of those if you send me a private message.

I think being concrete here is actually highly action relevant, and I've seen the lack of concreteness in company policies have very large and concrete negative consequences for those organizations.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T21:50:03.879Z · score: 5 (5 votes) · EA · GW
less concrete terms is mostly about demonstrating an expected form of professionalism.

Hmm, I think we likely have disagreements about the degree to which at least a significant chunk of professionalism norms are the result of individuals trying to limit accountability for themselves and the people around them. I generally am not a huge fan of large fractions of professionalism norms (which is not by any means a rejection of all professionalism norms, just of specific subsets of them).

I think newspeak is a pretty real thing, and the adoption of language that is broadly designed to obfuscate and limit accountability is a real phenomenon. I think that phenomenon is pretty entangled with professionalism. I agree that there is often an expectation of professionalism, but I would argue that exactly that expectation is what often causes obfuscating language to be adopted. And I think this issue is important enough that just blindly adopting professional norms is quite dangerous and can have very large negative consequences.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T02:48:05.746Z · score: 3 (2 votes) · EA · GW
You could do early screening by unanimous vote against funding specific potential grantees, and, in these cases, no COI statement would have to be written at all.

Since we don't publicize rejections, or even who applied to the fund, I wasn't planning to write any COI statements for rejected applicants. That's a bit sad, since it kind of leaves a significant number of decisions without accountability, but I don't know what else to do.

The natural time for grantees to object to certain information being included would be when we run our final writeup past them. They could then request that we change our writeup, or ask us to rerun the vote with certain members excluded, which would make the COI statements unnecessary.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:53:33.076Z · score: 4 (6 votes) · EA · GW

This is a more general point that shapes my thinking here a bit, not directly responding to your comment.

If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them and that's not necessarily making them more informed about what they actually cared about. It can actually just be a distraction.

I feel like the thing that is happening here makes me pretty uncomfortable, and I really don't want to further incentivize this kind of assessment of stuff.

A related concept in this space seems to me to be the Copenhagen Interpretation of Ethics:

The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.

I feel like there is a similar thing going on with being concrete about stuff like sexual and romantic relationships (which obviously have massive consequences in large parts of the world). And maybe more broadly having this COI policy in the first place. My sense is that we can successfully avoid a lot of criticism by just not having any COI policy, or having a really high-level and vague one, because any policy we would have would clearly signal we have looked at the problem, and are now to blame for any consequences related to it.

More broadly, I just feel really uncomfortable with having to write all of our documents to make sense on a purely associative level. I as a donor would be really excited to see a COI policy as concrete as the one above, similarly to how all the concrete mistake pages on all the EA org websites make me really excited. I feel like making the policy less concrete trades off getting something right and as such being quite exciting to people like me, in favor of being more broadly palatable to some large group of people, and maybe making a bit fewer enemies. But that feels like it's usually going to be the wrong strategy for a fund like ours, where I am most excited about having a small group of really dedicated donors who are really excited about what we are doing, much more than being very broadly palatable to a large audience, without anyone being particularly excited about it.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:35:21.918Z · score: 4 (2 votes) · EA · GW
being personal friends with someone should require disclosure.

I think this comment highlights some of the reasons why I am hesitant to just err on the side of disclosure for personal friendships.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:21:57.373Z · score: 4 (2 votes) · EA · GW
I think the onus is on LTF to find a way of managing COIs that avoids this, while also having a suitably stringent COI policy.

I mean, these are clearly trading off against each other, given all the time constraints I explained in a different comment. Sure, you can say that we have an obligation, but that doesn't really help me balance these tradeoffs.

The above COI policy is my best guess at how to manage that tradeoff. It seems to me that moving towards recusal on any of the above axes will prevent at least some grants from being made, or at least I don't currently see a way forward that would avoid that. I do think looking into some kind of COI board could be a good idea, but I continue to be quite concerned about having a profusion of boards in which no one has any real investment and no one has time to really think things through, and am currently tending towards that being a bad idea.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:13:11.092Z · score: 6 (4 votes) · EA · GW
I can't imagine myself being able to objectively cast a vote about funding my room-mate

So, I think I agree with this in the case of small houses. However, I've been part of large group houses with 18+ people in them, where I interacted with very few of the people living there, and overall spent much less time with many of my housemates than I did with some very casual acquaintances.

Maybe we should just make that explicit? Differentiate living together with 3-4 other people, from living together with 15 other people? A cutoff at something like 7 people seems potentially reasonable to me.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:09:12.903Z · score: 4 (2 votes) · EA · GW

Yeah, I am not sure how to deal with this. Currently the fund team is quite heavily geographically distributed, with me being the only person located in the Bay Area, so on that dimension we are doing pretty well.

I don't really know what to do if there are multiple COIs, which is one of the reasons I much prefer us to err on the side of disclosure instead of recusal. I expect if we were to include friendships as sufficient for recusal, we would very frequently have only one person on the fund being able to vote on a proposal, and I expect that to overall make our decision-making quite a bit worse.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T23:08:20.346Z · score: 17 (7 votes) · EA · GW

So, the problem here is that we are already dealing with a lot of time constraints, and I feel pretty doomy about having a group that has even less time than the fund already has be involved in this kind of decision-making.

I also have a more general concern: when I look at dysfunctional organizations, one of the things I often see is a profusion of boards upon boards, each of which primarily serves to spread accountability around, overall resulting in a system in which no one really has any skin in the game and in which even very simple tasks often require weeks of back-and-forth.

I think there are strong arguments in this space that should push you towards avoiding the creation of lots of specialized boards and their associated complicated hierarchies, and I think we see that in the most successful for-profit companies. I think the non-profit sector does this more, but I mostly think of this as a pathology of the non-profit sector that is causing a lot of its problems.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T23:02:55.068Z · score: 5 (3 votes) · EA · GW

That seems good. Edited the document!

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:55:27.197Z · score: 3 (2 votes) · EA · GW

Oh, no. To be clear, recusals are generally non-public. The document above should be more clear about that.

Edit: Actually, the document above does just straightforwardly say:

(recusals and the associated COIs are not generally made public)

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:54:22.391Z · score: 19 (6 votes) · EA · GW
It seems fairly obvious to me that being in a [...] active collaboration with someone should require recusal

This seems plausibly right to me, though my model is that this should depend a bit on the size and nature of the collaboration.

As a concrete example, my model is that Open Phil has many people who were actively collaborating with projects that eventually grew into CSET, and that that involvement was necessary to make the project feasible, and some of those then went on to work at CSET. Those people were also the most informed about the decisions about the grants they eventually made to CSET, and so I don't expect them to have been recused from the relevant decisions. So I would be hesitant to commit to nobody on the LTFF ever being involved in a project in the same way that a bunch of Open Phil staff were involved in CSET.

My broad model here is that recusal is a pretty bad tool for solving this problem, and that this instead should be solved by the fund members putting more effort into grants that are subject to COIs, and to be more likely to internally veto grants if they seem to be the result of COIs. Obviously that has less external accountability, but is how I expect organizations like GiveWell and Open Phil to manage cases like this. Disclosure feels like the right default in this case, which allows us to be open about how we adjusted our votes and decisions based on the COIs present.

In general I feel CoI policies should err fairly strongly on the side of caution

I don't think I understand what this means, written in this very general language. Most places don't have strong COI policies at all, and both GiveWell and OpenPhil have much laxer COI policies than the above, from what I can tell, which seem like two of the most relevant reference points.

Open Phil has also written a bunch about how they no longer disclose most COIs because the cost was quite large, so overall it seems like a bad idea to just blindly err on the side of caution (since one of the most competent organizations in our direct orbit has decided that that strategy was a mistake).

The above COI policy is more restrictive than the policy for any other fund (since it's supplementary and in addition to the official CEA COI policy), so it's also not particularly lax in a general sense.

It seems fairly obvious to me that being in a close friendship [...] should require recusal

I am pretty uncertain about this case. My current plan is to have a policy of disclosing these things for a while, and then allow donors and other stakeholders to give us feedback on whether they think some of the grants were bad as a result of those conflicts.

Again, CSET is a pretty concrete example here, with many people at Open Phil being close friends with people at CSET. Or many people at GiveWell being friends with people at GiveDirectly or AMF. I don't know their internal COI policies, but I don't expect those GiveWell or Open Phil employees to completely recuse themselves from the decisions related to those organizations.

There is a more general heuristic here, where at this stage I prefer our policies to end up disclosing a lot of information, so that others can be well-informed about the tradeoffs we are making. If you err on the side of recusal, you will just prevent a lot of grants from being made, the opportunity cost of which is really hard to communicate to potential donors and stakeholders, and it's hard for people to get a sense of the tradeoffs. So I prefer starting relatively lax, and then over time figuring out ways in which we can reduce bad incentives while still preserving the value of many of the grants that are very context-heavy.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:31:28.527Z · score: 7 (4 votes) · EA · GW

Yeah, splitting it into two documents seems reasonable, with one linked more prominently and one here on the forum, though I do much prefer to be more concrete than the GiveWell policy.

I guess I am kind of confused about the benefit of being vague and high-level here. It just seems better for everyone if we are very concrete here, and I kind of don't feel super great about writing things that are less informative, but make people feel better when reading them.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:29:21.056Z · score: 14 (6 votes) · EA · GW

We're a pretty small team, and for most grants, there usually are only one or two people on the fund who have enough context on the project and the grantees to actually be able to assess whether the grant is a good idea. Those are also usually the fund members who are most likely to have conflicts of interest.

Since none of us are full-time, we also usually don't have enough time to share all of our models with each other, so it often isn't feasible to just have the contact-person share all of their impressions with the other fund members, and have them vote on it (since we realistically can't spend more than an hour of full-fund time on each individual grant we make).

One of the things that I am most concerned about, if we were to just move towards recusal, is that we end up in a situation where the other fund members by necessity have to take the recused person's word for the grant being good (or we pass up on all the most valuable grant opportunities). Then their own votes mostly just indirectly represent their trust in the fund member with the COI, as opposed to their independent assessment. This to me seems like it much further reduces accountability and transparency, and muddles a bunch of the internal decision-making.

The current algorithm we are running is something closer to: Be hesitant with recusal, but if the only people on the fund who are strongly in favor of a grant also have some potential weak COI, then put more effort into getting more references and other external sources of validation, while usually still taking the vote of the person with the potential weak COI into account.

Maybe we could integrate this officially into the policy by saying something like: if you have a COI of this type, we will give less weight to your vote, but your vote will still have some weight, depending on how strong your COI is judged to be by the other fund members. Though I am worried that this is too detailed and would require changing every time we change the local dynamics of how we vote on things.
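
To illustrate what such a rule could look like mechanically, here is a hypothetical sketch of the down-weighting idea (the weights, categories, and function are made up for illustration; this is not the fund's actual voting procedure):

```python
# Hypothetical sketch of "give less weight to votes with a COI". The weight
# values and COI categories below are illustrative only, not LTFF policy.
COI_WEIGHTS = {
    "none": 1.0,    # no conflict of interest
    "weak": 0.5,    # e.g. casual acquaintance, member of a large group house
    "strong": 0.0,  # e.g. own organization or romantic partner: effectively recused
}

def weighted_score(votes):
    """votes: list of (vote, coi_level), where vote is a number in [-1, 1]."""
    total = sum(vote * COI_WEIGHTS[coi] for vote, coi in votes)
    weight = sum(COI_WEIGHTS[coi] for _, coi in votes)
    return total / weight if weight else 0.0

# An enthusiastic vote from a member with a weak COI still counts,
# but contributes only half as much as a conflict-free vote.
print(weighted_score([(1.0, "weak"), (0.3, "none"), (-0.2, "none")]))  # 0.24
```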

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:16:46.196Z · score: 4 (4 votes) · EA · GW

My model suggests that a lawyer would say: "Very little of this is legally enforceable, I could help you write something legally enforceable, but it would be a much smaller subset of this document, and I don't know how to help with the rest".

Would be curious if people disagree. I also don't have a ton of experience in dealing with lawyers.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T20:32:03.379Z · score: 2 (3 votes) · EA · GW
Go over this with a lawyer and let them formulate it right.

Huh, I am a bit confused. This is not intended to be a legally binding document, so I am not sure how much a lawyer would help. Do you have any specific questions you would ask a lawyer? Or is it just because lawyers have a lot of experience writing clear documents like this?

Replace ‘romantic and/or sexual’ with ‘intimate’.

Yeah, I could see that, though it feels a bunch less clear to me, and maybe too broad? I don't have a great model of what qualifies as an "intimate" relationship, but do feel pretty comfortable judging a romantic and/or sexual relationship.

I do like all of your edit suggestions, and will incorporate them sometime today.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T20:29:05.247Z · score: 4 (3 votes) · EA · GW

So, I am not sure what the alternative is. The pressures seem worse if anything at that level would automatically result in a complete recusal, and not disclosing it also seems kind of bad.

Having a private board also doesn't feel great to me, mostly because I am not a huge fan of having lots of opaque governing bodies, but maybe that's the best path forward?

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T07:30:20.753Z · score: 4 (2 votes) · EA · GW

Yeah, I was also unsure about this. In the original Google Doc, I had the comment:

I sure feel like this is going to confuse people, but not sure what else to do. Any help would be appreciated.

That comment was attached to the word "metamour". If someone has a better word for that, or a good alternative explanation, that would be good.

I think I would eventually want this policy to be public and linked from our funds page, though it should probably go through at least one more round of editing before that.

Overall, I think if you want a good conflict of interest policy, you have to be concrete about things, which includes talking about things that in reality are often the sources of conflict of interest, which are pretty often messy romantic and sexual relationships and sharing weird intense experiences together. I don't know of a way to both be concrete here, and not be off-putting, while also actually addressing many of these potential sources for COIs.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T03:27:10.800Z · score: 2 (1 votes) · EA · GW

Yes, this was just intended as a negative example. If you are friends with a grantee, but do not share a living arrangement and don't have any romantic involvement with them, you don't have to disclose that, or recuse yourself from the decision.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-05T18:42:49.146Z · score: 8 (4 votes) · EA · GW

One additional idea we had is that we might create some kind of small external board that can supervise our COI decisions, and that in some sense has the authority to deal with difficult edge-cases. By default this would be CEA staff members, but it might make sense to invest some more resources into this, have more clearly delineated responsibilities in this space, and allow for broader community buy-in than I think would be the case if we were supervised just by CEA staff members.

Curious about whether anyone has experience with something like this, and whether it seems like a good idea.

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T03:06:49.698Z · score: 4 (2 votes) · EA · GW

Oops. Fixed.

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T05:09:38.065Z · score: 21 (9 votes) · EA · GW

There might also be some confusion about what the purpose and impact of bets in our community are. While the number of bets being made is relatively small, the effect of having a broader betting culture is quite major, at least in my experience of interacting with the community.

More precisely, we have a pretty concrete norm that if someone makes a prediction or a public forecast, then it is usually valid (with some exceptions) to offer that person a bet at odds equal to or better than their forecasted probability, and to expect them to take you up on the bet. If the person does not take you up on the bet, this usually comes with some loss of status and reputation, and is usually (correctly, I would argue) interpreted as evidence that the forecast was not meant sincerely, or that the person is trying to avoid public accountability in some other way. From what I can tell, this is exactly what happened here.
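
To make the arithmetic behind this norm concrete, here is a toy sketch (my own illustrative numbers, not any formal community rule): someone who sincerely believes their stated probability should see non-negative expected value in a bet offered at odds matching that probability, and positive expected value at any better odds.

```python
# Toy illustration of why a sincere forecaster should accept a bet offered at
# odds equal to or better than their stated probability. Numbers are made up.
def expected_value(p_believed: float, stake: float, payout_if_right: float) -> float:
    """Expected value for the forecaster: they win `payout_if_right` if the
    event resolves the way they predicted, and lose `stake` otherwise."""
    return p_believed * payout_if_right - (1 - p_believed) * stake

# A forecaster states p = 0.8. At exactly fair odds they risk $80 to win $20,
# so the bet has zero expected value for them if the forecast is sincere.
print(expected_value(0.8, stake=80, payout_if_right=20))  # 0.0

# "Equal or better odds" means at least this favorable: risking $70 to win $30
# gives positive expected value, so declining is some evidence the stated
# probability was not sincere (or that other costs dominate).
print(expected_value(0.8, stake=70, payout_if_right=30))  # 10.0
```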

The effects of this norm (at least as I have perceived it) are large and strongly positive. From what I can tell, it is one of the norms that ensures the consistency of the models that our public intellectuals express, and when I interact with communities that do not have this norm, I very concretely experience many people no longer using probabilities in consistent ways, and can concretely observe large numbers of negative consequences arising from the lack of this norm.

Alex Tabarrok has written about this in his post "A Bet is a Tax on Bullshit".

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T04:56:35.434Z · score: 23 (12 votes) · EA · GW
I guess I don't really buy that, though. I don't think that a norm specifically against public bets that are ghoulish from a common-sense morality perspective would place very important limitations on the community's ability to form accurate beliefs or do good.

Responding to this point separately: I am very confused by this statement. A large fraction of the topics we discuss within the EA community are pretty directly about the death of thousands, often millions or billions, of other people. From biorisk (as discussed here), to global health and development, to the risk of major international conflict, a lot of the topics we think about involve people forming models that quite directly require forecasting the potential impacts of various life-or-death decisions.

I expect bets about a large number of Global Catastrophic Risks to be of great importance, and to similarly be perceived as "ghoulish" as you describe here. Maybe you are describing a distinction that is more complicated than I am currently comprehending, but I at least would expect Chi and Greg to object to bets of the type "what is the expected number of people dying in self-driving car accidents over the next decade?", "Will there be an accident involving an AGI project that would classify as a 'near-miss', killing at least 10000 people or causing at least 10 billion dollars in economic damages within the next 50 years?" and "what is the likelihood of this new bednet distribution method outperforming existing methods by more than 30%, saving 30000 additional people over the next year?".

All of these just strike me as straightforwardly important questions that an onlooker could easily construe as "ghoulish", and that I expect would be strongly discouraged by the norms I see being advocated for here. In the case of the last one, it is probably the key fact I would be trying to estimate when evaluating a new bednet distribution method.

Ultimately, I care a lot about modeling the risks of various technologies, and understanding which technologies and interventions can more effectively save people's lives, and whenever I try to understand that, I will have to discuss and build models of how those will impact other people's lives, often in drastic ways.

Compared to the above, the bet between Sean and Justin does not strike me as particularly ghoulish (and I expect that to be confirmed by doing some public surveys on people's naive perception, as Greg suggested), and so I see little alternative to thinking that you are also advocating for banning bets on any of the above propositions, which leaves me confused about why you think doing so would not inhibit our ability to do good.

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T04:43:27.886Z · score: 12 (6 votes) · EA · GW
purely because they thought it'd be fun to win a bet and make some money off a friend.

I do think the "purely" matters a good bit here. While I would go as far as to argue that even purely financial motivations are fine (and should be leveraged for the public good when possible), I think in as much as I understand your perspective, it becomes a lot less bad if people are only partially motivated by making money (or gaining status within their community).

As a concrete example, I think large fractions of academia are motivated by wanting a sense of legacy and prestige (this includes large fractions of epidemiology, which is highly relevant to this situation). Those motivations also feel not fully great to me, and I would feel worried about an academic system that tries to purely operate on those motivations. However, I would similarly expect an academic system that does not recognize those motivations at all, bans all expressions of those sentiments, and does not build systems that leverage them, to also fail quite disastrously.

I think in order to produce large-scale coordination, it is important to enable the leveraging of a large variety of motivations, while also keeping them in check by ensuring at least a minimum level of more aligned motivations (or some other external system that ensures partially aligned motivations still result in good outcomes).

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T22:29:01.201Z · score: 16 (15 votes) · EA · GW
both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.

I am confused. Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prizes.

Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.

We seem to agree on the value of those platforms, and both their public perception and their cultural effects seem highly analogous to the private betting case to me. You even explicitly say that you expect similar reactions to questions like the above being brought up on those platforms.

I agree with you that were there only the occasional one-off bet on the forum being critiqued here, the epistemic cost would be minor. But I am confident that in a community whose relationship to betting was more analogous to what Chi's appears to be, we would never have actually built the Metaculus prediction platform. That part of our culture was what enabled us to have these platforms in the first place (as I think an analysis of the history of Metaculus will readily reveal, since it can be pretty directly traced to a lot of the historic work around prediction markets, which have generally received public critique very similar to the one you describe).

Thus I'm confident if we ran some survey on confronting the 'person on the street' with the idea of people making this sort of bet, they would not think "wow, isn't it great they're willing to put their own money behind their convictions", but something much more adverse around "holding a sweepstake on how many die".

I think this is almost entirely dependent on the framing of the question, so I am a bit uncertain about this. If you frame the question as something like "is it important for members of a research community to be held accountable for the accuracy of their predictions?" you will get a pretty positive answer. If you frame the question as something like "is it bad for members of a research community to profit personally from the deaths and injuries of others?" you will obviously get a negative answer.

In this case, I do think that the broader public will have a broadly negative reaction to the bet above, which I never argued against. The thing I argued against was that minor negative perception in the eyes of the broader public was of particularly large relevance here on our forum.

I additionally argued that the effects of that perception are outweighed by the long-term positive reputational effects of having skin in the game for even just a small fraction of our beliefs, and by the perception of a good chunk of a much more engaged and more highly-educated audience, which thinks of our participation in prediction markets and our culture of betting as one of the things that sets us apart from large parts of the rest of the world.

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T16:48:24.766Z · score: 9 (4 votes) · EA · GW

Ah, I definitely interpreted your comment as “leave a reply or downvote if you think that’s a bad idea”. So I downvoted it and left a reply. My guess is many others have done the same for similar reasons.

I do also think editing for tone was a bad idea (mostly because I think the norm of having to be careful around tone is a pretty straightforward tax on betting, and because it contributed to the shaming of people who do want to bet for what Chi expressed as “inappropriate“ motivations), so doing that was a concrete thing that I think was bad on a norm level.

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T06:40:36.827Z · score: 22 (11 votes) · EA · GW

I also strongly object. I think public betting is one of the most valuable aspects of our culture, and would be deeply saddened to see these comments disappear (and more broadly, as an outside observer, seeing them disappear would make me deeply concerned about the epistemic health of our community, since that norm is one of the things that actually keeps members of our community accountable for their professed beliefs).

Comment by habryka on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-01T06:39:20.564Z · score: 22 (29 votes) · EA · GW

I have downvoted this; here are my reasons:

Pretty straightforwardly, I think having correct beliefs about situations like this is exceptionally important, and maybe the central tenet this community is oriented around. Having a culture of betting on those beliefs is one of the primary ways in which we incentivize people to have accurate beliefs in situations like this.

I think doing so publicly is a major public good, and is helping many others think more sanely about this situation. I think the PR risk that comes with this is completely dwarfed by that consideration. I would be deeply saddened to see people avoid taking these bets publicly, since I benefit a lot from seeing people's beliefs put to the test this way, and I am confident many others do too.

Obviously, providing your personal perspective is fine, but I don't think I want to see more comments like this, and as such I downvoted it. I think a forum that had many comments like this would be a forum I would not want to participate in, and I expect it to directly discourage others from contributing in ways I think are really important and productive (for example, it seems to have caused Sean below to seriously consider deleting his comments, which I would consider a major loss).

I also think that, perception-wise, this exchange communicates one of the primary aspects that make me excited about this community. Seeing exchanges like the above is one of the primary reasons why I am involved in the Effective Altruism community, and is what caused me to become interested in and develop trust in many of the institutions of the community in the first place. As such, I think this comment gets the broader perception angle backwards.

The comment also seems to repeatedly sneak in assumptions of broader societal judgement, without justifying doing so. The comment makes statements that extend far beyond personal perception, and indeed primarily makes claims about external perception and its relevance, which strike me as straightforwardly wrong and badly argued:

Not only because it looks quite bad from the outside

I don't think it looks bad, and think that on the opposite, it communicates that we take our beliefs seriously and are willing to put personal stakes behind them. There will of course be some populations that will have some negative reaction to the above, but I am not particularly convinced of the relevance of their perception to our local behavior here on the forum.

I'm not sure it's appropriate on a forum about how to do good

I am quite confused why it would be "inappropriate". Our culture of betting is a key part of a culture that helps us identify the most effective ways to do good, and as such is highly appropriate for this forum. It seems to me you are simply asserting that it might be inappropriate, and as such are making an implicit claim about what the norms on such a forum should be, which is something I strongly disagree with.

I can guess that the primary motivation is not "making money" or "the feeling of winning and being right" - which would be quite inappropriate in this context

I don't think these motivations would be inappropriate in this context. Those are fine motivations that we healthily leverage in large parts of the world to cause people to do good things, so of course we should leverage them here to allow us to do good things.

The whole economy relies on people being motivated to make money, and it has been a key ingredient to our ability to sustain the most prosperous period humanity has ever experienced (cf. more broadly the stock market). Of course I want people to have accurate beliefs by giving them the opportunity to make money. That is how you get them to have accurate beliefs!

Similarly the feeling of being right is probably what motivates large fractions of epidemiologists, trying to answer questions of direct relevance to this situation. Academia itself runs to a surprising degree on the satisfaction that comes from being right, and I think we should similarly not label that motivation as "inappropriate", and instead try to build a system that leverages that motivation towards doing good things and helping people have accurate beliefs. Which is precisely what public betting does!

Comment by habryka on Growth and the case against randomista development · 2020-01-22T17:17:48.325Z · score: 28 (12 votes) · EA · GW

I hope that karma isn't a signal of disagreement! We've always had norms of karma being a signal for good and bad content, and explicitly not about whether you agree or disagree with someone. I definitely upvote many things I disagree with, and downvote many things that argue badly for conclusions I agree with.

Comment by habryka on More info on EA Global admissions · 2020-01-16T21:15:42.020Z · score: 20 (7 votes) · EA · GW
I’m curious if you have examples of “norms, ideas, or future plans” which were successfully shared in 2016 (when we had just the one large EA Global) that you think would not have successfully been shared if we had multiple events?

I think EAG 2016 was the last time that I felt like there was a strong shared EA culture. These days I feel quite isolated from the European EA culture, and feel like there is a significant amount of tension between the different cultural clusters (though this is probably worsened by me no longer visiting the UK very much, which I tended to do more during my time at CEA). I think that tension has always been there, but I feel like I am now much more disconnected from how EA is going in other places around the world (and more broadly, don't see a path forward for cultural recombination and reconciliation) because the two clusters just have their own events. I also feel somewhat similar about east-coast and west-coast cultural differences.

More concrete examples would be propagating ongoing shifts in cause-priorities. Many surveys suggest there has been an ongoing shift to more long-term causes, and my sense is that there is a buildup of social tension associated with that, which I think is hard to resolve without building common knowledge.

I think EAG 2016 very concretely actually did a lot by creating common-knowledge of that shift in cause-priorities, as well as a broader shift towards more macro-scale modeling, instead of more narrow RCT-based thinking that I think many assumed to be "what EA is about". I.e. I think EAG 2016 did a lot to establish that EA wasn't just primarily GiveWell and GiveWell style approaches.

A lot of the information I expect to be exchanged here is not going to be straightforward facts, but much more related to attitudes and social expectations, so it's hard to be very concrete about these things, which I regret.

Importantly, I think a lot of this information spreads even when not everyone is attending the same talk. At all EAGs I went to, basically everyone knew by the end what the main points of the opening talks were, because people talked to each other about the content of the opening talks (if they were well-delivered), even if they didn't attend, so there is a lot of diffusion of information that makes literally everyone being in the same talk not fully necessary (and where probabilistic common-knowledge can still be built). The information flow between people who attended separate EA Globals is still present, just many orders of magnitude weaker.

At least in recent years, the comparison of the Net Promoter Score of EAG and EAGx events indicate that the attendees themselves are positive about EAGx, though there are obviously lots of confounding factors:

These graphs are great and surprising to me. I don't yet have great models of how I expect the Net Promoter Score to vary for different types of events like this, so I am not sure yet how to update.
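
For readers unfamiliar with the metric: Net Promoter Score is computed from a 0-10 "how likely are you to recommend this?" question as the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch of that calculation, with made-up responses rather than actual EAG/EAGx data:

```python
# Standard Net Promoter Score: % promoters (scores 9-10) minus % detractors
# (scores 0-6) on a 0-10 recommendation question. Example data is made up.
def nps(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

example_responses = [10, 9, 9, 8, 7, 7, 6, 10, 9, 5]
print(nps(example_responses))  # 50% promoters - 20% detractors = 30.0
```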

Echoing Denise, I would be curious for evidence here. My intuition is that marginal returns are diminishing, not increasing, and I think this is a common view

At this year's EAG there were many core people in EA that I had hoped I could talk to, but that weren't attending, and when I inquired about their presence, they said they were just planning to attend EAG London, since that was more convenient for them. I also heard other people say that they weren't attending because they didn't really expect a lot of the "best people" to be around, which is a negative feedback loop that I think is at least partially caused by having many events, without one clear Schelling event that everyone is expected to show up to.

(e.g. ticket prices for conferences don’t seem to scale with the square of the number of attendees).

This assumes a model of perfect monopoly for conferences. In a perfectly competitive conference landscape, you expect ticket prices to be equal to marginal costs, which would be decreasing with size. I expect the actual conference landscape to be somewhere in-between, with a curve that does increase in price proportionally to size for a bit, but definitely not completely. Because of that, I don't think price is much evidence either way on this issue.
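
A toy numerical version of that argument (entirely my own illustrative model, with made-up functional forms and numbers): if each attendee's willingness to pay grows with the number of other attendees while per-attendee costs fall with scale, then a monopolist's ticket price would rise with size, whereas a competitive price tracking marginal cost would fall, so observed prices mostly tell you about market structure.

```python
# Toy model of conference ticket pricing under different market structures.
# The functional forms and constants are made up purely for illustration;
# only the direction of the trends matters here.
def willingness_to_pay(n_attendees):
    # Assume a base value plus a term proportional to how many other people
    # an attendee can meet, so total value scales roughly with n^2.
    return 300.0 + 2.0 * (n_attendees - 1)

def marginal_cost(n_attendees):
    # Assume per-attendee costs fall with scale (shared venue, bulk catering).
    return 200.0 + 5000.0 / n_attendees

for n in [100, 500, 2000]:
    monopoly_price = willingness_to_pay(n)   # price set near willingness to pay
    competitive_price = marginal_cost(n)     # price driven down to marginal cost
    print(n, round(monopoly_price), round(competitive_price))
```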

Do you have examples of groups (events, programs, etc.) which use EA Global attendance as a “significant” membership criterion?

I think I do to some significant extent. I definitely have a significantly different relationship to how I treat people who I met at EA Global. I also think that if someone tells me that they tried to get into EA Global but didn't get in, then I do make a pretty significant update on the degree to which they are core to EA, though the post above has definitely changed that some for me (since it made it more clear that CEA was handling acceptances quite differently than I thought they were). But I don't expect everyone to have read the post in as much detail as I have, and I expect people will continue to think that EAG attendance is in significant part a screen for involvement and knowledge about EA.

I have a variety of other thoughts, but probably won't have time to engage much more. So this will most likely be my last comment on the thread (unless someone asks a question or makes a comment that ends up feeling particularly easy or fun to reply to).

Comment by habryka on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-14T03:51:40.124Z · score: 2 (1 votes) · EA · GW

Thanks! I do really hope I can get around to this. I've had it on my to-do list for a while, but other things have continued to be higher priority. Though I expect things to die down in January, and have some more time set aside in February for writing up LTFF stuff.

Comment by habryka on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-12T20:59:10.902Z · score: 14 (6 votes) · EA · GW

I ended up messaging Ozzie via PM to discuss some of the specific examples more concretely.

I think my position on all of this is better summarized by: "We are currently not receiving many applications from medium-sized organizations, and I don't think the ones that we do receive are competitive with more individual and smaller-project grants".

For me personally the exception here is Rethink Priorities, who have applied, who I am pretty positive on funding, and would strongly consider giving to in future rounds, though I can't speak for the other fund members on that.

Overall, I think we ended up agreeing more on the value of medium-sized orgs, and both think the value there is pretty high, though my experience has been that not that many orgs in that reference class actually applied. And we have actually funded a significant fraction of the ones that have applied (both the AI Safety Camp and CFAR come to mind, and we would have likely funded Rethink Priorities two rounds ago if another funder hadn't stepped in first).

Comment by habryka on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-11T21:19:02.844Z · score: 15 (7 votes) · EA · GW
Are these small interventions mostly more cost-effective than larger ones?

I do think that, right now, at the margin, small interventions are particularly underfunded. I think that's mostly because there are a variety of funders in the space (like Open Phil and SFF) which are focusing on larger organizational grants, so a lot of the remaining opportunities are more in the space of smaller and early-stage projects.

Larks also brought up another argument for the LTFF focusing on smaller projects in his last AI Alignment Review:

I can understand why the fund managers gave over a quarter of the funds to major organisations – they thought these organisations were a good use of capital! However, to my mind this undermines the purpose of the fund. (Many) individual donors are perfectly capable of evaluating large organisations that publicly advertise for donations. In donating to the LTFF, I think (many) donors are hoping to be funding smaller projects that they could not directly access themselves. As it is, such donors will probably have to consider such organisation allocations a mild ‘tax’ – to the extent that different large organisations are chosen than they would have picked themselves.

I find this argument reasonably compelling and also considered it as one of the reasons for why I want us to focus on smaller projects (though I don't think it's the only, or even primary, reason).

In particular, I think the correct pipeline for larger projects is that the LTFF funds them initially until they have demonstrated enough traction such that funders like Open Phil, Ben Delo and SFF can more easily evaluate them.

I am not fundamentally opposed to funding medium-sized organizations for a longer period, and my model is that there is a size of organization, roughly 2 to 6 employees, that Open Phil and other funders are unlikely to fund, due to being too small to really be worth their attention. I expect we have a good chance of providing long-term support for such organizations, if such opportunities arise (though I don't think we have so far found such opportunities that I would be excited about, though maybe the Topos grant ends up in that category).

One clarification on Roam Research: The primary reason why we didn't want to continue funding Roam was that we thought it very likely that Roam could get profit-oriented VC funding. And I did indeed receive an email from Connor White-Sullivan a few days ago saying that they successfully raised their seed round, in part due to having enough runway and funding security because of our grant, so I think saying that we wouldn't fund them further was the correct call. I think for more purely charity-oriented projects, it's more likely that we would want to support them for a longer period of time.

Comment by habryka on Pablo_Stafforini's Shortform · 2020-01-11T01:54:59.121Z · score: 2 (1 votes) · EA · GW

It's also linked from the frontpage with the "Advanced Sorting/Filtering" link.

Comment by habryka on Pablo_Stafforini's Shortform · 2020-01-09T22:20:40.896Z · score: 13 (6 votes) · EA · GW
The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts in the home page that changes so little. I suggest expanding the list considerably so that regular visitors can still expect to see novel posts every time they visit the homepage.

I don't know what settings the EA Forum uses, but on LessWrong we filter this list to only show each user posts that they have not clicked on. I expect we will eventually also add functionality to stop showing a post once a user has seen it a number of times and has repeatedly decided not to click on it.
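
For concreteness, here is a minimal sketch of the kind of filtering described above, in TypeScript. The names (`Post`, `communityFavorites`, `clickedPostIds`) are my own illustration, not the actual LessWrong or EA Forum code.

```typescript
// Minimal sketch (illustrative only, not the real ForumMagnum code):
// filter a "favorites" candidate list down to posts the user has not
// already clicked on, then show the highest-karma survivors.
interface Post {
  id: string;
  title: string;
  karma: number;
}

function communityFavorites(
  candidates: Post[],
  clickedPostIds: Set<string>,
  limit: number = 5
): Post[] {
  return candidates
    .filter((post) => !clickedPostIds.has(post.id)) // hide posts already clicked on
    .sort((a, b) => b.karma - a.karma)              // surface the highest-karma remainder
    .slice(0, limit);
}
```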

The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the home page regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all posts after it as well.

I would use the All-Posts page for this use case. I check the forum only about once a week, and I like seeing only the best posts from the last few days on the frontpage, as opposed to all of them. There are also a variety of other problems with sorting strictly by recency; one of the biggest is that it basically fails as an attention-allocation mechanism and causes people to be a lot more aggressive with downvotes, because everything competes much more directly for frontpage space (i.e. any user creating any post takes up frontpage space, independently of whether the post is well-received or of broad interest).
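
As a rough illustration of what a hybrid karma/recency sort can look like (the actual frontpage algorithm isn't specified in this thread, so this is only a sketch of a generic time-decayed scoring scheme, with a made-up decay exponent):

```typescript
// Illustrative hybrid karma/recency score (not the actual LessWrong or
// EA Forum algorithm): karma is discounted by a time-decay factor, so new
// posts get surfaced but only stay near the top if they are well-received.
interface ScoredPost {
  id: string;
  karma: number;
  postedAt: Date;
}

function frontpageScore(post: ScoredPost, now: Date = new Date()): number {
  const ageInHours = (now.getTime() - post.postedAt.getTime()) / 3_600_000;
  const decay = Math.pow(ageInHours + 2, 1.3); // hypothetical decay exponent
  return post.karma / decay;
}

function sortFrontpage(posts: ScoredPost[]): ScoredPost[] {
  // Highest score first: a mix of how recent and how well-received a post is.
  return [...posts].sort((a, b) => frontpageScore(b) - frontpageScore(a));
}
```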


Comment by habryka on More info on EA Global admissions · 2020-01-04T20:26:49.210Z · score: 40 (14 votes) · EA · GW

So, I have a few considerations that tend to argue against that. Here are some of them:

1. Common knowledge is built better by having everyone actually in the same space

I think having common knowledge of norms, ideas, and future plans is often very important, and it is better achieved by having everyone in the same place. If you split the event into multiple events, even if all the same people attend, the participants of any one event can no longer verify who else heard the same discussions, and as such can no longer build common knowledge with those other people about what was discussed.

2. EAGx events have historically been of lower quality

I have been to three EAGx events, all of which seemed to me to be much worse run than EAG, both in terms of content and operations. To be clear, I don't think this reflects particularly badly on the organizers; running a conference is just hard and requires a lot of time, which most EAGx organizers don't tend to have. In general, I am in favor of specialization here. Obviously, you helping the organizers might address this consideration, so this point might be moot.

3. EAGx events should go deep, whereas EAG events should go wide

When I designed the original goal document for EAGx together with the EAO team, the goal of EAGx was in large part to allow the creation of more specialist conferences, in which participants could go significantly more in depth on a topic, and which would overall feel more like researcher conferences. For a variety of reasons that never ended up happening, but one reason, I think, is that we tried to compensate for the lack of space at EAG events by encouraging people to go to EAGx events instead.

My current sense is that we do also want distributed intro events, and we might want a separate brand from EAGx for that. But for now, I think encouraging the usual EAG attendees to go to EAGx events as a replacement will prevent more specialist EAGx-type events from happening, which seems sad to me.

4. The value of a conference does scale to a meaningful degree with n^2

Metcalfe's Law states that

the [value] of a telecommunications network is proportional to the square of the number of connected users of the system (n^2)

I don't think this fully applies to conferences, but I do think it applies to a large degree. The value of an event to each attendee is roughly proportional to the number of people at that event, so the total value across all attendees scales roughly with the square of attendance, which means there are strong increasing returns to conference size, at least from that perspective.
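
As a rough worked example of that scaling (my numbers, purely illustrative): if the value of an event is proxied by the number of possible pairwise connections,

```latex
% Pairwise connections at an event with n attendees:
\[
  \binom{n}{2} = \frac{n(n-1)}{2}, \qquad
  \binom{600}{2} = 179{,}700, \qquad
  \binom{900}{2} = 404{,}550
\]
% Increasing attendance by 1.5x (600 -> 900) increases possible connections
% by roughly 2.25x, which is the n^2 scaling that Metcalfe's Law points at.
```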

5. Group membership is in significant part determined by who attends EAG, not by who attends EAGx, and I feel somewhat uncomfortable with the degree of control CEA has over that

I think there is a meaningful sense in which people's intuitive sense of "who is an active member of the EA community" is closely tied to who attended past EAG events, so preventing people from attending EAG has a pretty significant effect on their social standing. I think having smaller events introduces a lot of noise into that system, and I also don't currently trust CEA to make a lot of the relevant decisions here, and would prefer that CEA have, on the margin, less control over EA group membership.

I have some more concerns, but these are the ones that I felt like I could write up easily.


Comment by habryka on Thoughts on doing good through non-standard EA career pathways · 2020-01-03T03:26:15.360Z · score: 3 (2 votes) · EA · GW

I have found the Handbook of Cliometrics pretty useful: https://link.springer.com/referencework/10.1007%2F978-3-642-40458-0

Comment by habryka on Effective Altruism Funds Project Updates · 2020-01-03T03:23:50.990Z · score: 6 (4 votes) · EA · GW

*nods* This perspective is still very new to me, and I've only briefly talked about it with people at CEA and the other fund members. My sense was that the "risk of abuse" framing resonated a good amount, but this perspective is definitely not a consensus view among the current fund stakeholders, and it is only the best way I can currently make sense of the constraints the fund is facing. I don't know yet to what degree others will find it compelling.

I don't think anyone made a mistake by writing the current risk page, which I think was an honest and good attempt at explaining a bunch of observations and perspectives. I just think I now have a better model that I would prefer to use instead.

Comment by habryka on Effective Altruism Funds Project Updates · 2020-01-03T00:31:53.555Z · score: 18 (5 votes) · EA · GW
I’d been thinking of “risk” in the sense that the EA Funds materials on the topic use the term: “The risk that a grant will have little or no impact.” I think this is basically the kind of risk that most donors will be most concerned about, and is generally a pretty intuitive framing.

To be clear, I am claiming that the section you are linking is not very predictive of how I expect CEA to classify our grants, nor of the attitudes I have seen from CEA, other stakeholders, and donors of the funds in terms of whether they will have an intuitive sense that a grant is "risky". Indeed, I think that page is somewhat misleading, and we should probably rewrite it.

I am concretely claiming that CEA's attitudes, the attitudes of various stakeholders, and most donors' attitudes are all better predicted by the "risk of abuse" framing I have outlined. In that sense, I disagree with you that most donors will be primarily concerned about the kind of risk discussed on the EA Funds page.

Obviously, I do still think there is a place for considering something more like "variance of impact", but I don't think that dimension has played a large role in people's historical reactions to grants we have made, and I don't expect it to matter much in the future. Most people I have interacted with tend to be relatively risk-neutral when it comes to their altruistic impact (and I don't know of any good arguments for why someone should be risk-averse in their altruistic activities, since the case for diminishing marginal returns at the scales on which our grants influence things seems pretty weak).

Edit: To give a more concrete example here, the grant that has been classified as by far the "riskiest" grant we have made, and that from what I can tell has motivated much of the split into "high risk" and "medium risk" grants, is our grant to Lauren Lee. That grant does not strike me as having a large downside risk, and I don't think anyone I've talked to has suggested that it does. The risk people have talked about is the risk of abuse I have been describing, along with the associated public-relations risks; many have critiqued the grant as "the Long Term Future Fund giving money to their friends", which highlights the dimension of abuse risk much more concretely than the dimension of high variance.

In addition to that, grants that operate at a higher level of meta than other grants, i.e. grants that facilitate recruitment, training or various forms of culture development, have not been broadly described to me as "risky", even though from a variance perspective those kinds of grants are almost always much higher variance than the object-level activities they support (since their success or failure depends on the success of those object-level activities). That again strikes me as strong evidence that variance of impact (which appears to be the perspective the EA Funds materials take) is not a good predictor of how people classify the grants.

Comment by habryka on Introduction: A Primer for Politics, Policy and International Relations · 2020-01-01T03:00:34.773Z · score: 5 (4 votes) · EA · GW

I am excited about this series of posts! :)

Comment by habryka on More info on EA Global admissions · 2019-12-31T23:59:18.577Z · score: 37 (13 votes) · EA · GW

My current sense (as someone who organized EAG in the past and has thought about the effects of EAG a lot) is that it would be better to increase the size of the event, and if that's not financially viable, reduce the size of the subsidies for attendees to make that possible.

I don't think the effect size of "some people felt like the event was too big" is comparable to the effect size of allowing up to 50% more people to participate in the event, and so I think 1000+ person EAG events are probably worth it.

My experience from finding venues is that it is quite doable to find 1000+ person venues for reasonable prices, and I don't share the impression that venue prices increase drastically above 500-600 people. I do think the price per head might increase a bit, but I would be surprised if it increased by more than 15%.

Comment by habryka on Effective Altruism Funds Project Updates · 2019-12-31T22:44:25.058Z · score: 27 (8 votes) · EA · GW

Note that I don't currently feel super comfortable with the "risk" language in the context of altruistic endeavors; I think it conjures up a bunch of confusing associations with financial risk (where there is usually an underlying assumption that you are financially risk-averse, which generally doesn't apply to altruistic efforts). So I am not fully sure whether I can answer your question as asked.

I actually think a major concern that is generating a lot of the discussion around this is much less "high variance of impact" and more something like "risk of abuse".

In particular, I think relatively few people would object if the funds were doing the equivalent of participating in the donor lottery, even though that would very straightforwardly increase the variance of our impact.

Instead, I think the key difference between the LTFF grants that were perceived as "risky" and the grants of most other funds (as well as the grants we made that were perceived as "less risky") is that the risky grants were harder to judge from outside the fund, and were given to people to whom we have closer personal connections, both of which make it possible for fund managers to abuse the fund by funneling money to themselves and their personal connections.

I think people are justified in being concerned about risk of abuse, and I also think that people generally have particularly high standards for altruistic contributions not being funneled into self-enriching activities.

One observation that I think illustrates this pretty well is the response to last round's grant for improving reproducibility in science. I consider that grant one of the riskiest (in the "variance of impact" sense) that we have ever made: its effects are highly indirect, many steps removed from the long-term future and global catastrophic risk, and only really relevant in the somewhat smaller fraction of worlds where the reproducibility of cognitive science becomes relevant to global catastrophic risks.

However, despite that, almost everyone I've talked to classified that grant as one of the least "risky" grants we made. I think this is because, while the grant's path to impact was long and indirect, the reasoning behind it was broadly available and the information necessary to make the judgement was fully public. There was common knowledge that grants of that type are a plausible path to impact, and there was no obvious way in which that grant would benefit us as the grantmakers.

Now, in this new frame, let me answer your original questions:

  • At least from what I know, the management team would stay the same for both funds
  • In the frame of "risk of abuse", I consider the grant for reproducing science to be a "medium-risk" bet. I would also consider our grants to Ought and MIRI to be "medium-risk" bets. I would classify many of our grants to individuals as high-risk bets.
  • I think those "medium-risk" grants are indeed comparable in risk of abuse to the average grant of the Meta Fund, whose managers I think have generally exercised their individual judgement less and deferred more to a broad consensus on which things have positive impact (which I do think has resulted in a lot of value left on the table)

All of this said, I am not yet sure whether the "risk of abuse" framing accurately captures people's feelings here, or whether it is the appropriate frame through which to look at things.

I do think that, at the current margin, using only granting procedures that have minimal risk of abuse leaves a lot of value on the table, because evaluating individual people and their competencies, as well as using local expertise and hard-to-communicate experience, is a crucial component of good grant-making.

I do think we can build better incentive and accountability systems to lower the risk of abuse. That is one of the reasons I've been investing so much effort into producing comprehensive and transparent grant writeups: they expose our reasoning to the public and allow people to cross-check and validate it, as well as call us out if they think our reasoning for a specific grant is spotty. I think this is one way of reducing the risk of abuse, allowing us overall to make grants that take more advantage of our individual judgement and to be more effective on net.