Posts

Kerry_Vaughan's Shortform 2019-09-20T12:28:39.233Z · score: 2 (1 votes)
EA Grants applications are now open 2018-09-17T18:15:55.467Z · score: 28 (27 votes)
CEA on community building, representativeness, and the EA Summit 2018-08-15T00:38:00.111Z · score: 32 (33 votes)
Discussion: Adding New Funds to EA Funds 2017-06-01T18:52:21.513Z · score: 14 (13 votes)
Update on Effective Altruism Funds 2017-04-20T17:20:03.808Z · score: 21 (22 votes)
My 5 favorite posts of 2016 2017-01-05T19:54:52.833Z · score: 4 (12 votes)
What the EA community can learn from the rise of the neoliberals 2016-12-06T18:21:29.486Z · score: 35 (32 votes)
The Best of EA in 2016: Nomination Thread 2016-11-08T01:33:03.646Z · score: 11 (21 votes)
Three Heuristics for Finding Cause X 2016-11-04T14:36:46.572Z · score: 5 (9 votes)
Review of EA Global 2016 Marketing 2016-09-14T00:35:52.694Z · score: 10 (16 votes)
Improving the Effective Altruism Network 2016-08-31T19:44:02.391Z · score: 24 (15 votes)
Effective Altruists really love EA: Evidence from EA Global 2016-08-14T19:25:03.007Z · score: 2 (16 votes)
Month-long EA movement building experiment: Effective Altruism: Grow 2016-06-28T20:34:26.604Z · score: -1 (13 votes)
Effective Altruism Outreach winter fundraiser 2015-12-11T17:34:04.050Z · score: 10 (14 votes)
EA Ventures Request for Projects + Update 2015-06-08T21:29:18.349Z · score: 9 (9 votes)
Save the date for EA Global in August! 2015-05-07T14:45:18.094Z · score: 8 (8 votes)
Announcing Effective Altruism Ventures 2015-02-27T20:03:34.072Z · score: 17 (17 votes)

Comments

Comment by kerry_vaughan on How have you become more (or less) engaged with EA in the last year? · 2020-09-14T14:31:59.923Z · score: 4 (2 votes) · EA · GW

I work at Leverage Research as the Program Manager for our Early Stage Science research.

Comment by kerry_vaughan on How have you become more (or less) engaged with EA in the last year? · 2020-09-11T16:47:36.321Z · score: 23 (14 votes) · EA · GW

I'm much less involved now than I was 12 months ago. 

There are a few reasons for this. The largest factor is that my engagement has steadily decreased since I stopped working an EA job where engagement with EA was a job requirement and took a non-EA job instead. My intellectual interests have also shifted to the history of science, which is mostly outside the EA purview.

More generally, from the outside, EA feels stagnant both intellectually and socially. The intellectual advances that I'm aware of seem to be concentrated in working out the details of longtermism using the tools of philosophy and economics -- important work to be sure, but not work that is likely to substantially influence my worldview or plans. 

Socially, many of the close friends I met in EA are drifting away from EA involvement. The newer people I've met also tend to have a notably different vibe from EAs in the past. Newer EAs seem to look to the older EA intellectuals to tell them what they should do with their lives and how they should think about the world. Something I liked about the vibe of the EA community in the past was the sense of possibility; the sense that there were many unanswered questions and that everyone had to work together to figure things out.

As the EA community has matured, it seems to have narrowed its focus and reined in its level of ambition. That's probably for the best, but I suspect it means that the intellectual explorers of the future are probably going to be located elsewhere.

Comment by kerry_vaughan on Updates from Leverage Research: history, mistakes and new focus · 2019-11-27T15:32:43.751Z · score: 4 (3 votes) · EA · GW

So I’m curious if intellectual progress which is dependent on physical tools is really that much different. I’d naively expect your results to translate to math as well.

This is an interesting point, and it's useful to know that your experience indicates there might be a similar phenomenon in math.

My initial reaction is that I wouldn’t expect models of early stage science to straightforwardly apply to mathematics because observations are central to scientific inquiry and don’t appear to have a straightforward analogue in the mathematical case (observations are obviously involved in math, but their role and type seem possibly different).

I’ll keep the question of whether the models apply to mathematics in mind as we start specifying the early stage science hypotheses in more detail.

Comment by kerry_vaughan on Updates from Leverage Research: history, mistakes and new focus · 2019-11-24T23:27:42.222Z · score: 1 (7 votes) · EA · GW

Hi edoarad,

Some off the bat skepticism. It seems a priori that the research on early stage science is motivated by early stage research directions and tools in Psychology. I'm wary of motivated reasoning when coming to conclusions regarding the resulting models in early stage, especially as it seems to me that this kind of research (like historical research) is very malleable and can be inadvertently argued to almost any conclusions one is initially inclined to.

What's your take on it?

Thanks for the question. This seems like the right kind of thing to be skeptical about. Here are a few thoughts.

First, I want to emphasize that we hypothesize that there may be a pattern here. Part of our initial reasoning for thinking that the hypothesis is plausible comes from both the historical case studies and our results from attempting early stage psychology research, but it could very well turn out that science doesn’t follow phases in the way we’ve hypothesized or that we aren’t able to find a single justified, describable pattern in the development of functional knowledge acquisition programs. If this happens we’d abandon or change the research program depending on what we find.

I expect that claims we make about early stage science will ultimately involve three justification types. The first is whether we can make abstractly plausible claims that fit the fact pattern from historical cases. The second is that our claims will need to follow a coherent logic of discovery that makes sense given the obstacles that scientists face in understanding new phenomena. Finally, if our research program goes well, I expect us to be able to make claims about how scientists should conduct early stage science today and then see whether those claims help scientists achieve more scientific progress. The use of multiple justification types makes it more difficult to simply argue for whatever conclusion one is already inclined towards.

Finally, I should note that the epistemic status of claims made on the basis of historical cases is something of an open question. There’s an active debate in academia about the use of history for reaching methodological conclusions, but at least one camp holds that historical cases can be used in an epistemically sound way. Working through the details of this debate is one of the topics I’m researching at the moment.

Also, I'm not quite sure where do you put the line on what is an early stage research. To take some familiar examples, Einstein's theory of relativity, Turing's cryptanalysis research on the enigma (with new computing tools), Wiles's proof of Fermat's last theorem, EA's work on longtermism, Current research on String theory - are they early stage scientific research?

I don’t yet have a precise answer to the question of which instances of scientific progress count as early stage science, although I expect to work out a more detailed account in the future. Figuring out whether a case of intellectual progress counts as early stage science involves both figuring out whether it is science and then figuring out whether it is early stage science. I probably wouldn’t consider Wiles's proof of Fermat's last theorem or the development of cryptography to be early stage science because I wouldn’t consider mathematical research of this type to be science. Similarly, I probably wouldn’t consider EA work on longtermism to be early stage science because I would consider it philosophy instead of science.

In terms of whether a particular work of science is early stage science, in our paper we gesture at the characteristics one might look for by identifying the following cluster of attributes:

A relative absence of established theories and well-understood instruments in the area of investigation, the appearance of strange or unexplained phenomena, and lack of theoretical and practical consensus among researchers. Progress seems to occur despite (and sometimes enabled by) flawed theories, individual researchers use imprecise measurement tools that are frequently new and difficult to share, and there exists a bi-directional cycle of improvement between increasingly sophisticated theories and increasingly precise measurement tools.

I don’t know enough about the details of how Einstein arrived at his general theory of relativity to say whether it fits this attribute cluster, but it appears to be missing the experimentation and improvement of measurement tools, and the disagreement among researchers. Similarly, while there is significant disagreement among researchers working on theories in modern physics, I think there is substantial agreement on which phenomena need to be explained, how the relevant instruments work, and so on.

Comment by kerry_vaughan on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T20:02:35.387Z · score: 11 (7 votes) · EA · GW

Hey Milan,

I'm Kerry and I'm the program manager for our early stage science research.

We've already been engaging with some of the progress studies folks (we've attended some of their meetups and members of our team know some of the people involved). I haven't talked to any of the folks working on metascience since taking on this position, but I used to work at the Arnold Foundation (now Arnold Ventures), which funds work in the space, so I know a bit about the area. Plus, some of our initial research has involved gaining some familiarity with the academic research in both metascience and the history and philosophy of science, and I expect to stay up to date with the research in these areas in the future. There was also a good meetup for people interested in improving science at EAG: London this year, and I was able to meet a few EAs who are becoming interested in this general topic.

I expect to engage with all of these groups more in the future, but will personally be prioritizing research and laying out the intellectual foundations for early stage science first before prioritizing engaging with nearby communities.

Comment by kerry_vaughan on Which Community Building Projects Get Funded? · 2019-11-14T20:11:21.309Z · score: 6 (6 votes) · EA · GW

"Business plans" aren't really a part of VC evaluations as far as I am aware. It certainly wasn't a part of YC's evaluation process. Eye-popping metrics that show growth are relevant as are the past experiencers of the founders, but VCs don't seem to rely much on abstract plans for what one intends to do as a component of evaluations.

Comment by kerry_vaughan on Which Community Building Projects Get Funded? · 2019-11-14T11:11:25.985Z · score: 13 (9 votes) · EA · GW

My guess is that optimal grantmaking in EA community building is going to be heavily network-based for several reasons.

  1. Running an excellent EA community is a social activity.

Grantmakers gain tons of information about how capable someone is likely to be at doing this by interacting with them socially, and that requires meeting them through their networks.

  2. There are some significant downside risks in funding an EA community builder, and network-based funding derisks this.

If an EA community builder does something bad, having been funded by CEA means that it now reflects on the community as a whole and not just on the specific people involved. This means that funders need to both protect against the downside risks and fund promising projects. Having someone you know and trust vouch for someone you don't know is, per unit of time involved, one of the best ways I know of to figure out who is and isn't likely to accidentally cause harm.

  3. There aren't good objective criteria for evaluating newer community builders.

For someone who has just started running an EA group, it's hard to provide objective numbers that show that you should be funded. Group size, for example, isn't a good proxy because small groups of highly dedicated, capable people are likely to be more valuable than large groups of less dedicated, less capable people. An evaluation of the community builders themselves is probably required, and information from people in your network helps with this.

Comment by kerry_vaughan on Which Community Building Projects Get Funded? · 2019-11-14T10:40:43.325Z · score: 27 (15 votes) · EA · GW

An interesting comparison point is venture capital investing. VCs have a strong financial incentive to find and invest in all of the best companies regardless of location. Yet, as far as I know, networks matter a ton for getting VC funding, and there are geographic clustering effects in which companies get funded. We could conclude that VCs are allocating their capital inefficiently, and that there's a market opportunity for VC firms that have partners in many different locations all over the world.

I suspect that's not the right conclusion. Instead, I'd guess that the effect is created by lots of promising companies moving to a tech hub and the best companies being capable of networking their way to funders regardless of location. If you're a startup CEO and can't work out how to get a meeting with VCs, you might be in the wrong line of work.

Similarly, I think one conclusion that I'd like promising EA community leaders to reach from this analysis is that they should probably make sure to find ways to meet the people making grants in their areas. Being able to network seems like a core skill for a promising community builder, so this is an opportunity to exercise that skill. Of course, this doesn't mean that grantmakers shouldn't be working to expand the geographic scope of their grantmaking, it just means that if you're concerned that you're going to get left out of funding unfairly, there are steps you can take to prevent that.

Comment by kerry_vaughan on Which Community Building Projects Get Funded? · 2019-11-14T10:36:07.461Z · score: 12 (9 votes) · EA · GW

Just to add a datapoint to this analysis. I was in charge of the referral-based round of EA Grants in 2018. At that time I was based in Fort Worth, Texas for personal reasons. My networks probably had some of the geographic biases that you're concerned about, but for more complex reasons than my physical location.

(Note: I no longer work at CEA and do not speak on CEA's behalf)

Comment by kerry_vaughan on Kerry_Vaughan's Shortform · 2019-09-23T10:44:53.278Z · score: 13 (10 votes) · EA · GW

The scaffolding problem in early stage science

Part of the success of science comes from the creation and use of scientific instruments. Yet, before you can make good use of any new scientific instrument, you have to first solve what I’m going to call the “scaffolding problem.”

A scientific instrument is, broadly speaking, any device or tool that you can use to study the world. At the most abstract level, the way a scientific instrument works is that it interacts with the world in some way resulting in a change in its state. You then study the change in the instrument’s state as a way of learning about the world.

For example, imagine you want to use a thermometer to learn the temperature of a cup of water. Instead of studying the water directly, the thermometer lets you study the thermometer itself to learn the temperature. For a device as well-calibrated as a modern thermometer, this works extremely well.

Now imagine you’ve invented some new scientific instrument and you want to figure out whether it works. How would you go about doing that? This is a surprisingly difficult problem.

Here’s an abstract way of stating it:

  1. We want to learn about some phenomenon, X.
  2. X is not directly observable, so we infer it from some other phenomenon, Y.
  3. If we want to know if Y tells us about X, we cannot use Y itself, we must use some other phenomenon, Z.
  4. If Z is supposed to tell us about X, then either:
     4a) There’s no need to infer X from Y; we should just infer it from Z, OR
     4b) We have to explain why we can infer X from Z, which repeats this problem

To understand the problem, take the case of the thermometer. If we have the world’s first thermometer, what we want to know is whether the thermometer tells us about the temperature. But to do that, we need to already know the temperature. And if we knew the temperature, there wouldn’t be a need to invent a thermometer in the first place.

Given that we have scientific instruments like thermometers, you can guess that there is a solution to this problem. But the solution is tricky and takes careful triangulation between multiple methods of studying the phenomenon, none of which you totally trust.
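
To make the triangulation idea a bit more concrete, here is a minimal simulation sketch in Python. Everything in it is hypothetical rather than from the post: the three partially trusted methods, their biases and noise levels, and the new instrument's linear response are all invented for illustration.

    import random

    random.seed(0)

    def consensus_estimate(temp):
        """Average of three imperfect methods we partially trust,
        each with its own (invented) bias and noise."""
        a = temp + 0.5 + random.gauss(0, 0.8)   # e.g. an older instrument
        b = temp - 0.3 + random.gauss(0, 1.2)   # e.g. physiological signs
        c = temp + random.gauss(0, 1.5)         # e.g. a theoretical model
        return (a + b + c) / 3.0

    def new_instrument(temp):
        """The untested instrument: an uncalibrated reading on an
        arbitrary scale (slope and offset are hypothetical)."""
        return 2.0 * temp + 5.0 + random.gauss(0, 0.5)

    # Expose samples at unknown, varying temperatures to every method.
    pairs = [(new_instrument(t), consensus_estimate(t))
             for t in (random.uniform(10, 90) for _ in range(200))]

    # If the new instrument co-varies tightly with the consensus of the
    # partially trusted methods, that is defeasible evidence that it
    # tracks the same phenomenon, even though no single method is
    # fully trusted.
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    r = cov / (var_x * var_y) ** 0.5
    print(f"correlation(new instrument, consensus): {r:.3f}")

Of course, the real scaffolding problem is harder than this sketch suggests: historically there was no ready-made consensus of trusted methods, and much of the work lay in deciding which imperfect methods deserved partial trust in the first place.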

I plan to write more on this and how the scaffolding process works in the future.

Comment by kerry_vaughan on Kerry_Vaughan's Shortform · 2019-09-20T12:28:39.364Z · score: 7 (7 votes) · EA · GW

Is there a "scientific method"?

If you learned about science in school, or read the Wikipedia page on the scientific method, you might have encountered the idea that there is a single thing called “The Scientific Method.” Formulations differ in the details, but the method is usually said to involve generating hypotheses, making predictions, running experiments, evaluating the results, and then submitting them for peer review.

The idea is that all scientists follow something like this method.

The idea of there being a “scientific method” exists for some reason, but it’s probably not because this corresponds to the reality of actual science. This description from the Stanford Encyclopedia of Philosophy article on scientific method is helpful:

“[t]he issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist do we need to be about method? Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism ... Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate.”

Similarly, the physicist and Nobel Laureate Steven Weinberg said:

“The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend.”

This suggests that the mainstream view amongst those who seriously study the scientific method is that there isn’t a single method that comprises science, but that there are a variety of methods and the question is how many different methods to include.

This is weird.

Why is the public discussion about the scientific method so out-of-touch with the reality of science? My best guess is that public discussions of the scientific method are doing three different things:

  1. Governance

Scientific knowledge is afforded tremendous power and respect. For proponents of the ideology, it’s important to explain the source of the mandate for that power and respect and what it means for individuals. This includes distinguishing science from non-science, explaining why science is special and explaining why you should trust science.

The idea of “The Scientific Method” is useful for this governance function.

  2. Transmission of Culture

One component of the success of science is the scientific culture. Those involved in science have an approach to understanding the world that is careful, rigorous, and open to new evidence. Presenting a “Scientific Method” is a useful way of transmitting the careful, rigorous nature of the scientific culture even if no singular method exists.

  3. Epistemology

Finally, some discussions of The Scientific Method contain claims about how one ought to come to understand the world.

Where the public discussion on scientific methodology is doing 1) or 2), it will make for bad epistemology. Unfortunately, I think much of the public discourse is doing 1) and 2). For this reason, I think it’s best to mostly ignore the public conversation on scientific methodology and whatever they taught you in school if you want to understand how to gain knowledge about the world.

Comment by kerry_vaughan on Effective Altruism is an Ideology, not (just) a Question · 2019-07-04T19:56:01.978Z · score: 7 (5 votes) · EA · GW

I really like this post. Thanks for writing it!

I suspect that an even easier way to get to the conclusion that EA is an ideology is to just try to come up with any plausible statement of what the concept means. At a minimum, I think the concept includes:

  1. A system or set of beliefs that tend to co-occur.
  2. Some set of goals which the beliefs are trying to accomplish.

EAs tend to have a shared set of goals and tend to have shared beliefs, so EA is an ideology.

Comment by kerry_vaughan on EA Community Building Grants Update · 2018-11-28T19:32:35.709Z · score: 23 (8 votes) · EA · GW

Hi Michael,

It seems like the nature of your concern originates from this paragraph:

“We used the 80,000 Hours list of priority paths as the basis for our list of accredited roles, but expanded it to be somewhat broader. The areas and roles that we intend to accredit are still being decided upon, and we expect the number of accredited roles and areas to increase in the future. We’ve chosen a relatively restricted set of criteria for the time being, as we think the costs to later restricting the criteria will be significantly higher than the costs of expanding them.”

I think some additional color on how we expanded the criteria to be broader than 80K priority paths, and how we expect to expand them further in the future, will mitigate some (but not all) of your concerns.

80K priority paths are the initial basis of our list of accredited roles because we have high credence that these are high-impact roles and because 80K can provide infrastructure to assist in helping members of local groups connect with some of these roles. We don’t expect that these are the only roles that are important and it is not our intent to accredit only these roles.

Interestingly, our initial list of accredited roles included any Open Phil or GiveWell grantee in order to make the list more inclusive. We now feel that this might have been too inclusive given the significant expansion in the set of organizations Open Phil has made grants to. (For example, we probably wouldn't be happy to accredit someone working at VasoRx without further information.)

Our current best guess for how to proceed is to accredit anyone working at an organization on the 80K Job Board (which includes global poverty and animal welfare organizations) but continue to review individual positive outcomes on a case-by-case basis and accredit some career-related outcomes that are neither on the 80K job board nor Open Phil/GiveWell grantees.

Unfortunately, any list of accredited outcomes that we distribute to grantees will be imperfect. As we mentioned in the original post, our goals in designing an evaluation process for EA community grants were to:

  • Give groups clear guidance on what would cause us to evaluate their activities favourably.
  • Make it easy for the EA community and potential funders to understand and evaluate the success of EA Community Grants.
  • Provide sufficient evidence of the value produced to enable CEA to make well-informed decisions on whether to renew funding for given groups, and whether to scale the EA Community Grants process.
  • Avoid incentivising groups to optimise for our metrics rather than for what is actually highest-impact.
  • Minimise the time cost to CEA and to groups in evaluating their results.

Our current best guess for how to do this is to use the cause prioritization research done by others to create an initial list of clear, easy-to-understand positive outcomes that we can communicate to CEA’s donors and grantees, and then to accredit additional positive outcomes on a case-by-case basis. We think this is superior to accrediting everything on a case-by-case basis because it reduces uncertainty for grantees and makes the project easier for funders to evaluate. We also think that 80K represents the best research on which careers have an impact, so it would be surprising if we didn’t take advantage of that resource.

If you think we should accredit entirely different outcomes, think this is the wrong approach for accrediting career-related outcomes, or think that we should use a different list, we’d be very open to suggestions. We plan to make the next iteration of our accreditation criteria before the next application round in January, so feedback now would be particularly timely.

Comment by kerry_vaughan on EA Grants applications are now open · 2018-10-16T09:53:18.031Z · score: 2 (2 votes) · EA · GW

EA Grants applications are now closed. We'll get back to all applicants before October 26.

Comment by kerry_vaughan on EA Grants applications are now open · 2018-09-29T12:25:20.544Z · score: 6 (6 votes) · EA · GW

1) We don't have a full list at this time. We're still in the process of figuring out what our approach to communication for EA Grants should be. My best guess is that we'll want to share details of some, but not all, of the grants that we make.

2) Not at the moment. I'm planning to have the person we hire be in charge of this process since I think the feedback loops from past grants will be important for helping them make good grant decisions in the future.

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-21T23:04:48.829Z · score: 2 (2 votes) · EA · GW

Does this mean you wouldn't be keen on e.g. "cause-specific community liaisons" who mainly talk to people with specific cause-prioritisations, maybe have some money to back projects in 'their' cause, etc? (I'm thinking of something analogous to an Open Philanthropy Project Program Officer.)

I don't think I would be keen on this as stated. I would be keen on a system by which CEA talks to more people with a wider variety of views, but entrenching particular people or particular causes seems likely to be harmful to the long-term growth of the community.

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-21T22:55:53.524Z · score: 8 (7 votes) · EA · GW

I agree that I might be wrong about this, but it's worth noting that I wasn't trying to make a claim about the modal EA. When talking about the emerging consensus, I was implicitly referring to the influence-weighted opinion of EAs or something like that. This could be an area where I don't have access to a representative sample of influential EAs, which would make it likely that the claim is false.

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-18T00:25:39.114Z · score: 11 (11 votes) · EA · GW

Thanks Sam! This is really helpful. I'd be interested in talking on Skype about this sometime soon (just emailed you about it). Some thoughts below:

Is longtermism a cause?

One idea I've been thinking about is whether it makes sense to treat longtermism/the long-term future as a cause.

Longtermism is the view that most of the value of our actions lies in what happens in the future. You can hold that view and also hold the view that we are so uncertain about what will happen in the future that doing things with clear positive short-term effects is the best thing to do. Peter Hurford explains this view nicely here.

I do think that longtermism as a philosophical point of view is emerging as an intellectual consensus in the movement. Yet, I also think there are substantial and reasonable disagreements about what that means practically speaking. I'd be in favor of us working to ensure that people entering the community understand the details of that disagreement.

My guess is that while CEA is very positive on longtermism, we aren't anywhere near as positive on the cause/intervention combinations that longtermism typically suggests. For example, personally speaking, if it turned out that recruiting ML PhDs to do technical AI-Safety research didn't have a huge impact, I would be surprised but not very surprised.

Threading the needle

My feeling as I've been thinking about representativeness is that getting this right requires threading a very difficult needle because we need to optimize against a large number of constraints and considerations. Some of the constraints include:

  • Cause areas shouldn't be tribes -- I think cause area allegiance is operating as a kind of tribal signal in the movement currently. You're either in the global poverty tribe or the X-risk tribe or the animal welfare tribe, and people tend to defend the views of the tribe they happen to be associated with. I think this needs to stop if we want to build a community that can actually figure out how to do the most good and then do it. Focusing on cause areas as the unit of analysis for representativeness entrenches the tribal concern, but it's hard to get away from because it's an easy-to-understand unit of analysis.
  • We shouldn't entrench existing cause areas -- we should be aiming for an EA that has the ability to shift its consensus on the most pressing problems as we learn more. Some methods of increasing representativeness have the effect of entrenching current cause areas and making intellectual shifts harder.
  • Cause-impartiality can include having a view -- cause impartiality means that you do an impartial calculation of impact to determine what to work on. Such a calculation should lead to developing views on what causes are most important. Intellectual progress probably includes decreasing our uncertainty and having stronger views.
  • The views of CEA staff should inform, but not determine, our work -- I don't think it's realistic or plausible for CEA to take actions as if we have no view on the relative importance of different problems, but it's also the case that our views shouldn't substantially determine what happens.
  • CEA should sometimes exercise leadership in the community -- I don't think that social movements automatically become excellent. Excellence typically has to be achieved on purpose by dedicated, skilled actors. I think CEA will often do work that represents the community, but will sometimes want to lead the community on important issues. The allocation of resources across causes could be one such area for leadership although I'm not certain.

There are also some other considerations around methods of improving representativeness. For example, consulting established EA orgs on representativeness concerns has the effect of entrenching the current systems of power in a way that may be bad, but that gives you a sense of the consideration space.

CEA and cause-impartiality

Suggestion: CEA should actively champion cause impartiality

I just wanted to briefly clarify that I don't think CEA taking a view in favor of longtermism or even in favor of specific causes that are associated with longtermism is evidence against us being cause-impartial. Cause-impartiality means that you do an impartial calculation of the impact of the cause and act on the basis of that. This is certainly what we think we've done when coming to views on specific causes although there's obviously room for reasonable disagreement.

I would find it quite odd if major organizations in EA (even movement building organizations) had no view on what causes are most important. I think CEA should be aspiring to have detailed, nuanced views that take into account our wide uncertainty, not no views on the question.

Making people feel listened to

I broadly agree with your points here. Regularly talking to and listening to more people in the community is something that I'm personally committed to doing.

Your section on representatives feels like you are trying to pin down a way of finding an exact number so you can say we have this many articles on topic x and this many on topic y and so on. I am not sure this is quite the correct framing.

Just to clarify, I also don't think trying to find a number that defines representativeness is the right approach, but I also don't want this to be a purely philosophical conversation. I want it to drive action.

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-15T21:06:50.953Z · score: 4 (4 votes) · EA · GW

We're asking for feedback on who we should consult with in general, not just for EA Global.

In particular, the usual process of seeking advice from people we know and trust is probably producing a distortion where we aren't hearing from a true cross-section of the community, so figuring out a different process might be useful.

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-15T21:01:16.028Z · score: 4 (4 votes) · EA · GW

The biggest open questions are:

1) In general, how can we build a community that is both cause impartial and also representative?
2) If we want to aim for representativeness, what reference class should we target?

Comment by kerry_vaughan on CEA on community building, representativeness, and the EA Summit · 2018-08-15T20:58:48.416Z · score: 2 (4 votes) · EA · GW

At the moment our mainline plan is this post with a request for feedback.

I've been talking with Joey Savoie and Tee Barnett about the issue. I intend to consult others as well, but I don't have a concrete plan for who to contact.

Comment by kerry_vaughan on Effective Altruism Grants project update · 2018-02-11T17:27:33.827Z · score: 1 (3 votes) · EA · GW

Currently planning to open EA Grants applications by the end of the month. I plan for the application to remain open so that I can accept applications on a rolling basis.

Comment by kerry_vaughan on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2018-01-02T21:58:25.246Z · score: 2 (2 votes) · EA · GW

I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding but that you cannot afford to fund. Also I had this thought prior to reading that one of your noted mistakes was "underestimated the number of applications", it feels like you might still be making this mistake.

That's fair. My thinking in choosing £2m was that we would want to fund more projects than we had money to fund last year, but that we would have picked much of the low-hanging fruit, so there'd be less to fund.

In any case, I'm not taking that number too seriously. We should fund all the projects worth funding and raise more money if we need it.

Comment by kerry_vaughan on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2018-01-02T21:52:53.677Z · score: 1 (1 votes) · EA · GW

Will you allow individuals to fund EA Grants in the future?

We probably won't raise EA Grants money from more than a handful of donors. I think we can secure funding from CEA's existing donor base and the overhead of raising money from multiple funders probably isn't worth the cost.

That said, there are two related things that we will probably do:

  1. We'll probably refer some promising projects to other funders. We did this last round for projects that we couldn't fund for legal reasons and for projects where existing funders had more expertise in the project than we did.
  2. We'll probably send applicants that were close to getting funding but didn't receive it to other funders that might be interested in the project.

Comment by kerry_vaughan on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2017-12-19T21:20:15.512Z · score: 6 (6 votes) · EA · GW

Good question. I agree that the process for Individual outreach is mysterious and opaque. My feeling is that this is because the approach is quite new, and we don't yet know how we'll select people or how we'll deliver value (although we have some hypotheses).

That said, there are two answers to this question depending on the timeline we're talking about.

In the short run, the primary objective is to learn more about what we can do to be helpful. My general heuristic is that we should focus on the people/activity combinations that seem to us to be likely to produce large effects so that we can get some useful results, and then iterate. (I can say more about why I think this is the right approach, if useful).

In practice, this means that in the short-run we'll work with people that we have more information on and easier access to. This probably means working with people that we meet at events like EA Global, people in our extended professional networks, EA Grants recipients, etc.

In the future, I'd want something much more systematic to avoid the concerns you've raised and to avoid us being too biased in favor of our preexisting social networks. You might imagine something like 80K coaching where we identify some specific areas where we think we can be helpful and then do broader outreach to people that might fall into those areas. In any case, we'll need to experiment and iterate more before we can design a more systematic process.

Comment by kerry_vaughan on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-07T22:55:00.940Z · score: 2 (2 votes) · EA · GW

3c. Other research, especially "learning to reason from humans," looks more promising than HRAD (75%?)

I haven't thought about this in detail, but whether the evidence in this section justifies the claim in 3c might depend, in part, on what you think the AI Safety project is trying to achieve.

On first pass, the "learning to reason from humans" project seems like it may be able to quickly and substantially reduce the chance of an AI catastrophe by introducing human guidance as a mechanism for making AI systems more conservative.

However, it doesn't seem like a project that aims to do either of the following:

(1) Reduce the risk of an AI catastrophe to zero (or near zero)
(2) Produce an AI system that can help create an optimal world

If you think either (1) or (2) are the goals of AI Safety, then you might not be excited about the "learning to reason from humans" project.

You might think that "learning to reason from humans" doesn't accomplish (1) because a) logic and mathematics seem to be the only methods we have for stating things with extremely high certainty, and b) you probably can't rule out AI catastrophes with high certainty unless you can "peer inside the machine" so to speak. HRAD might allow you to peer inside the machine and make statements about what the machine will do with extremely high certainty.

You might think that "learning to reason from humans" doesn't accomplish (2) because it makes the AI human-limited. If we want an advanced AI to help us create the kind of world that humans would want "if we knew more, thought faster, were more the people we wished we were" etc. then the approval of actual humans might, at some point, cease to be helpful.

Comment by kerry_vaughan on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-07T05:45:21.484Z · score: 20 (20 votes) · EA · GW

This was the most illuminating piece on MIRI's work and on AI Safety in general that I've read in some time. Thank you for publishing it.

Comment by kerry_vaughan on Discussion: Adding New Funds to EA Funds · 2017-06-07T16:02:29.847Z · score: 2 (4 votes) · EA · GW

I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense).

Great idea. This makes sense to me.

Comment by kerry_vaughan on Discussion: Adding New Funds to EA Funds · 2017-06-07T16:00:28.653Z · score: 0 (0 votes) · EA · GW

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

The fund supports whatever he thinks is best in EA Community building. If he wanted to fund other things, the EA Community fund would not be a good option.

Comment by kerry_vaughan on Discussion: Adding New Funds to EA Funds · 2017-06-02T17:02:58.935Z · score: 1 (1 votes) · EA · GW

Hey Michael, great ideas. I'd like to see all of these as well. My concern would just be whether there are charities available to fund in the areas. Do you have some potential grant recipients for these funds in mind?

Comment by kerry_vaughan on Discussion: Adding New Funds to EA Funds · 2017-06-02T17:00:58.239Z · score: 3 (3 votes) · EA · GW

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. EA Community building has the least donations so far ($83,000). Splitting might make the resulting funds too small to be able to do much.

Comment by kerry_vaughan on Discussion: Adding New Funds to EA Funds · 2017-06-02T16:46:39.389Z · score: 0 (0 votes) · EA · GW

Great point.

A different option for handling this concern would be for us to let fund managers email the EA Funds users if they have a good opportunity, but lack funding.

Comment by kerry_vaughan on Effective altruism is self-recommending · 2017-05-08T17:33:28.448Z · score: 3 (3 votes) · EA · GW

Hey, Ben. Just wanted to note that I found this very helpful. Thank you.

Comment by kerry_vaughan on Effective altruism is self-recommending · 2017-04-28T17:58:28.078Z · score: 4 (4 votes) · EA · GW

We didn't offer any alternative events during Elon's panel because we (correctly) perceived that there wouldn't be demand for going to a different event, and putting someone on stage with few people in the audience is not a good way to treat speakers.

We had to set up an overflow room for people that didn't make it into the main room during the Elon panel, and even the overflow room was standing room only.

I think this is worth pointing out because of the preceding sentence:

However, EA leadership tends to privately focus on things like AI risk.

The implication is that we aimed to bias the conference towards AI risk and against global poverty because of some private preference for AI risk as a cause area.[1]

I think we can be fairly accused of aiming for Elon as an attendee and not some extremely well-known global poverty person.

However, with the exception of Bill Gates (who we tried to get), I don't know of anyone in global poverty with anywhere close to the combination of a) general renown and b) reachability. So, I think trying to get Elon was probably the right call.

Given that Elon was attending, I don't see what reasonable options we had for more evenly distributing attention between plausible causes. Elon casts a big shadow.

[1] Some readers contacted me to let me know that they found this sentence confusing. To clarify, I do have personal views on which causes are higher impact than others, but the program design of EA Global was not an attempt to steer EA on the basis of those views.

Comment by kerry_vaughan on Effective altruism is self-recommending · 2017-04-27T20:56:51.581Z · score: 6 (6 votes) · EA · GW

Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.

EA Global 2015 had one panel on AI (in the morning, on day 2) and one talk triplet on Global Poverty (in the afternoon, on day 2). Most of the content was not cause-specific.

People remember EA Global 2015 as having a lot of AI content because Elon Musk was on the AI panel, which made it loom very large in people's minds. So, while it's fair to say that more attention ended up on AI than on global poverty, it's not fair to say that the content focused more on AI than on global poverty.

Comment by kerry_vaughan on Effective altruism is self-recommending · 2017-04-27T20:35:57.391Z · score: 6 (8 votes) · EA · GW

But the right thing to do, if you want to persuade people to delegate their giving decisions to Nick Beckstead, is to make a principled case for delegating giving decisions to Nick Beckstead.

I just want to note that we have tried to make this case.

The fund page for the Long-Term Future and EA Community funds includes an extensive list of organizations Nick has funded in the past and of his online writings.

In addition, our original launch post contained the following section:

Strong track record for finding high-leverage giving opportunities: the EA Giving Group DAF

The initial Long-Term Future and Effective Altruism Community funds will be managed by Nick Beckstead, a Program Officer at the Open Philanthropy Project who has helped advise a large private donor on donation opportunities for several years. The donor-advised fund (DAF) Nick manages was an early funder of CSER, FLI, Charity Entrepreneurship and Founders Pledge. A list of Nick’s past funding is available in his biography on this website.

We think this represents a strong track record, although the Open Philanthropy Project’s recent involvement in these areas may make it harder for the fund to find promising opportunities in the future.

Donors can give to the DAF directly by filling out this form and waiting for Nick to contact you. If you give directly the minimum contribution is $5,000. If you give via the EA Funds there is no minimum contribution and you can give directly online via credit/debit card, ACH, or PayPal. Nick's preference is that donors use the EA Funds to contribute.

Disclaimer: Nick Beckstead is a trustee of CEA. CEA has been a large recipient of the EA Giving Group DAFs funding in the past and is a potential future recipient of money allocated to the Movement Building fund.

My guess is that you feel that we haven't made the case for delegating to Nick as strongly or as prominently as we ought to. If so, I'd love some more specific feedback on how we can improve.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-23T19:32:52.171Z · score: 1 (1 votes) · EA · GW

I'm not sure that's true. There are a lot of venture funds in the Valley but that doesn't mean it's easy to get any venture fund to give you money.

I don't have the precise statistics handy, but my understanding is that VC returns are very good for a small number of firms and break-even or negative for most VC firms. If that's the case, it suggests that as more VCs enter the market, more bad companies are getting funded.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-23T19:24:54.707Z · score: 2 (2 votes) · EA · GW

It also doesn't help that most of the core objections people have brought up have been acknowledged but not addressed.

My sense (and correct me if I'm wrong) is that the biggest concerns seem to be related to the fact that there is only one fund for each cause area and the fact that Open Phil/GiveWell people are running each of the funds.

I share this concern and I agree that it is true that EA Funds has not been changed to reflect this. This is mostly because EA Funds simply hasn't been around for very long and we're currently working on improving the core product before we expand it.

What I've tried to do instead is precommit to 50% or less of the funds being managed by Open Phil/GiveWell and give a general timeline for when we expect to start making good on that commitment. I know that doesn't solve the problem, but hopefully you agree that it's a step in the right direction.

That said, I'm sure there are other concerns that we haven't sufficiently addressed so far. If you know of some off the top of your head, feel free to post them as a reply to this comment. I'd be happy to either expand on my thoughts or address the issue immediately.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-23T19:01:03.111Z · score: 2 (2 votes) · EA · GW

Kerry can confirm or deny but I think he's referring to the fact that a bunch of people were surprised to see (e.g.? Not sure if there were other cases.) GWWC start recommending the EA funds and closing down the GWWC trust recently when CEA hadn't actually officially given the funds a 'green light' yet.

Correct. We had updated in favor of EA Funds internally but hadn't communicated that fact in public. When we started linking to EA Funds on the GWWC website, people were justifiably confused.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three month trial period. If there was support to start out with and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger but you shouldn't update in favor of it being well received based on more recent data.

The money moved is the strongest new data point.

It seemed quite plausible to me that we could have the community be largely supportive of the idea of EA Funds without actually using the product. This is more or less what happened with EA Ventures -- lots of people thought it was a good idea, but not many promising projects showed up and not many funders actually donated to the projects we happened to find.

Do you feel that the post as currently written still overhypes the community's perception of the project? If so, what changes would you suggest to bring it more in line with the observable evidence?

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-23T18:48:05.393Z · score: 2 (2 votes) · EA · GW

Hey Vipul, thanks for taking the time to write this. I think I largely agree with the points you've made here.

As we've stated in the past, the medium-term goal for EA Funds is to have 50% or less of the fund managers be Open Phil/GiveWell staff. We haven't yet decided whether we would plan to add fund managers in new cause areas, add fund managers with different approaches in existing cause areas, or some combination of the two. Given that Global Health and Development has received the most funding, there is likely room for adding funds that take a different approach to funding the space. Personally, I'd be excited to see something like a high-risk, high-reward global health and development fund.

I probably disagree with changing the name of the fund right now as I think the current name does a good job of making it immediately clear what the fund is about. Because the UI of EA Funds shows you all the available funds and lets you split between them, we chose names that make it clear what the fund is about as compared to what the other funds are about.

If we added a fund that was also in Global Health and Development, then it might make sense to change the current name of the Global Health and Development fund to make it clear how the two funds are distinct from one another.

By the way, if you know of solid thinkers in Global Health and Development funding who are unaffiliated with GiveWell, please feel free to email their names to me at kerry@effectivealtruism.org.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-22T16:48:22.607Z · score: 7 (7 votes) · EA · GW

We have an issue with our CMS which is making the grant information not show up on the website. I will include these grants and all future grants as soon as that is fixed.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T18:09:58.807Z · score: 1 (1 votes) · EA · GW

Unfortunately, we don't have any details around this at the moment. We should have more to share once we devote more time to this question over the summer.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T18:09:08.073Z · score: 2 (2 votes) · EA · GW

We haven't decided this yet, but I can share my current guesses. I expect that we'll be looking for fund managers who have worldviews that are different from the existing fund managers, who are careful thinkers, who are respected in the EA community and our likely pool of donors, and who are willing to devote a sufficient amount of time to manage the fund.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T18:06:26.865Z · score: 2 (2 votes) · EA · GW

We're still working on the process for adding new fund managers. New fund managers will not need to have a relationship with anyone on the team.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T18:03:33.457Z · score: 9 (9 votes) · EA · GW

In our post-donation survey, we ask whether people consider themselves a part of the EA Community. Out of 32 responses, 10 said no, which indicates that around 1/3 of donors are new to EA.

However, donations from this group were generally quite small and some of them indicated that they had donated to places like AMF or GiveDirectly in the past. My overall guess is that the vast majority of money donated so far has been from people who were already familiar with EA.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T17:53:45.349Z · score: 9 (9 votes) · EA · GW

Specifically, what steps is CEA and Nick (a trustee of CEA) going to take to recuse themselves from discussions in the movement building fund?

The current process is that fund managers send grant recommendations to me and Tara and we execute them. Fund managers don't discuss their grant recommendations with us ahead of time and we don't have any influence over what they recommend.

From a legal standpoint, money donated to EA Funds has been donated to CEA. This means that we need board approval for each grant the fund managers recommend. The only cases I see at the moment where we might fail to approve a grant would be cases where a) the grant violates the stated goals of the fund or b) where the grant would not be consistent with CEA's broad charitable mission. I expect both of these cases to be unlikely to occur.

Will CEA apply for money through the fund?

At the moment there isn't really an application process. Any formal system for requesting grants would be set up by Nick without CEA's input or assistance.

That said, CEA is a potential recipient of money donated to the EA Community fund. If we believe that we can make effective use of money in the EA Community fund, we will make our case to Nick for receiving funding. Nick's position as a trustee of CEA means that he has robust access to information about CEA's activities, budget, and funding sources.

Would there be any possibility of inappropriate pro-CEA bias if someone else applied for the fund wanting to do something similar to what CEA is doing or wants to do?

This is certainly possible. Because Nick talks to the other CEA trustees regularly, it is likely that he would know where other organizations overlap with CEA's work and it is likely that he would know what CEA staff think about other organizations. This might cause him to judge other organizations more unfavorably than he would if he were not a CEA trustee.

I think the appropriate outside view is that Nick will be unintentionally biased in CEA's favor in cases where CEA conflicts with other EA community building organizations. My inside view from interacting with Nick is that he is a careful and thoughtful decision-maker who is good at remaining objective.

If you're worried about pro-CEA bias and if you don't have sufficient information about Nick to trust him, then you probably shouldn't donate to the EA Community Fund.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T17:19:14.174Z · score: 5 (5 votes) · EA · GW

I agree that people new to EA could find EA Funds much less persuasive than the previous donation recommendations we used. I expect that we'll find out whether or not this is true as we work on expanding EA Funds outside of EA. If non-EAs don't want to use EA Funds, then we'll probably want to lead with other examples of how people select effective donation options.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T17:12:43.819Z · score: 7 (7 votes) · EA · GW

Nick's recommendation came much sooner after launch than Lewis's, so Nick had much less money available at the time.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T17:11:07.967Z · score: 9 (9 votes) · EA · GW

But if I can't convince them to fund me for some reason and I think they're making a mistake, there are no other donors to appeal to anymore. It's all or nothing.

The upside of centralization is that it helps guard against the unilateralist's curse in funding bad projects. As the number of funders increases, it becomes increasingly easy for bad projects to find someone who will fund them.

That said, I share the concern that EA Funds will become a single point of failure for projects such that if EA Funds doesn't fund you, the project is dead. We probably want some centralization but we also want worldview diversification. I'm not yet sure how to accomplish this. We could create multiple versions of the current funds with different fund managers, but that is likely to be very confusing to most donors. I'm open to ideas on how to help with this concern.

Comment by kerry_vaughan on Update on Effective Altruism Funds · 2017-04-21T16:53:04.599Z · score: 8 (8 votes) · EA · GW

As much as I admire the care that has been put into EA Funds (e.g. the 'Why might you choose not to donate to this fund?' heading for each fund), this sentence came across as 'too easy' for me. To be honest, it made me wonder if the analysis was self-critical enough (I admit to having scanned it) as I'd be surprised if the trusted people you spoke with couldn't think of any significant risks. I also think 'largely positive' reception does not seem like a good indicator.

I agree. This was a mistake on my part. I was implicitly thinking about some of the recent feedback I'd read on Facebook and was not thinking about responses to the initial launch post.

I agree that it's not fair to say that the criticism has been predominantly about website copy. I've changed the relevant section in the post to include links to some of the concerns we received in the launch post.

I'd like to develop some content for the EA Funds website that goes into potential harms of EA Funds that are separate from the question of whether EA Funds is the best option right now for individual donors. Do you have a sense of what concerns seem most compelling or that you'd particularly like to see covered?