EA Meta Fund: we are open to applications

2019-01-05T13:32:03.778Z · score: 24 (12 votes)
Comment by denise_melchin on Should donor lottery winners write reports? · 2018-12-23T11:30:12.827Z · score: 14 (6 votes) · EA · GW

My main worry about donor lottery reports is somewhat different. Usually, people seem to assign some extra credibility to a donor's reasoning if the donations are large. This seems reasonable to me, since donors who donate large sums often have much more experience with making donation decisions. But donor lottery winners have much less expertise than the average person who makes large donations (and only as much as those long-term large donors had when they made a large donation for the first time).

In sum, my concern is that people will trust donor lottery winners' evaluations of donation targets more than they should.

Comment by denise_melchin on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:47:22.256Z · score: 9 (7 votes) · EA · GW

Hello Alex,

We are interested in funding new projects (see also Alex Foster's response above).

I am also concerned about how difficult it is for promising new projects to be discovered. Personally, I am happy to invest some time into evaluating new projects. This is why we have a grant consideration form you can fill out to be considered for a grant. That said, we are capacity-constrained and would not be able to handle 100 applications per month in our current setup.

I have personally considered putting out proposals like you are suggesting, but am concerned about the time investment. First I would like to see how much interest we can gather in different ways.

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:59:31.427Z · score: 2 (1 votes) · EA · GW

To be clear, I meant asking for a reference before an offer is actually made, at the stage when offers are being decided (so that applicants who don't receive offers one way or the other don't 'use up' their references).

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:50:48.736Z · score: 22 (11 votes) · EA · GW

I would strongly advise against making reference checks even earlier in the process. In your particular case, I think it would have been better for both the applicants and the referees if you had done the reference check even later - only after deciding to make an offer (conditional on the references being alright).

Requests for references early in the process have put me off applying for specific roles and would do so again. I'm not sure whether I have unusual preferences, but I would be surprised if I did. References put a burden on the referees which I am only willing to impose in exceptional circumstances, and then only a very limited number of times.

I'm not confident how referees actually feel about giving references. When I had to give references, I found it mildly inconvenient and would certainly have been unhappy if I had had to do it numerous times (with either a call or an email).

But for the costs imposed on applicants, how referees actually feel about giving references is not what matters - what matters is how applicants think they feel about it.

If you ask for references early, you might put off a fraction of your applicant pool you don't want to put off.

Comment by denise_melchin on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-17T16:43:52.050Z · score: 15 (14 votes) · EA · GW

I don’t think unsuccessful applications at organizations that are distantly related to the content you’re criticizing constitute a conflict of interest.

If everybody listed their unsuccessful applications at the start of every EA Forum post, it would take up a lot of reader attention.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-18T07:46:49.527Z · score: 18 (14 votes) · EA · GW

The problem here is that people in the EA movement overtly associate being EA not with 'doing high-impact things' but with 'doing EA-approved work, ideally at an EA org'.

It is not obvious to me how this is fixable. It doesn't help that recommendations change frequently, so paths that were once 'EA-approved' no longer are. As Greg said, people won't want to risk that. It's unfortunate that we punish people for following previous recommendations. This doesn't exactly incentivize people to follow current recommendations either, and it leads to EAs being flakey, which is bad for long-term impact.

I think one thing that would be good for people is to have a better professional & do-gooding network outside of EA. If you are considering entering a profession, you can find dedicated people there and coordinate. You can also find other do-gooding communities. In both cases you can bring the moral motivations and the empirical standards to other aligned people.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-14T20:47:29.883Z · score: 2 (2 votes) · EA · GW

Oh, I agree people will often learn useful things during application processes. I just think the opportunity cost can be very high, especially when processes take months and people have to wait to figure out whether they got into their top options. I also think those costs are especially high for the top applicants - they have to invest the most and might learn the most useful things, but they also lose the most due to higher opportunity costs.

And as you said, people who get filtered out early lose less time and other resources on application processes. But they might still feel negatively about it, especially given the messaging. Maybe their equally rejected friends feel just as bad, which in the future could dissuade other friends - potential top hires among them - from even trying.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T22:37:03.829Z · score: 13 (13 votes) · EA · GW

Personally, I still think it would be very useful to find more talented people and for more people to consider applying to these roles; we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan.

I'm curious what your model of the % value increase in the top hire is when you, say, double current hiring pools. It needs to be high enough to offset the burnt value from people's investments in those application processes. This is not only expensive for individual applicants in the moment, but also carries the long-term risk of demotivating people - and thereby a counterfactually smaller hiring pool in future years.

EA seems to be already at the point where lots of applicants are frustrated and might value drift, thereby dropping out of the hiring pool. I am not keen on making this situation worse. It might cause permanent harm.

Do you agree there's a trade-off here? If so, I'm not sure whether our disagreement comes from different assessments of value increases in the top hire or burnt value in the hiring pool.

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T21:15:34.638Z · score: 1 (2 votes) · EA · GW

I had written the same comment, but then deleted it once I found out that it wasn't quite as true as I thought it was. In Nick's writeup the grants come from different funds according to their purpose. (I had previously thought the most recent round of grants granted money to the exact same organisations.)

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-10T19:08:37.305Z · score: 10 (14 votes) · EA · GW

Echoing David, I'm somewhat sceptical of the responses to "what skills and experience they think the community as a whole will need in the future". Does the answer refer to high-impact opportunities in the world in general, or only to the ones mostly located at EA organisations?

I'm also not sure about the relevance to individual EAs' career decisions. Implying it might be relevant could be outright dangerous if this answer is built on the needs of jobs mostly located at EA organisations. From what I understand, EA organisations have recently seen a sharp increase in not only the number, but also the quality of applications. That's great! But it's pretty unfortunate for people who took the arguments about 'talent constraints' seriously and focused their efforts on finding a job in the EA Community. They are now finding out that they may have poor prospects, even if they are very talented and competent.

There's no shortage of high-impact opportunities outside EA organisations. But the EA Community lacks the knowledge to identify them and the resources to direct its talent there.

There are only a few dozen roles at EA orgs each year, never mind roles that are a good fit for an individual EA's skill set. Even if we only look at the most talented people, there are more capable people than the EA Community is able to allocate among its own organisations. And this will only get worse - the EA Community is growing faster than jobs at EA orgs.

If we don't have the knowledge and connections to allocate all our talent right now, that's unfortunate, but not necessarily a big problem if this is something that is communicated. What is a big problem is to accidentally mislead people into thinking it's best to focus their career efforts mostly on EA orgs, instead of viewing them as a small sliver in a vast option space.

Comment by denise_melchin on Public Opinion about Existential Risk · 2018-08-25T15:52:56.961Z · score: 2 (2 votes) · EA · GW

Cool study! I wish there were more people who went out and just tested assumptions like this. One high level question:

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is highly important if you are trying to reduce existential risk.

Why is this question highly important for reducing extinction risks? This doesn't strike me as obvious. What kind of practical implications does it have if the general public either assigns existential risks either a very high or very low probability?

You could make an argument that this could inform recruiting/funding efforts. Presumably you can do more recruiting and receive more funding for reducing existential risks if there are more people who are concerned about extinction risks.

But I would assume that the percentage of people who consider reducing existential risks to be very important is much more relevant for recruiting and funding than the opinion of the 'general public'.

Though the opinion of those groups has a good chance of being positively correlated, this particular argument doesn't convince me that the opinion of the general public matters that much.

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-10T13:44:00.604Z · score: 1 (1 votes) · EA · GW

Some parts of this sound very similar to me, down to 'left-wing youth political organisation who likes to sing socialist songs' (want to PM me which one it was?).

I have noticed before how much more common activist backgrounds are in German EAs vs. Anglo-Saxon EAs. When I talked about it with other people, the main explanation we could come up with was different base rates of sociopolitical activism in the different countries, but I've never checked the numbers on that.

Comment by denise_melchin on When causes multiply · 2018-08-10T13:21:15.718Z · score: 0 (0 votes) · EA · GW

What you're saying is correct if you're assuming that so far zero resources have been spent on x-risk reduction and global poverty. (Though that isn't quite right either: You can't compute an output elasticity if you have to divide by 0.)

But you are supposed to compare the ideal output elasticity ratio with how resources are currently being spent; those ratios are supposed to be equal locally. So, using your example, if more than a million times as many resources were currently spent on x-risk as on global poverty, global poverty should be prioritised.

When I ran the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I'm not confident in that, nor in how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.
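To make the local optimality condition concrete: for a multiplicative (Cobb-Douglas-style) production function, resources should locally be split in the same ratio as the output elasticities. A minimal sketch, with entirely made-up elasticities and spending splits (none of these numbers come from the comment above):

```python
# Toy sketch of the local optimality condition for multiplicative causes.
# Assumes a Cobb-Douglas form U = x**a * y**b, where x and y are resources
# spent on two causes and a, b are their output elasticities.
# All numbers are hypothetical, purely for illustration.

def marginal_ratio(x, y, a, b):
    """Ratio of marginal utilities (dU/dx) / (dU/dy) for U = x**a * y**b."""
    # dU/dx = a * x**(a-1) * y**b ; dU/dy = b * x**a * y**(b-1)
    # Their ratio simplifies to (a/x) / (b/y).
    return (a / x) / (b / y)

a, b = 0.3, 0.7      # hypothetical output elasticities of the two causes
x, y = 30.0, 70.0    # current resource split, same 3:7 ratio

# At the local optimum, x/y == a/b, so the marginal ratio is ~1:
print(marginal_ratio(x, y, a, b))        # ≈ 1.0, since 30/70 == 0.3/0.7

# If cause x were heavily overfunded relative to its elasticity,
# the marginal ratio drops below 1, i.e. cause y should be prioritised:
print(marginal_ratio(90.0, 10.0, a, b))  # well below 1
```

The same logic drives the point above: if actual spending ratios are far from the elasticity ratio, the underfunded cause wins on the margin.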

Comment by denise_melchin on When causes multiply · 2018-08-08T15:41:59.966Z · score: 0 (0 votes) · EA · GW

I address the points you mention in my response to Carl.

It also doesn't solve issues like Sam Bankman-Fried mentioned where according to some argument one cause area is 44 orders of magnitude more impactful, because even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area.

I don't think this is understanding the issue correctly, but it's hard to say, since I'm a bit confused about what you mean by 'more impactful' in the context of multiplying variables. Could you give an example?

Comment by denise_melchin on When causes multiply · 2018-08-08T15:20:15.411Z · score: 3 (3 votes) · EA · GW

Great comment, thank you. I actually agree with you. Perhaps I should have focussed less on discussing the cause level and more on the intervention level, but I think it is still good to encourage more careful thinking at the cause-wide level even if it won't affect the actual outcome of the decision-making. I think people rarely think about e.g. reducing extinction risks benefiting AMF donations, as you describe it.

Let's hope people will be careful to consider multiplicative effects if we can affect the distribution between key variables.

Comment by denise_melchin on Current Estimates for Likelihood of X-Risk? · 2018-08-07T11:32:35.082Z · score: 3 (3 votes) · EA · GW

Do you have private access to the Good Judgement data? I've thought before about how good it would be to get superforecasters to answer such questions, but didn't know of a way to access the results of previous questions.

(Though there is the question of how much superforecasters' previous track record on short-term questions translates to success on longer-term questions.)

Comment by denise_melchin on Leverage Research: reviewing the basic facts · 2018-08-05T08:47:01.236Z · score: 8 (8 votes) · EA · GW

What are the benefits of this suggestion?

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-05T08:30:59.352Z · score: 9 (9 votes) · EA · GW

Great idea!

When I was around 10, I found the killing and torture of animals for meat and fur atrocious, so this is when I decided to become vegetarian. I have been vegetarian since then.

It wasn't until a few years later that I became more interested in a larger variety of issues, with my pet topics being environmentalism and feminism. I started doing political work when I was 16. I joined a left-wing political group that also focussed on a lot of other issues, like global poverty, democracy and animal rights. It was the first time in my life I met smart and dedicated people.

Apart from that, I spent most of my time reading through all the non-fiction books in the library I could find. I had always wanted to go into academia. I think I started looking forward to doing a PhD when I was around 10.

When I was 17 I found LessWrong. A year later someone who was also interested in LessWrong introduced me to EA and I started talking to the Swiss EA crowd. I had never previously thought about cause prioritisation and was really excited about the concept. This was in 2012.

At the same time, I started a cultural anthropology degree. Given the focus of psychology on WEIRD subjects, it seemed like a great starting point to dismantle misconceptions about humanity. But I was quite disappointed in how the subject was taught, so half a year later, I switched to maths.

It was 2013 by now and I stayed in touch with the EA Community online and visited the UK and Swiss EA Hubs a couple of times. I lived in Germany at the time where no EA Community existed yet. I started organizing a local LW meetup.

I stopped doing political work when I was around 19 because I thought it wasn't "effective" enough. I thoroughly regret this. I had a great network, and I know quite a few people from that time who now have great roles and lots of experience. EA only came around to politics as a worthwhile avenue for doing good years later.

I focussed on finishing my degree, continued to stay in touch with the international EA Community, and started organizing a local EA meetup once there was more interest in EA in Germany. I mostly regret how I spent those years. I wish I had been around more people who were actually trying to do things, which I cannot say about my local EA/LW network. Continuing political work would have been good, or moving to an EA Hub - but the latter would have conflicted with my degree.

I finished my degree last year and moved to London and recently also spent a few months in Berkeley. This has been a large improvement compared to the previous situation.

Comment by denise_melchin on Problems with EA representativeness and how to solve it · 2018-08-05T08:22:27.771Z · score: 1 (1 votes) · EA · GW

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Agreed. Calling X-risk reduction a 'non-near-term-future' cause strikes me as bad terminology.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T19:23:44.849Z · score: 0 (0 votes) · EA · GW

That’s fair.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T16:53:52.163Z · score: 2 (2 votes) · EA · GW

+1. I didn't spell it out this explicitly, but what I found slightly odd about this post is that the bottleneck on more grantmaking is not infrastructure but qualified grantmakers.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T09:41:10.725Z · score: 1 (3 votes) · EA · GW

I agree that collaboration between the various implementations of the different ideas is valuable and that it can be good to help out technically. I'm less convinced of starting a fused approach as an outsider. As Ryan Carey said, what matters most for good work in this field is i) having people good at grantmaking, i.e. making funding decisions, and ii) the actual money.

Thinking about how to ideally handle grantmaking without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, I think it will largely be up to the projects that have i) and ii) whether they wish to collaborate further, though other people might be able to help with technical aspects.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T10:26:43.588Z · score: 4 (8 votes) · EA · GW

All of the ideas you listed are already being worked on by some people. Just yesterday I talked to someone who intends to implement #1 soon, #3 will likely be achieved by handling EA Grants differently in the future, and a couple of people are already working on #2, though there is further room for improvement.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-08T19:55:31.516Z · score: 2 (2 votes) · EA · GW

It is still not clear to me how your model differs from what EAs usually call different levels of meta. What is it adding? Using terms like 'construal level' complicates the matter further.

I'm happy to elaborate more via PM if you like.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T14:53:37.222Z · score: 6 (6 votes) · EA · GW

I think you're making some valuable points here (e.g. making sure information is properly incorporated into the 'higher levels'), but I think your posts would have been a lot better if you had skipped all the complicated modelling and difficult language. It strikes me as superfluous; the main result, it seems to me, is that it makes your post harder to read without adding any content.

Comment by Denise_Melchin on [deleted post] 2018-07-04T10:37:53.309Z

(Denise as mod)

The EA Forum is a place for high-level discussion of EA matters which is often too long or inappropriate for other spaces like Facebook. Ideas that are not yet fully fledged or thoroughly argued are better placed in those other spaces, since the EA Forum gets too crowded otherwise.

Therefore I'll delete your post. You can modify it and repost, or alternatively, post it elsewhere (like the EA Hangout Facebook group).

Edit: All further comments will be deleted.

Comment by denise_melchin on Want to be more productive? · 2018-06-11T13:48:48.648Z · score: 7 (7 votes) · EA · GW

Usually advertising is not welcome, but in this case, Lynette asked (us EA Forum moderators) for permission in advance. Lynette got an EA Grant to do her work and it's complementary to other EA community services.

Comment by denise_melchin on To Grow a Healthy Movement, Pick the Low-Hanging Fruit · 2018-06-06T22:38:18.392Z · score: 15 (15 votes) · EA · GW

I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?

I can imagine there might be very different results depending on the framing.

My take on this is that while many more people than currently do might agree with EA ideas, fewer of them will find the lived practice and community a good fit. I think that's a pretty unfortunate historical lock-in.

Comment by denise_melchin on The counterfactual impact of agents acting in concert · 2018-05-29T13:07:00.215Z · score: 3 (3 votes) · EA · GW

Where are you actually disagreeing with Joey and the conclusions he is drawing?

Joey is arguing that the EA Movement might accidentally overcount its impact by adding each individual actor's counterfactual impact together. You point out a scenario in which various individual actors' actions are necessary for the counterfactual impact to happen, so it is legitimate for each actor to claim the full counterfactual impact. This seems tangential to Joey's point, which is fundamentally about the practical implications of this problem. The questions of who is responsible for the counterfactual impact and who should get credit are being asked because, as the EA Movement, we have to decide how to allocate our resources among the different actors. We also need to be careful not to overcount impact as a movement in our outside communications, and not to get the wrong impression ourselves.
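The overcounting concern can be illustrated with a toy example (made-up numbers and hypothetical actor names, not from either post): when two actors are each necessary for an outcome, each one's counterfactual impact equals the full outcome, so summing them double-counts it.

```python
# Toy illustration of overcounting summed counterfactual impact.
# Hypothetical setup: an outcome worth 100 units happens only if
# BOTH a funder and a founder act. The names and values are made up.

OUTCOME_VALUE = 100

def value_produced(actors_present):
    """Value realised: the outcome only occurs if both actors act."""
    return OUTCOME_VALUE if actors_present == {"funder", "founder"} else 0

value_with_both = value_produced({"funder", "founder"})

# Each actor's counterfactual impact = value with them minus value without them.
funder_cf = value_with_both - value_produced({"founder"})   # 100
founder_cf = value_with_both - value_produced({"funder"})   # 100

# Each claim of 100 is individually legitimate, but the sum is 200 -
# twice the 100 units that were actually produced.
print(funder_cf, founder_cf, funder_cf + founder_cf)
```

This is why "who gets the credit" matters practically: a movement that adds these individually correct counterfactuals together overstates its total impact.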

Comment by Denise_Melchin on [deleted post] 2018-05-29T12:27:00.649Z

I think it would have been better for you to post this as a comment on your own or Joey’s post. Having a discussion in three different places makes the discussion hard to follow. Two are more than enough.

Comment by denise_melchin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-20T23:42:00.190Z · score: 25 (29 votes) · EA · GW

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is its focus on EA orgs. Effective Altruism is, or should be, about doing the most good. Organisations explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent-constrained than funding-constrained implicitly refers to Effective Altruist orgs being more talent- than funding-constrained.

Whether 'doing the most good' in the world is more talent- than funding-constrained is much harder to determine, but it is the actually important question.

If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren't branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell-recommended charities won't be enough to fix it either. Using EA-branded framing isn't special to you - but it can make us lose track of the bigger picture: all the problems that still need to be solved, and all the funding that is still needed for that.

If you want to focus on fixing global poverty, just because EA focuses on GW-recommended charities doesn't mean EtG is the best approach - how about training to be a development economist instead? The world still needs more than ten additional ones. (Edit: But it is not obvious to me whether global poverty as a whole is more talent- or funding-constrained - you'd need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

Comment by denise_melchin on Against prediction markets · 2018-05-14T20:32:20.963Z · score: 0 (0 votes) · EA · GW

Interesting! I am trading off accuracy with outside world manipulation in that argument, since accuracy isn't actually the main end goal I care about (but 'good done in the world' for which better forecasts of the future would be pretty useful).

Comment by denise_melchin on Against prediction markets · 2018-05-13T15:37:09.652Z · score: 2 (2 votes) · EA · GW

I assumed you didn't mean an internal World Bank prediction market, sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets; I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organisations? If yes, what do you think is the minimum staff size for a workplace prediction market to be efficient enough to beat e.g. extremized team forecasting?

Could you link to the accuracy studies you cite showing that prediction markets do better than polling at predicting election results? I don't see any obvious big differences from a quick Google search. The next obvious alternative is asking whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but team superforecasters did consistently better. Putting Nate Silver and his kin in a room therefore seems to have a good chance of outperforming prediction markets.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously much better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did do a bit better than polls or pundits.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:41:39.368Z · score: 2 (2 votes) · EA · GW

I'm arguing that the limit is hard to reach and when it isn't being reached, prediction markets are usually worse than alternatives. I'd be excited about a prediction market like Scott is describing in his post, but we are quite far away from implementing anything like that.

I also find it ironic that Scott's example discusses how hard election prediction markets are to corrupt, which is precisely what happened in the Intrade example above.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:36:41.303Z · score: 8 (8 votes) · EA · GW

I'm arguing against prediction markets being the best alternative in many situations contemplated by EAs, which is something I have heard said or implied by a lot of EAs in conversations I've had with them. Most notably, I think a lot of EAs are unaware of the arguments I make in the post and I wanted to have them written up for future reference.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:33:06.773Z · score: 4 (4 votes) · EA · GW

I don't think prediction markets are overused by EAs, I think they are advocated for too much (both for internal lower stakes situations as well as for solving problems in the world) when they are not the best alternative for a given problem.

One problem with prediction markets is that they are a hassle to implement, which is why people don't actually want to implement them. But since they are often the first alternative to the status quo suggested within EA, better solutions for lower-stakes situations like office forecasts - which might have a chance of actually getting implemented - don't even get discussed.

I don't think an office prediction market would be bad or useless once you ignore opportunity costs, just worse than the alternatives. To be fair, I'm somewhat more optimistic about office prediction markets in large workplaces like Google, but not for the small EA orgs we have. In those, they would more likely take up a bunch of work without actually improving the situation much.

How large do you think a market needs to be for it to be efficient enough to beat, say, asking Tetlock for the names of the top 30 superforecasters and hiring them to assess the problem? Given that political betting, despite being pretty large, ran into such big trouble as described in the post, I'm afraid a sufficiently efficient prediction market would take a lot of work to implement. I agree with you that the added incentive structure would be nice, which might well make up for a lack of efficiency.

But again, I'm still optimistic about sufficiently large stock market like prediction markets.

Comment by denise_melchin on Against prediction markets · 2018-05-12T17:02:46.554Z · score: 8 (8 votes) · EA · GW

I agree with you prediction markets are in many cases better than the status quo. I'm not comparing prediction markets to perfection but to their alternatives (like extremizing team forecasts). I'm also only arguing that prediction markets are overrated within EA, not in the wider world. I'd assume they're underrated outside of libertarian-friendly circles.

All in all, for which problems prediction markets do better than which alternatives is an empirical question, which I state in the post:

How stringently the conditions for market efficiency need to be met for a market to actually be efficient is an empirical question. How efficient a prediction market needs to be to give better forecasts than the alternatives is another one.

Do you disagree that in the specific examples I have given (an office prediction market about the timeline of a project, an election prediction market) having a prediction market is worse than the alternatives?

It would be good if you could give concrete examples where you expect prediction markets to be the best alternative.

Prediction markets are a neat concept and are often regarded highly in the EA sphere. I think they are often not the best alternative for a given problem and are insufficiently compared to those alternatives within EA. Perhaps that's because they are such a neat concept - "let's just do a prediction market!" sounds a lot more exciting than discussing a problem in a team and extremizing the team's forecast, even though a prediction market would be a lot more work.

Comment by denise_melchin on Concrete Ways to Reduce Risks of Value Drift · 2018-05-11T18:29:53.355Z · score: 6 (6 votes) · EA · GW

More could be done about value drift on the structural level; e.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> drop out).

Doing effective altruistic things ≠ Doing Effective Altruism™ things

All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can't all work at the main EA orgs.

There are lots of highly impactful opportunities out there that aren't branded as EA - check out the career profiles on 80,000 Hours for reference: academia, politics, tech startups, earning to give in random places, etc.

We should be interested in having as high an impact as possible and not in 'performing EA-ness'.

I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an 'EA approved' workplace like DeepMind or Jane Street) - or nothing. That's counterproductive and sad.

A potential explanation: it's difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and 'status'.

As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction is working at EA orgs.

Comment by denise_melchin on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-09T21:18:25.169Z · score: 9 (9 votes) · EA · GW

Not sure I agree with this. Certainly there is less focus on donating huge sums of money, but that may also be explained by the shift towards EA orgs now often recommending direct work. But I think the EA community as a whole now focuses less on attracting huge numbers of people and more on keeping existing members engaged and dedicated and influencing their career choices (if I remember correctly, the strategy write-ups from both CEA and EAF reflect this).

For instance, the recent strategy write-up by CEA mentions dedication as an important factor:

We can think of the amount of good someone can be expected to do as being the product of three factors (in a mathematical sense):

1. Resources: The extent of the resources (money, useful labor, etc.) they have to offer;
2. Dedication: The proportion of these resources that are devoted to helping;
3. Realization: How efficiently the resources devoted to helping are used.

(top level comment to not make the thread even more messy)

When we talk about dedication and what that looks like in people, I think we can have very different images in mind. We could think of a 'dedicated EA' and think of two different archetypes (of course, reality is more messy than that and people might actually be both):

Person A talks about dedicating their life to having a high impact, about the willingness for self-sacrifice, about optimising everything for this one goal. They're very enthusiastic, think about all their options to do good and talk about nothing but EA.

Person B is careful and measured. They think about how they can use their career and other resources to have a very high impact, and about the long road to reaching a highly impactful position at a later point in their career. They want to make sure they get there by maintaining a proper work-life balance along the way.

When I say (and I think this is true for Joey as well) that EA emphasises dedication much less, I think about dedication in the way that person A embodies. I think CEA in their material think about dedication more in the way of Person B.

EA was much smaller and less professional in the past. That also meant that the 'highest status' positions were much more easily accessible. When I met Joey in 2013, he was interning at 80,000 Hours and then started his own project with Charity Science, and people thought highly of him for that. It is no longer easy to intern or volunteer at high-profile EA orgs ('management capacity constraints'). Easily accessible positions still exist, but due to the professionalisation and growth of the EA movement, they're less 'high status' and therefore less appealing.

The type of people like Joey who just went out and started their own projects they were enthusiastic about are also, relatively speaking (compared to the now 'high status' EA endeavours), less likely to get funding today. I think this might be where part of the conflict about funding constraints, and about whether small student-y projects are worth funding, is coming from: do we want to support an EA culture where we encourage young people to do random EA projects, or do we want to foster a professional environment?

I think the move towards professionalising EA has been correct, but we should be aware of the costs it has imposed on people who liked the youthful, dedicated person-A vibe EA had in the past. One alternative name once proposed for EA was 'super hardcore do-gooder' - unthinkable today.

Comment by denise_melchin on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-06T09:26:06.951Z · score: 9 (9 votes) · EA · GW

Another factor leading to dedication being emphasized less might be that people are less motivated to be dedicated these days. The growth of the movement and the funding available have resulted in an individual’s EA contributions mattering far less than they used to.

The increased concern about downside risk has also made it much harder to 'use up' your dedication. A few years ago you could at least always do some outreach - now it's commonly considered far less clear that its sign is positive.

Comment by denise_melchin on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-06T09:15:30.883Z · score: 17 (17 votes) · EA · GW

I’m curious what kind of experiences people in the dedicated group actually had that put them off - I'd appreciate it if you could elaborate on that.

I share the impression that dedication is less encouraged in EA these days than five years ago. I’m also personally very disappointed by that since high dedication felt like a major asset I could bring to EA. Now I feel more like it doesn’t matter which is discouraging.

My guess is that this is because high dedication is a trait of youth movements, and the age of the median and, perhaps more importantly, of the most influential EAs has gone up in the meantime. EA has lost its youth-movement-y vibe.

I’m also interested in whether the other movements you’re comparing EA to are youth movements.

Comment by denise_melchin on Giving Later in Life: Giving More · 2018-05-03T20:11:09.896Z · score: 4 (4 votes) · EA · GW

I think this claim is often true if someone wants to stay in the same location - however, that is very expensive for someone’s career.

Considering EA’s focus on ‘having a good career’, for which the willingness to move is important, buying a property seems much less likely to be a good call for EAs than for the average person - unless you’re unwilling to move whenever a better opportunity arises anyway, of course.

Comment by denise_melchin on Should there be an EA crowdfunding platform? · 2018-05-02T22:58:34.008Z · score: 1 (1 votes) · EA · GW

We do not disagree much then! The difference seems to come down to what the funding situation actually is and not how it should be.

I see a lot more than a couple of funders per cause area - why are you not counting all the EtGers? Most projects don’t need access to large funders.

Comment by denise_melchin on Should there be an EA crowdfunding platform? · 2018-05-02T10:24:25.699Z · score: 1 (3 votes) · EA · GW

I don’t think of having a (very) limited pool of funders who judge your project as such a negative thing. As it’s been pointed out before, evaluating projects is very time intensive.

You’re also implicitly assuming that there’s little information in funders’ rejections. I think if you have been rejected by 3+ funders, and hopefully got a good sense of why, you should seriously reconsider your project.

Otherwise you might fall prey to the unilateralist’s curse: most people think your project is not worth funding, possibly because it has some risk of causing harm (either directly, or indirectly by stopping others from taking up a similar space), but you only need one person who is not dissuaded by that.

Comment by denise_melchin on Empirical data on value drift · 2018-04-27T13:33:40.134Z · score: 7 (7 votes) · EA · GW

I still think you're focussing too much on changed values as opposed to implementation difficulties (I consider lack of motivation an example of those).

With short and long term versions of both and with it being pretty likely that “value change” would lead to “action change” over time

I think it's actually usually the other way around - action change comes first, and then value change is a result of that. This also seems to be true for your hypothetical Alice in your comment above. AFAIK it's a known psychology result that people don't really base their actions on their values, but instead derive their values from their actions.

All in all, I consider the ability to have a high impact EA-wise much more related to someone's environment than to someone's 'true self with the right values'. I would therefore frame the focus on how to get people to have a high impact somewhat differently: How can we set up supportive environments so people are able to execute the necessary actions for having a high impact?

And not how can we lock in people so they don't change their values - though the actual answers to those questions might not be that different.

Comment by denise_melchin on Empirical data on value drift · 2018-04-24T17:20:55.294Z · score: 15 (17 votes) · EA · GW

Thanks for collecting the data Joey! Really useful.

i) I'm not sure whether 'value drift' is a good term to describe loss of motivation for altruistic actions. I'm also not sure whether the data you collected is a good proxy for loss of motivation for altruistic actions.

To me, the term value drift implies that the person's values are less important to them than they used to be, as opposed to being harder to implement. Your data is consistent with both interpretations. I also wouldn't say that someone who still cares as much about their values, but finds it harder to stay motivated, has 'value drifted'.

If we observe someone moving to a different location and then contributing less EA-wise, this can have multiple causes. Maybe their values actually changed, maybe they lost motivation, or maybe EA contributions have just become harder because there's less EA information and there are fewer people around to do projects with.

As the EA community, we should treat people who share EA's goals and values but find it hard to act on them very differently from people who simply don't share our goals and values anymore. Those groups require different responses.

ii) This is somewhat tangential to the post, but since having kids came up as a potential reason for value drifting, I'd like to mention how unfortunate it can be for people who have had kids if other EAs assume they have value drifted as a result.

I've had a lot of trouble within the last year in EA spaces after having a baby. EAs around me constantly assume that I suddenly don't care anymore about having a high impact and might just want to be a stay-at-home parent. This is incredibly insulting and hurtful to me, especially when it comes from people who have known me for a long time and should know this would completely go against my (EA and feminist) values. Particularly bitter is how gendered this assumption is: my kids' dad (also an EA) never gets asked whether he wants to be a stay-at-home parent now.

I really had expected the EA community to be better at this. It also makes me wonder how many opportunities to contribute I might have missed out on. The EA community often relays information about opportunities only informally; if someone is assumed not to be interested in contributing, information about opportunities is much less likely to reach them. Thus the belief that EAs will contribute much less once they have kids might turn into a self-fulfilling prophecy.

Comment by denise_melchin on The person-affecting value of existential risk reduction · 2018-04-13T17:07:28.803Z · score: 9 (9 votes) · EA · GW

I've been saying to people that I wish there were a post series about all the practical implications of different philosophical positions (I often have the unflattering impression that philosophy EAs like to argue about them just because it's their favourite nerd topic - not because of their practical relevance).

So special thanks to you for starting it! ;-)

Comment by denise_melchin on Comparative advantage in the talent market · 2018-04-12T18:58:00.066Z · score: 5 (5 votes) · EA · GW

I completely agree. I considered making the point in the post itself, but I didn't because I'm not sure about the practical implications myself!

Comment by denise_melchin on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-27T17:25:39.964Z · score: 1 (1 votes) · EA · GW

It is still unclear to me whether statutory holidays are included in the 25 paid days off or come in addition to them.

Comment by denise_melchin on Causal Networks Model I: Introduction & User Guide · 2018-02-09T00:55:44.810Z · score: 0 (0 votes) · EA · GW

Thank you for your comment. I agree our model is only a very basic version and it would be interesting to see it developed further. (Though there are currently no further plans for development that I know of.)

This model was created with about 14 weeks of full-time-equivalent (FTE) work. I expect a project like the one you're proposing to take much longer.