Comment by denise_melchin on Request for comments: EA Projects evaluation platform · 2019-03-21T18:22:44.095Z · score: 12 (4 votes) · EA · GW

(I still feel like I don’t really understand where you’re coming from.)

I am concerned that your model of how idea proposals get evaluated (and then plausibly funded) is a bit off. From the original post:

hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.

You are missing one major category here: projects which are simply bad because they have approximately zero impact, but aren't particularly risky. I think this category is the largest of the four.

People who have experience evaluating projects can often tell quite quickly which projects have a chance of working and which don't (which is why Oli suggested 15 minutes for the initial investigation above). It sounds to me a bit like your model of the ideas which get proposed is that most of them are pretty valuable. I don't think this is the case.

When funders give general opinions on what should or should not get started or how you value or not value things, again, I think you are at greater risk of having too much of an influence on the community. I do not believe the knowledge of the funders is strictly better than the knowledge of grant applicants.

I am confused by this. Knowledge of what?

The role of funders/evaluators is to evaluate projects (and maybe propose some for others to do). To do this well they need to have a good mental map of what kind of projects have worked or not worked in the past, what good and bad signs are, ideally from an explicit feedback loop from funding projects and then seeing how the projects turn out. The role of grant applicants is to come up with some ideas they could execute. Do you disagree with this?

Comment by denise_melchin on Request for comments: EA Projects evaluation platform · 2019-03-21T15:10:11.348Z · score: 10 (5 votes) · EA · GW
I think it is much harder to give open feedback if it is closely tied with funding. Feedback from funders can easily have too much influence on people, and should be very careful and nuanced, as it comes from a position of power. I would expect adding financial incentives can easily be detrimental to the process. (For a self-referential example, just look at this discussion: do you think the fact that Oli dislikes my proposal and suggests LTF can back something different with $20k will not create at least some unconscious incentives?)

I'm a bit confused here. I think I disagree with you, but maybe I am not understanding you correctly.

I consider it important for the accuracy of feedback that the people giving it have 'skin in the game'. Most people don't enjoy discouraging others they have social ties with. Reviewers without sufficient skin in the game might be tempted not to be as openly negative about proposals as they should be.

Funders, by contrast, can give you a strong signal - one which is unfortunately somewhat binary and lacks nuance. But someone being willing to fund something or not is a much stronger signal of the value of a proposal than comments from friends on a Google Doc. This is especially true if people proposing ideas don't take into account how hard it is to discourage people and don't interpret feedback in that light.

Comment by denise_melchin on A guide to improving your odds at getting a job in EA · 2019-03-19T15:42:27.490Z · score: 18 (11 votes) · EA · GW
EA jobs, unlike many other jobs, do not compare very well to other kinds of work experience,

I'm pretty sceptical of this claim (not just made here, but also in many other posts). I think it might be true for some roles, like the Research Analyst positions at the Open Philanthropy Project, which combine academic research with grantmaking - a combination that is unusual in the wider job market.

But I don't see why e.g. operations at an average EA organisation would not compare well to other kinds of work experience in operations. I'm happy to hear counterarguments to this.

The underlying crux here might be that I'm generally wary of any claims of 'EA exceptionalism'.

Comment by denise_melchin on A guide to improving your odds at getting a job in EA · 2019-03-19T15:37:26.298Z · score: 24 (14 votes) · EA · GW

This list seems roughly reasonable. What stands out to me most is that your suggestions are extremely time-consuming, especially in aggregate. The hours applicants to jobs at EA organisations spend on timed work tests and honing their CVs pale in comparison.

I also think your suggestions are applicable to some other fields which might be of interest to people who are trying to have a high impact. It is not unusual for desirable roles in e.g. international development to require hundreds to thousands of hours of investment.

However, if people are investing those thousands of hours into learning about EA, they will not spend them investing in international development or nuclear security.

While people following your suggestions might benefit individually, as a movement we and the world might be worse off.

Comment by denise_melchin on EA is vetting-constrained · 2019-03-13T13:33:35.392Z · score: 52 (15 votes) · EA · GW

(Funding manager of the EA Meta Fund here)

For our last distribution, we ran an application round for the first time. I conducted the initial investigation, which I communicated to the committee. Previous grantees all came through our personal networks.

Things we learnt during our application round:

i) We got significantly fewer applications than we expected and would have been able to spend more time vetting projects - vetting capacity was not a bottleneck. After some investigation through personal outreach, I have the impression that not many projects are being started in the Meta space (this is different for other funding spaces).

ii) We were able to fund a decent fraction of the applications we received (25%?). For about half of the applications I was reasonably confident that they did not meet the bar, so I did not investigate further. The remaining quarter felt borderline to me; I often still investigated, but the results confirmed my initial impression.

My current impression of the Meta space is that we are not vetting-constrained, but rather constrained on mentoring and pro-active outreach. One thing we want to do in the future is to run a request-for-proposals process.

Comment by denise_melchin on SHOW: A framework for shaping your talent for direct work · 2019-03-13T08:37:38.458Z · score: 11 (6 votes) · EA · GW

This isn't really comparing like with like, however - in one case you're doing cold outreach and in the others there are established application processes. It might make more sense to compare the demand for researcher positions with e.g. Toby Ord's Research Assistant position.

But if your point is that people should be more willing to do cold outreach for research assistant positions like you did, that seems fair.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T14:02:35.888Z · score: 11 (8 votes) · EA · GW
many candidates treated the process like a 2-way application the whole way through. This threw off my intuitions and normally I would have dropped all candidates who weren't signalling they were specifically very excited about my role. First call excluded.

I wonder whether this is just a result of people on both sides of the application process knowing each other in a social context.

If the candidate knows they will interact with people making the hiring decision in the future, they might not want them to feel bad about rejecting them. The people making the hiring decision might arguably feel less bad about not hiring someone if the candidate wasn't that excited. Lack of excitement also allows the candidate to save face if they get rejected, which also only matters because the candidate and the person making the hiring decision might interact socially in the future.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T14:48:36.248Z · score: 52 (33 votes) · EA · GW

I don't really agree with your second and third point. Seeing this problem and responding by trying to create more 'capital letter EA jobs' strikes me as continuing to pursue a failing strategy.

What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations - the community is growing faster* than the organisations, and those numbers are unlikely to add up in the mid term.

Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there - we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.

It's catastrophic that new and long-term EAs now consider their main EA activity to be applying for the same few jobs instead of trying to increase their donations or investing in promising non-'capital letter EA' careers.

But this is hardly surprising given past messaging. The only reason EA organisations can get away with having very expensive hiring rounds for the applicants is because there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.

*Though this might be slowing down somewhat, perhaps because of this 'being an EA is applying unsuccessfully for the same few jobs' phenomenon.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T12:35:41.882Z · score: 47 (29 votes) · EA · GW

I really appreciate you writing this. You are not the first person to consider doing so and I applaud you for actually doing it.

Comment by denise_melchin on EA grants available to individuals (crosspost from LessWrong) · 2019-02-08T15:16:54.133Z · score: 15 (8 votes) · EA · GW

Hi Jameson,

I'm a fund manager for the EA Meta Fund. Your assessment in your post is incorrect - we are also open to individual grant applications, though applications for the February distribution have now closed. I'd expect them to open again in a couple of months.

I'm curious how you got the impression that we aren't open to applications. It's important to us that we are able to reach all interested individuals so any insight into where we may have failed to communicate that is useful to us.

EA Meta Fund: we are open to applications

2019-01-05T13:32:03.778Z · score: 26 (13 votes)
Comment by denise_melchin on Should donor lottery winners write reports? · 2018-12-23T11:30:12.827Z · score: 14 (6 votes) · EA · GW

My main worry about donor lottery reports is somewhat different. Usually, people seem to assign some extra credibility to a donor's reasoning if the donations are large. This seems reasonable to me, since donors who donate large sums often have a lot more experience with making donation decisions. But donor lottery winners have much less expertise than the average person who makes large donations (and only as much as those long-term large donors had when they made a large donation for the first time).

In sum, my concern is that people will trust donor lottery winners' evaluations of donation targets more than they should.

Comment by denise_melchin on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:47:22.256Z · score: 9 (7 votes) · EA · GW

Hello Alex,

We are interested in funding new projects (see also Alex Foster's response above).

I am also concerned about the difficulty of discovering promising new projects. Personally, I am happy to invest some time into evaluating new projects, which is why we have a grant consideration form you can fill out to be considered for a grant. That said, we are capacity-constrained and would not be able to handle 100 applications per month in our current setup.

I have personally considered putting out proposals like you are suggesting, but am concerned about the time investment. First I would like to see how much interest we can gather in different ways.

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:59:31.427Z · score: 2 (1 votes) · EA · GW

To be clear, I meant asking for a reference before an offer is actually made, at the stage when offers are being decided (so that applicants who don't receive offers one way or the other don't 'use up' their references).

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:50:48.736Z · score: 22 (11 votes) · EA · GW

I would strongly advise against making reference checks even earlier in the process. In your particular case, I think it would have been better for both the applicants and the referees if you had done the reference check even later - only after deciding to make an offer (conditional on the references being alright).

Requests for references early in the process have put me off applying for specific roles, and would again. I'm not sure whether I have unusual preferences, but I would be surprised if I did. References put a burden on the referees which I am only willing to impose in exceptional circumstances, and even then only a very limited number of times.

I'm not confident how referees actually feel about giving references. When I had to give them, I found it mildly inconvenient and would certainly have been unhappy if I had had to do it numerous times (with either a call or an email).

But for the costs imposed on applicants, it is not important how the referees actually feel about giving references - what matters is how applicants think they feel about it.

If you ask for references early, you might put off a fraction of your applicant pool you don't want to put off.

Comment by denise_melchin on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-17T16:43:52.050Z · score: 15 (14 votes) · EA · GW

I don’t think unsuccessful applications at organizations that are distantly related to the content you’re criticizing constitute a conflict of interest.

If everybody listed their unsuccessful applications at the start of every EA Forum post, it would take up a lot of reader attention.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-18T07:46:49.527Z · score: 24 (16 votes) · EA · GW

The problem here is that people in the EA movement overtly associate being EA not with 'doing high-impact things' but with 'doing EA-approved work, ideally at an EA org'.

It is not obvious to me how this is fixable. It doesn't help that recommendations change frequently, so paths that were once 'EA-approved' aren't any longer. As Greg said, people won't want to risk that. It's unfortunate that we punish people for following previous recommendations. This also doesn't exactly incentivize people to follow current recommendations, and it leads to EAs being flaky, which is bad for long-term impact.

I think one thing that would be good for people is to have a better professional & do-gooding network outside of EA. If you are considering entering a profession, you can find dedicated people there and coordinate. You can also find other do-gooding communities. In both cases you can bring the moral motivations and the empirical standards to other aligned people.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-14T20:47:29.883Z · score: 2 (2 votes) · EA · GW

Oh, I agree people will often learn useful things during application processes. I just think the opportunity cost can be very high, especially when processes take months and people have to wait to figure out whether they got into their top options. I also think those costs are especially high for the top applicants - they have to invest the most and might learn the most useful things, but they also lose the most due to higher opportunity costs.

And as you said, people who get filtered out early lose less time and other resources on application processes. But they might still feel negatively about it, especially given the messaging. Maybe their equally rejected friends feel just as bad, which in the future could dissuade other friends, who might be potential top hires, from even trying.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T22:37:03.829Z · score: 13 (13 votes) · EA · GW

Personally, I still think it would be very useful to find more talented people and for more people to consider applying to these roles; we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan.

I'm curious what your model of the percentage value increase in the top hire is when you, say, double current hiring pools. It needs to be high enough to offset the burnt value from people's investments in those application processes. This is not only expensive for individual applicants in the moment, but also carries the long-term risk of demotivating people - and thereby leading to a counterfactually smaller hiring pool in future years.

EA seems to be already at the point where lots of applicants are frustrated and might value drift, thereby dropping out of the hiring pool. I am not keen on making this situation worse. It might cause permanent harm.

Do you agree there's a trade-off here? If so, I'm not sure whether our disagreement comes from different assessments of value increases in the top hire or burnt value in the hiring pool.

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T21:15:34.638Z · score: 1 (2 votes) · EA · GW

I had written the same comment, but then deleted it once I found out that it wasn't quite as true as I thought it was. In Nick's writeup the grants come from different funds according to their purpose. (I had previously thought the most recent round of grants granted money to the exact same organisations.)

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-10T19:08:37.305Z · score: 10 (14 votes) · EA · GW

Echoing David, I'm somewhat sceptical of the responses to "what skills and experience they think the community as a whole will need in the future". Does the answer refer to high-impact opportunities in the world in general, or only to the ones mostly located at EA organisations?

I'm also not sure about the relevance to individual EAs' career decisions. I think implying it is relevant might be outright dangerous if this answer is built on the needs of jobs that are mostly located at EA organisations. From what I understand, EA organisations have recently seen a sharp increase in not only the number but also the quality of applications. That's great! But it's pretty unfortunate for people who took the arguments about 'talent constraints' seriously and focused their efforts on finding a job in the EA Community. They are now finding out that they may have few prospects, even if they are very talented and competent.

There's no shortage of high impact opportunities outside EA organisations. But the EA Community lacks the knowledge to identify them and the resources to direct its talent there.

There are only a few dozen roles at EA orgs each year, never mind roles that are a good fit for an individual EA's skill set. Even if we only look at the most talented people, there are more capable people than the EA Community is able to allocate among its own organisations. And this will only get worse - the EA Community is growing faster than jobs at EA orgs.

If we don't have the knowledge and connections to allocate all our talent right now, that's unfortunate, but not necessarily a big problem if this is something that is communicated. What is a big problem is to accidentally mislead people into thinking it's best to focus their career efforts mostly on EA orgs, instead of viewing them as a small sliver in a vast option space.

Comment by denise_melchin on Public Opinion about Existential Risk · 2018-08-25T15:52:56.961Z · score: 2 (2 votes) · EA · GW

Cool study! I wish there were more people who went out and just tested assumptions like this. One high level question:

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is highly important if you are trying to reduce existential risk.

Why is this question highly important for reducing extinction risks? This doesn't strike me as obvious. What kind of practical implications does it have if the general public assigns existential risks either a very high or a very low probability?

You could make an argument that this could inform recruiting/funding efforts. Presumably you can do more recruiting and receive more funding for reducing existential risks if there are more people who are concerned about extinction risks.

But I would assume the percentage of people who consider reducing existential risks to be very important to be much more relevant for recruiting and funding than the opinion of the 'general public'.

Though the opinion of those groups has a good chance of being positively correlated, this particular argument doesn't convince me that the opinion of the general public matters that much.

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-10T13:44:00.604Z · score: 1 (1 votes) · EA · GW

Some parts of this sound very similar to me, down to 'left-wing youth political organisation who likes to sing socialist songs' (want to PM me which one it was?).

I have noticed before how much more common activist backgrounds are in German EAs vs. Anglo-Saxon EAs. When I talked about it with other people, the main explanation we could come up with was different base rates of sociopolitical activism in the different countries, but I've never checked the numbers on that.

Comment by denise_melchin on When causes multiply · 2018-08-10T13:21:15.718Z · score: 0 (0 votes) · EA · GW

What you're saying is correct if you assume that so far zero resources have been spent on x-risk reduction and global poverty. (Though that isn't quite right either: you can't compute an output elasticity if you have to divide by zero.)

But you are supposed to compare the ideal output elasticity ratio with how resources are currently being spent; those ratios are supposed to be equal locally. So using your example, if currently more than a million times as many resources were spent on x-risk as on global poverty, global poverty should be prioritised.

When I was running the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I'm not confident, and also not confident how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.
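The "equal ratios" condition can be illustrated with a quick brute-force sketch (my own toy example - the impact function and the elasticity numbers are made up, not taken from this thread): for a multiplicative impact function U = x^a * y^b with a fixed budget, the optimal allocation gives each cause a resource share proportional to its output elasticity, i.e. a fraction a/(a+b) goes to the first cause.

```python
# Toy illustration (hypothetical numbers): with multiplicative impact
# U = x^a * y^b and budget x + y = B, the optimum puts the fraction
# a / (a + b) of the budget into the first cause.

def impact(x, y, a, b):
    return (x ** a) * (y ** b)

def best_split(budget, a, b, steps=10_000):
    # Brute-force search over ways of splitting the budget between causes.
    best = max(range(1, steps),
               key=lambda i: impact(budget * i / steps,
                                    budget * (1 - i / steps),
                                    a, b))
    return best / steps

# Elasticities 0.3 and 0.1 -> optimal share 0.3 / (0.3 + 0.1) = 0.75.
share = best_split(100.0, 0.3, 0.1)
print(round(share, 2))  # prints 0.75
```

This is just the standard Cobb-Douglas result, sketched numerically; the point of the comment above is that the *observed* resource split should be compared against this ideal ratio.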

Comment by denise_melchin on When causes multiply · 2018-08-08T15:41:59.966Z · score: 0 (0 votes) · EA · GW

I address the points you mention in my response to Carl.

It also doesn't solve issues like Sam Bankman-Fried mentioned where according to some argument one cause area is 44 orders of magnitude more impactful, because even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area.

I don't think this is understanding the issue correctly, but it's hard to say since I am a bit confused what you mean by 'more impactful' in the context of multiplying variables. Could you give an example?

Comment by denise_melchin on When causes multiply · 2018-08-08T15:20:15.411Z · score: 3 (3 votes) · EA · GW

Great comment, thank you. I actually agree with you. Perhaps I should have focussed less on discussing the cause level and more on the intervention level, but I think it is still good to encourage more careful thinking at the cause-wide level, even if it won't affect the actual outcome of the decision-making. I think people rarely think about e.g. reducing extinction risks benefiting AMF donations as you describe it.

Let's hope people will be careful to consider multiplicative effects if we can affect the distribution between key variables.

Comment by denise_melchin on Current Estimates for Likelihood of X-Risk? · 2018-08-07T11:32:35.082Z · score: 3 (3 votes) · EA · GW

Do you have private access to the Good Judgement data? I've thought before about how it would be good to get superforecasters to answer such questions, but didn't know of a way to access the results of previous questions.

(Though there is the question of how much superforecasters' previous track record on short-term questions translates to success on longer-term questions.)

When causes multiply

2018-08-06T15:51:45.619Z · score: 17 (17 votes)
Comment by denise_melchin on Leverage Research: reviewing the basic facts · 2018-08-05T08:47:01.236Z · score: 8 (8 votes) · EA · GW

What are the benefits of this suggestion?

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-05T08:30:59.352Z · score: 9 (9 votes) · EA · GW

Great idea!

When I was around 10, I found the killing and torture of animals for meat and fur atrocious, so I decided to become vegetarian, and I have been vegetarian ever since.

It wasn't until a few years later that I became more interested in a larger variety of issues, with my pet topics being environmentalism and feminism. I started doing political work when I was 16. I joined a left-wing political group that also focussed on a lot of other issues, like global poverty, democracy and animal rights. It was the first time in my life I met smart and dedicated people.

Apart from that, I spent most of my time reading through all the non-fiction books in the library I could find. I had always wanted to go into academia. I think I started looking forward to doing a PhD when I was around 10.

When I was 17 I found LessWrong. A year later someone who was also interested in LessWrong introduced me to EA and I started talking to the Swiss EA crowd. I had never previously thought about cause prioritisation and was really excited about the concept. This was in 2012.

At the same time, I started a cultural anthropology degree. Given the focus of psychology on WEIRD subjects, it seemed like a great starting point to dismantle misconceptions about humanity. But I was quite disappointed in how the subject was taught, so half a year later, I switched to maths.

It was 2013 by now and I stayed in touch with the EA Community online and visited the UK and Swiss EA Hubs a couple of times. I lived in Germany at the time where no EA Community existed yet. I started organizing a local LW meetup.

I stopped doing political work when I was around 19 because I thought it wasn't "effective" enough. I thoroughly regret this. I had a great network and know quite a few people who have great roles now and lots of experience. EA only came around to politics as a worthwhile avenue to doing good years later.

I focussed on finishing my degree, continued to stay in touch with the international EA Community, and started organizing a local EA meetup once there was more interest in EA in Germany. I now mostly regret how I spent those years. I wish I had been around more people who were actually trying to do things, which I cannot say about my local EA/LW network. Continuing political work would have been good, or moving to an EA Hub - but the latter would have conflicted with my degree.

I finished my degree last year and moved to London and recently also spent a few months in Berkeley. This has been a large improvement compared to the previous situation.

Comment by denise_melchin on Problems with EA representativeness and how to solve it · 2018-08-05T08:22:27.771Z · score: 1 (1 votes) · EA · GW

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Agreed. Calling X-risk reduction a 'non-near-term-future' cause strikes me as bad terminology.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T19:23:44.849Z · score: 0 (0 votes) · EA · GW

That’s fair.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T16:53:52.163Z · score: 3 (3 votes) · EA · GW

+1. I didn't spell it out this explicitly, but what I found slightly odd about this post is that the bottleneck on more grantmaking is not infrastructure, but qualified grantmakers.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T09:41:10.725Z · score: 1 (3 votes) · EA · GW

I agree that collaboration between the various implementations of the different ideas is valuable, and that it can be good to help out technically. I'm less convinced of starting a fused approach as an outsider. As Ryan Carey said, the most important things for good work in this field are i) having people good at grantmaking, i.e. making funding decisions, and ii) the actual money.

Thinking about how best to handle grantmaking without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, I think it will largely be up to the projects that have i) and ii) whether they wish to collaborate further, though other people might be able to help with technical aspects.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T10:26:43.588Z · score: 4 (8 votes) · EA · GW

All of your ideas listed are already being worked on by some people. I talked just yesterday to someone who is intending to implement #1 soon, #3 will likely be achieved by handling EA Grants differently in the future, and there are already a couple of people working on #2, though there is further room for improvement.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-08T19:55:31.516Z · score: 2 (2 votes) · EA · GW

It is still not clear to me how your model is different from what EAs usually call different levels of meta. What is it adding? Using words like 'construal level' complicates the matter further.

I'm happy to elaborate more via PM if you like.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T14:53:37.222Z · score: 6 (6 votes) · EA · GW

I think you're making some valuable points here (e.g. making sure information is properly implemented into the 'higher levels'), but I think your posts would have been a lot better if you had skipped all the complicated modelling and difficult language. It strikes me as superfluous; the main result seems to be that it makes your post harder to read without adding any content.

Comment by Denise_Melchin on [deleted post] 2018-07-04T10:37:53.309Z

(Denise as mod)

The EA Forum is a place for high-level discussion on EA matters which is often too long for or inappropriate in other spaces like Facebook. Ideas that are not yet fully fledged or thoroughly argued are better placed in those other spaces, since the EA Forum gets too crowded otherwise.

Therefore I'll delete your post. You can modify it and repost, or alternatively, post it elsewhere (like the EA Hangout Facebook group).

Edit: All further comments will be deleted.

Comment by denise_melchin on Want to be more productive? · 2018-06-11T13:48:48.648Z · score: 7 (7 votes) · EA · GW

Usually advertising is not welcome, but in this case, Lynette asked (us EA Forum moderators) for permission in advance. Lynette got an EA Grant to do her work and it's complementary to other EA community services.

Comment by denise_melchin on To Grow a Healthy Movement, Pick the Low-Hanging Fruit · 2018-06-06T22:38:18.392Z · score: 15 (15 votes) · EA · GW

I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?

I can imagine there might be very different results depending on the framing.

My take on this is that while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that's a pretty unfortunate historical lock-in.

Comment by denise_melchin on The counterfactual impact of agents acting in concert · 2018-05-29T13:07:00.215Z · score: 3 (3 votes) · EA · GW

Where are you actually disagreeing with Joey and the conclusions he is drawing?

Joey is arguing that the EA Movement might accidentally overcount its impact by adding each individual actor's counterfactual impact together. You point out a scenario in which several individual actors' actions are necessary for the counterfactual impact to happen, so it is legitimate for each actor to claim the full counterfactual impact. This seems tangential to Joey's point, which is fundamentally about the practical implications of this problem. The questions of who is responsible for the counterfactual impact and who should get credit are being asked because, as the EA Movement, we have to decide how to allocate our resources to the different actors. We also need to be cautious not to overcount impact as a movement in our outside communications, and not to get the wrong impression ourselves.

Comment by Denise_Melchin on [deleted post] 2018-05-29T12:27:00.649Z

I think it would have been better for you to post this as a comment on your own or Joey’s post. Having a discussion in three different places makes the discussion hard to follow. Two are more than enough.

Comment by denise_melchin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-20T23:42:00.190Z · score: 26 (30 votes) · EA · GW

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is the focus on EA orgs. Effective Altruism is or should be about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.

Whether 'doing the most good' in the world is more talent than funding constrained is much harder to prove but is the actually important question.

If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren't branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell recommended charities won't be enough to fix it either. Using EA branded framing isn't special to you - but it can make us lose track of the bigger picture of all the problems that still need to be solved, and all the funding that is still needed for that.

If you want to focus on fixing global poverty, just because EA focuses on GW recommended charities doesn't mean EtG is the best approach - how about training to be a development economist instead? The world still needs more than ten additional development economists. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained - you'd need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

Comment by denise_melchin on Against prediction markets · 2018-05-14T20:32:20.963Z · score: 0 (0 votes) · EA · GW

Interesting! I am trading off accuracy with outside world manipulation in that argument, since accuracy isn't actually the main end goal I care about (but 'good done in the world' for which better forecasts of the future would be pretty useful).

Comment by denise_melchin on Against prediction markets · 2018-05-13T15:37:09.652Z · score: 2 (2 votes) · EA · GW

I assumed you didn't mean an internal World Bank prediction market, sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets. I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size of a workplace for a prediction market to be efficient enough to be better than e.g. extremized team forecasting?

Could you link to the accuracy studies you cite that show that prediction markets do better than polling at predicting election results? I don't see any obvious big differences in a quick Google search. The next obvious question is whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but team superforecasters consistently did better. Putting Nate Silver and his kin in a room seems to have a good chance of outperforming prediction markets then.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously much better than polls or pundits (they didn't call the 2016 surprises either), I question whether their benefits are worth enabling blatant attempts at voter manipulation. That is a big price to pay even if prediction markets did a bit better than polls or pundits.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:41:39.368Z · score: 2 (2 votes) · EA · GW

I'm arguing that the limit is hard to reach and when it isn't being reached, prediction markets are usually worse than alternatives. I'd be excited about a prediction market like Scott is describing in his post, but we are quite far away from implementing anything like that.

I also find it ironic that Scott's example discusses how hard election prediction markets are to corrupt, which is precisely what happened in the Intrade example above.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:36:41.303Z · score: 8 (8 votes) · EA · GW

I'm arguing against prediction markets being the best alternative in many situations contemplated by EAs, which is something I have heard said or implied by a lot of EAs in conversations I've had with them. Most notably, I think a lot of EAs are unaware of the arguments I make in the post and I wanted to have them written up for future reference.

Comment by denise_melchin on Against prediction markets · 2018-05-13T08:33:06.773Z · score: 4 (4 votes) · EA · GW

I don't think prediction markets are overused by EAs, I think they are advocated for too much (both for internal lower stakes situations as well as for solving problems in the world) when they are not the best alternative for a given problem.

One problem with prediction markets is that they are a hassle to implement, which is why people don't actually want to implement them. But since they are often the first alternative suggestion to the status quo within EA, better solutions for lower-stakes situations like office forecasts, which might have a chance of actually getting implemented, don't even get discussed.

I don't think an office prediction market would be bad or not useful once you ignore opportunity costs, just worse than the alternatives. To be fair, I'm somewhat more optimistic about implementing office prediction markets in large workplaces like Google, but not in the small EA orgs we have. In those they would more likely take up a bunch of work without actually improving the situation much.

How large do you think a market needs to be to be efficient enough to be better than, say, asking Tetlock for the names of the top 30 superforecasters and hiring them to assess the problem? Given that political betting, despite being pretty large, had such big trouble as described in the post, I'm afraid an efficient enough prediction market would take a lot of work to implement. I agree with you the added incentive structure would be nice, which might well make up for a lack of efficiency.

But again, I'm still optimistic about sufficiently large stock market like prediction markets.

Comment by denise_melchin on Against prediction markets · 2018-05-12T17:02:46.554Z · score: 8 (8 votes) · EA · GW

I agree with you prediction markets are in many cases better than the status quo. I'm not comparing prediction markets to perfection but to their alternatives (like extremizing team forecasts). I'm also only arguing that prediction markets are overrated within EA, not in the wider world. I'd assume they're underrated outside of libertarian-friendly circles.

All in all, for which problems prediction markets do better than which alternatives is an empirical question, which I state in the post:

How stringently the conditions for market efficiency need to be met for a market to actually be efficient is an empirical question. How efficient a prediction market needs to be to give better forecasts than the alternatives is another one.

Do you disagree that in the specific examples I have given (an office prediction market about the timeline of a project, an election prediction market) having a prediction market is worse than the alternatives?

It would be good if you could give concrete examples where you expect prediction markets to be the best alternative.

Prediction markets are a neat concept, and are often regarded highly in the EA sphere. I think they are often not the best alternative for a given problem and are insufficiently compared to those alternatives within EA. Perhaps because they are such a neat concept - "let's just do a prediction market!" sounds a lot more exciting than discussing a problem in a team and extremizing the team's forecast even though a prediction market would be a lot more work.
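For reference, the alternative I keep contrasting prediction markets with can be made concrete. Below is a minimal sketch of averaging a team's probability forecasts and then extremizing the result; the function names and the exponent `a = 2.5` are illustrative choices for this sketch, not values taken from any particular study:

```python
def extremize(probability: float, a: float = 2.5) -> float:
    """Push an aggregated probability away from 0.5.

    Transforms p via p^a / (p^a + (1 - p)^a), a common extremizing
    formula; a > 1 sharpens the forecast, a = 1 leaves it unchanged.
    """
    return probability ** a / (probability ** a + (1 - probability) ** a)


def team_forecast(individual_probs: list[float], a: float = 2.5) -> float:
    """Average the team's probabilities, then extremize the mean."""
    mean = sum(individual_probs) / len(individual_probs)
    return extremize(mean, a)


# Three forecasters who independently lean the same way; the
# aggregated forecast is pushed further from 0.5 than their mean:
print(round(team_forecast([0.7, 0.65, 0.75]), 3))  # → 0.893
```

The intuition behind extremizing is that when several forecasters independently lean the same way, their shared information justifies a more confident forecast than their simple mean.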

Against prediction markets

2018-05-12T12:08:35.307Z · score: 18 (20 votes)
Comment by denise_melchin on Concrete Ways to Reduce Risks of Value Drift · 2018-05-11T18:29:53.355Z · score: 6 (6 votes) · EA · GW

More could be done about value drift on the structural level; e.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> drop out).

Doing effective altruistic things ≠ Doing Effective Altruism™ things

All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can't all work at the main EA orgs.

There are lots of highly impactful opportunities out there that aren't branded as EA - check out the career profiles on 80,000hours for reference. Academia, politics, tech startups, doing EtG in random places, etc.

We should be interested in having as high an impact as possible and not in 'performing EA-ness'.

I do think that EA orgs dominate the conversations within the EA sphere which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an 'EA approved' workplace like DeepMind or Jane Street) - or nothing. That's counterproductive and sad.

A potential explanation: it's difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and 'status'.

As a community, we should try to encourage people to find the highest-impact opportunity for them out of many possible options, of which only a tiny fraction involve working at EA orgs.

Comment by denise_melchin on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-09T21:18:25.169Z · score: 9 (9 votes) · EA · GW

Not sure I agree with this. Certainly there is less focus on donating huge sums of money, but that may also be explained by the shift to EA orgs now often recommending direct work. But I think the EA community as a whole now focusses less on attracting huge amounts of people and more on keeping the existing members engaged and dedicated and influencing their career choices (if I remember correctly, the strategy write-ups from both CEA and EAF reflect this).

For instance, the recent strategy write-up by CEA mentions dedication as an important factor:

We can think of the amount of good someone can be expected to do as being the product of three factors (in a mathematical sense):

1. Resources: The extent of the resources (money, useful labor, etc.) they have to offer
2. Dedication: The proportion of these resources that are devoted to helping
3. Realization: How efficiently the resources devoted to helping are used
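Read literally, CEA's decomposition is just a product of three numbers, so doubling any one factor doubles expected good done. A toy sketch (the figures are invented for illustration, not CEA's):

```python
def expected_good(resources: float, dedication: float, realization: float) -> float:
    """CEA's decomposition: expected good done is the product of the
    resources offered, the share of them devoted to helping, and how
    efficiently that share is used."""
    return resources * dedication * realization


# Doubling dedication compensates exactly for halving resources:
assert expected_good(100, 0.2, 0.5) == expected_good(50, 0.4, 0.5)
```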

(top level comment to not make the thread even more messy)

When we talk about dedication and what that looks like in people, I think we can have very different images in mind. We could think of a 'dedicated EA' and think of two different archetypes (of course, reality is more messy than that and people might actually be both):

Person A talks about dedicating their life to having a high impact, about the willingness for self-sacrifice, about optimising everything for this one goal. They're very enthusiastic, think about all their options to do good and talk about nothing but EA.

Person B is careful and measured. They think about how they can use their career and other resources to have a very high impact and about the long road to being in highly impactful position in a later point in their career. They want to make sure they get there by having a proper work-life balance in the process.

When I say (and I think this is true for Joey as well) that EA emphasises dedication much less, I think about dedication in the way that person A embodies. I think CEA in their material think about dedication more in the way of Person B.

EA was much smaller and less professional in the past. That also meant that the 'highest status' positions were much more easily accessible. When I met Joey in 2013, he was interning at 80,000hours and then started his own project with Charity Science and people thought highly of him for that. Now it is not possible anymore to easily intern or volunteer at high profile EA orgs ('management capacity constraints'). Easily accessible positions still exist, but due to the professionalisation and growth of the EA movement, they're less 'high status' and therefore less appealing.

The type of people like Joey who just went out and started their own projects they were enthusiastic about are also relatively speaking (compared to the now 'high status' EA endeavours) less likely to get funding today. I think this might actually be where some part of the conflict about funding constraints and whether small student-y projects are worth funding or not is actually coming from - do we want to support an EA culture where we encourage young people to do random EA projects? Or do we want to foster a professional environment?

I think the move towards professionalising EA has been correct, but we should be aware of the costs it has imposed on people who liked the young, dedicated, person-A vibe EA had in the past. One alternative name proposal for EA was 'super hardcore do-gooder' - unthinkable today.

Comment by denise_melchin on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-05-06T09:26:06.951Z · score: 9 (9 votes) · EA · GW

Another factor leading to dedication being emphasized less might be that people are less motivated to be dedicated these days. The growth of the movement and the funding available have resulted in an individual’s EA contributions mattering far less than they used to.

The increased concern about downside risk has also made it much harder to 'use up' your dedication. A few years ago you could at least always do some outreach - now it's commonly considered far less clear that its sign is positive.

Comparative advantage in the talent market

2018-04-11T23:48:56.176Z · score: 22 (26 votes)

Meta: notes on EA Forum moderation

2018-03-16T21:14:20.570Z · score: 9 (9 votes)

Causal Networks Model I: Introduction & User Guide

2017-11-17T14:51:50.396Z · score: 14 (14 votes)

Request for Feedback: Researching global poverty interventions with the intention of founding a charity.

2015-05-06T10:22:15.298Z · score: 19 (21 votes)

Meetup : How can you choose the best career to do the most good?

2015-03-23T13:17:00.725Z · score: 0 (0 votes)

Meetup : Frankfurt: "Which charities should we donate to?"

2015-02-27T20:42:24.786Z · score: 0 (0 votes)

What we learned from our donation match

2015-02-07T23:13:32.758Z · score: 5 (5 votes)

How can people be persuaded to give more (and more effectively)?

2014-10-14T09:49:42.426Z · score: 6 (8 votes)