Comment by denise_melchin on What's the best structure for optimal allocation of EA capital? · 2019-06-05T13:04:59.196Z · score: 32 (14 votes) · EA · GW

I find some of the statements in your post a bit jarring, and this is not the first time I feel like this when reading your writing. The founders of Good Ventures are multi-billionaires who have been influenced by EA ideas which stem to some extent from the EA community. This is excellent. But the EA community does not have ownership over this money. Your writing makes it sound like it does, which I find presumptuous and off-putting.

For the future, I would recommend that you try to better understand the relationships between the different individuals and institutions associated with the Effective Altruism community before asking questions like this one.

(writing in personal capacity here, not as a mod)

Comment by denise_melchin on What's the best structure for optimal allocation of EA capital? · 2019-06-05T13:03:12.968Z · score: 9 (2 votes) · EA · GW

I find your writing a bit jarring, and this is not the first time. The founders of Good Ventures are multi-billionaires who have been influenced by EA ideas which stem from the EA community. This is excellent. But the EA community does not have ownership over this money. Your writing makes it sound like it does, which I find presumptuous and off-putting.

For the future, I would recommend that you try to better understand the relationships between the different individuals and institutions associated with the Effective Altruism community before asking questions like this one.

Comment by denise_melchin on Is preventing child abuse a plausible Cause X? · 2019-05-08T11:05:46.171Z · score: 6 (3 votes) · EA · GW

I think it's worth noting what the report says about family structures that fall into neither category: both married and unmarried parents where one parent isn't biologically related to the child but still takes on a parental role, as well as single parents without a partner, fall somewhere in between married biological parents and single parents with a partner in terms of child abuse rates.

(I was a bit confused and thought 'single parents with partner' included cases in which the partner takes on parental responsibility so the high rate seemed off to me.)

Comment by denise_melchin on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-07T16:49:07.945Z · score: 16 (6 votes) · EA · GW

But asking privately only gives one person the answer, instead of many. I'm a bit surprised by your response - I had expected that the group who knows the answer usually has better things to do than answer random emails, while there are a lot of individuals who probably have knowledge like this whose time isn't as valuable.

Comment by denise_melchin on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-05-01T14:15:18.163Z · score: 11 (8 votes) · EA · GW

This seems like a reasonable piece to me, laying out the basic groundwork for more future scrutiny on philanthropy's impact on the biosecurity field, but not more than that. ('Establishing common knowledge' seems like a good summary to me.)

A large influx of money can significantly change a field, and generally speaking, it is much easier for sudden big changes to make the state of affairs worse than to improve it. That said, sudden large changes, even if net positive overall, will often have some negative side effects, and I would still expect more money for a 'do-gooding' field to lead to more good overall.

Something that might be interesting to see would be a survey asking top people in the biosecurity field how this funding has changed their field and whether they view the change as positive. Generally speaking, I would expect them to have a much better grasp of empirical prioritisation questions in biosecurity than a few people at a large foundation, no matter how careful those people are and how much work they put in. The more work large foundations put into being in touch with people in the field, the less concerned one needs to be, I think.

Similar criticisms also exist in other fields, e.g. about the Gates Foundation drowning out primary health care work by focusing on vaccinations and specific diseases, and inadvertently causing some harm this way. I have not investigated the merits of this criticism, but it seems like a worthwhile thing to do.

Comment by denise_melchin on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-20T06:14:37.189Z · score: 17 (8 votes) · EA · GW
80,000 Hours thinks earning to give is the best option for a substantial number of people -- those for whom it's their comparative advantage. They are keen, however, to make sure that people fully consider direct work options, instead of defaulting to earning to give because they’ve heard it is the best way to do good with one’s career.

If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give. Have they revised this opinion or am I remembering it incorrectly?

If not, your description seems a bit misleading to me. 'A substantial number' sounds like a significantly higher fraction of people, perhaps something like 40% rather than 15%.

Comment by denise_melchin on EA Forum Prize: Winners for February 2019 · 2019-04-01T12:34:33.303Z · score: 7 (5 votes) · EA · GW

As one of the people who voted, I was also surprised and disappointed by this. But different voters applied different standards for what kind of content they wished to support.

Comment by denise_melchin on Request for comments: EA Projects evaluation platform · 2019-03-21T18:22:44.095Z · score: 12 (4 votes) · EA · GW

(I still feel like I don’t really understand where you’re coming from.)

I am concerned that your model of how idea proposals get evaluated (and then plausibly funded) is a bit off. From the original post:

hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.

You are missing one major category here: projects which are simply bad because they have approximately zero impact, but aren't particularly risky. I think this category is the largest of the four.

Which projects have a chance of working and which don't often becomes clear quite quickly to people who have experience evaluating projects (which is why Oli suggested 15 minutes for the initial investigation above). It sounds to me as if your model of the ideas that get proposed is that most of them are pretty valuable. I don't think this is the case.

When funders give general opinions on what should or should not get started or how you value or not value things, again, I think you are at greater risk of having too much of an influence on the community. I do not believe the knowledge of the funders is strictly better than the knowledge of grant applicants.

I am confused by this. Knowledge of what?

The role of funders/evaluators is to evaluate projects (and maybe propose some for others to do). To do this well they need to have a good mental map of what kind of projects have worked or not worked in the past, what good and bad signs are, ideally from an explicit feedback loop from funding projects and then seeing how the projects turn out. The role of grant applicants is to come up with some ideas they could execute. Do you disagree with this?

Comment by denise_melchin on Request for comments: EA Projects evaluation platform · 2019-03-21T15:10:11.348Z · score: 10 (5 votes) · EA · GW
I think it much harder to give open feedback if it is closely tied with funding. Feedback from funders can easily have too much influence on people, and should be very careful and nuanced, as it comes from some position of power. I would expect adding financial incentives can easily be detrimental for the process. (For self-referential example, just look on this discussion: do you think the fact that Oli dislikes my proposal and suggest LTF can back something different with $20k will not create at least some unconscious incentives?)

I'm a bit confused here. I think I disagree with you, but maybe I am not understanding you correctly.

I consider it important for the accuracy of feedback that the people giving it have 'skin in the game'. Most people don't enjoy discouraging others they have social ties with, so reviewers without sufficient skin in the game might be tempted not to be as openly negative about proposals as they should be.

Funders instead can give you a strong signal - a signal which is unfortunately somewhat binary and lacks nuance. But someone being willing (or unwilling) to fund something is a much stronger signal of the value of a proposal than comments from friends on a Google Doc. This is especially true if people proposing ideas don't take into account how hard it is to discourage others and don't interpret feedback in that light.

Comment by denise_melchin on A guide to improving your odds at getting a job in EA · 2019-03-19T15:42:27.490Z · score: 18 (11 votes) · EA · GW
EA jobs, unlike many other jobs, do not compare very well to other kinds of work experience,

I'm pretty sceptical of this claim (not just made here, but also in many other posts). I think it might be true for some roles, like the Research Analyst positions at the Open Philanthropy Project, which combine academic research with grantmaking - a combination that is unusual in the wider job market.

But I don't see why e.g. operations at an average EA organisation would not compare well to other kinds of work experience in operations. I'm happy to hear counterarguments to this.

The underlying crux here might be that I'm generally wary of any claims of 'EA exceptionalism'.

Comment by denise_melchin on A guide to improving your odds at getting a job in EA · 2019-03-19T15:37:26.298Z · score: 24 (14 votes) · EA · GW

This list seems roughly reasonable. What most stands out to me is that your suggestions are extremely time consuming, especially in aggregate. The hours applicants to jobs at EA organisations spend on timed work tests and honing their CVs pale in comparison.

I also think your suggestions are applicable to some other fields which might be of interest to people who are trying to have a high impact. It is not unusual for desirable roles in e.g. international development to require hundreds to thousands of hours of investment.

However, if people are investing those thousands of hours into learning about EA, they will not spend them investing in international development or nuclear security.

While people following your suggestions might benefit individually, the movement as a whole - and the world - might be worse off.

Comment by denise_melchin on EA is vetting-constrained · 2019-03-13T13:33:35.392Z · score: 54 (17 votes) · EA · GW

(Funding manager of the EA Meta Fund here)

For our last distribution, we ran an application round for the first time. I conducted the initial investigation, which I communicated to the committee. Previous grantees had all come through our personal network.

Things we learnt during our application round:

i) We got significantly fewer applications than we expected and would have been able to spend more time vetting projects - vetting capacity was not a bottleneck. After some investigation through personal outreach, I have the impression that not many projects are being started in the Meta space (this is different for other funding spaces).

ii) We were able to fund a decent fraction of the applications we received (around 25%?). For about half of the applications I was reasonably confident that they did not meet the bar, so I did not investigate further. The remaining quarter felt borderline to me; I often still investigated, but the results confirmed my initial impression.

My current impression for the Meta space is that we are not constrained by vetting, but rather by mentoring and proactive outreach. One thing we want to do in the future is run a request-for-proposals process.

Comment by denise_melchin on SHOW: A framework for shaping your talent for direct work · 2019-03-13T08:37:38.458Z · score: 11 (6 votes) · EA · GW

This isn't really comparing like with like, however - in one case you're doing cold outreach and in the others there are established application processes. It might make more sense to compare the demand for researcher positions with e.g. Toby Ord's Research Assistant position.

But if your point is that people should be more willing to do cold outreach for research assistant positions like you did, that seems fair.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T14:02:35.888Z · score: 11 (8 votes) · EA · GW
many candidates treated the process like a two-way application the whole way through. This threw off my intuitions, and normally I would have dropped all candidates who weren't signalling they were specifically very excited about my role. First call excluded.

I wonder whether this is just a result of people on both sides of the application process knowing each other in a social context.

If the candidate knows they will interact with people making the hiring decision in the future, they might not want them to feel bad about rejecting them. The people making the hiring decision might arguably feel less bad about not hiring someone if the candidate wasn't that excited. Lack of excitement also allows the candidate to save face if they get rejected, which also only matters because the candidate and the person making the hiring decision might interact socially in the future.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T14:48:36.248Z · score: 53 (34 votes) · EA · GW

I don't really agree with your second and third point. Seeing this problem and responding by trying to create more 'capital letter EA jobs' strikes me as continuing to pursue a failing strategy.

What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations - the community is growing faster* than the organisations, and those numbers are unlikely to add up in the mid term.

Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there - we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.

It's catastrophic that new and long-term EAs now consider their main EA activity to be applying for the same few jobs, instead of trying to increase their donations or investing in promising careers outside 'capital letter EA'.

But this is hardly surprising given past messaging. The only reason EA organisations can get away with hiring rounds that are so expensive for applicants is that there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.

*Though this might be slowing down somewhat, perhaps because of this 'being an EA means applying unsuccessfully for the same few jobs' phenomenon.

Comment by denise_melchin on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T12:35:41.882Z · score: 48 (30 votes) · EA · GW

I really appreciate you writing this. You are not the first person to consider doing so and I applaud you for actually doing it.

Comment by denise_melchin on EA grants available to individuals (crosspost from LessWrong) · 2019-02-08T15:16:54.133Z · score: 15 (8 votes) · EA · GW

Hi Jameson,

I'm a fund manager for the EA Meta Fund. Your assessment in your post is incorrect - we are also open to individual grant applications, though applications for the February distribution have now closed. I'd expect them to open again in a couple of months.

I'm curious how you got the impression that we aren't open to applications. It's important to us that we are able to reach all interested individuals so any insight into where we may have failed to communicate that is useful to us.

EA Meta Fund: we are open to applications

2019-01-05T13:32:03.778Z · score: 26 (13 votes)
Comment by denise_melchin on Should donor lottery winners write reports? · 2018-12-23T11:30:12.827Z · score: 14 (6 votes) · EA · GW

My main worry about donor lottery reports is somewhat different. Usually, people seem to assign some extra credibility to a donor's reasoning if the donation/s is/are large. This seems reasonable to me, since donors who donate large sums often have a lot more experience with making donation decisions. But donor lottery winners have much less expertise than the average person who makes large donations (and only just as much as those long-term large donors had when they made a large donation for the first time).

In sum, my concern is that people will trust donor lottery winners' evaluations of donation targets more than they should.

Comment by denise_melchin on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T13:47:22.256Z · score: 9 (7 votes) · EA · GW

Hello Alex,

We are interested in funding new projects (see also Alex Foster's response above).

I am also concerned about how difficult it is for promising new projects to get discovered. Personally, I am happy to invest some time into evaluating new projects, which is why we have a grant consideration form you can fill out to be considered for a grant. That said, we are capacity-constrained and would not be able to handle 100 applications per month in our current setup.

I have personally considered putting out proposals like you are suggesting, but am concerned about the time investment. First I would like to see how much interest we can gather in different ways.

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:59:31.427Z · score: 2 (1 votes) · EA · GW

To be clear, I meant asking for a reference before an offer is actually made, at the stage when offers are being decided (so that applicants who don't receive offers one way or the other don't 'use up' their references).

Comment by denise_melchin on Takeaways from EAF's Hiring Round · 2018-11-20T21:50:48.736Z · score: 22 (11 votes) · EA · GW

I would strongly advise against making reference checks even earlier in the process. In your particular case, I think it would have been better for both the applicants and the referees if you had done the reference check even later - only after deciding to make an offer (conditional on the references being alright).

Requests for references early in the process have put me off applying for specific roles and would do so again. I'm not sure whether I have unusual preferences here, but I would be surprised if I did. References put a burden on the referees which I am only willing to impose in exceptional circumstances, and only a very limited number of times.

I'm not confident how referees actually feel about giving references. When I had to give references, I found it mildly inconvenient and would certainly have been unhappy if I had had to do it numerous times (with either a call or an email).

But when it comes to the costs imposed on applicants, it does not matter how the referees actually feel about giving references - what matters is how applicants think they feel about it.

If you ask for references early, you might put off a fraction of your applicant pool you don't want to put off.

Comment by denise_melchin on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-17T16:43:52.050Z · score: 15 (14 votes) · EA · GW

I don’t think unsuccessful applications at organizations that are distantly related to the content you’re criticizing constitute a conflict of interest.

If everybody listed their unsuccessful applications at the start of every EA Forum post, it would take up a lot of reader attention.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-18T07:46:49.527Z · score: 24 (16 votes) · EA · GW

The problem here is that people in the EA movement overtly associate being EA not with 'doing high-impact things' but with 'doing EA-approved work, ideally at an EA org'.

It is not obvious to me how this is fixable. It doesn't help that recommendations change frequently, so that paths which were once 'EA-approved' no longer are. As Greg said, people won't want to risk that. It's unfortunate that we punish people for following previous recommendations. This also doesn't exactly incentivize people to follow current recommendations, and it leads to EAs being flaky, which is bad for long-term impact.

I think one thing that would be good for people is to have a better professional & do-gooding network outside of EA. If you are considering entering a profession, you can find dedicated people there and coordinate. You can also find other do-gooding communities. In both cases you can bring the moral motivations and the empirical standards to other aligned people.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-14T20:47:29.883Z · score: 2 (2 votes) · EA · GW

Oh, I agree people will often learn useful things during application processes. I just think the opportunity cost can be very high, especially when processes take months and people have to wait to figure out whether they got into their top options. I also think those costs are especially high for the top applicants - they have to invest the most and might learn the most useful things, but they also lose the most due to higher opportunity costs.

And as you said, people who get filtered out early lose less time and fewer other resources on application processes. But they might still feel negatively about it, especially given the messaging. Maybe their equally rejected friends feel just as bad, which in the future could dissuade other friends who might be potential top hires from even trying.

Comment by denise_melchin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-13T22:37:03.829Z · score: 13 (13 votes) · EA · GW

Personally, I still think it would be very useful to find more talented people and for more people to consider applying to these roles; we just need to bear in mind that these roles require a very unusual skill-set, so people should always have a good back-up plan.

I'm curious what your model of the % value increase in the top hire is when you, say, double current hiring pools. It needs to be high enough to offset the burnt value from people's investments in those application processes. This is not only expensive for individual applicants in the moment, but also carries the long term risk of demotivating people - and thereby having a counterfactually smaller hiring pool in future years.

EA seems to be already at the point where lots of applicants are frustrated and might value drift, thereby dropping out of the hiring pool. I am not keen on making this situation worse. It might cause permanent harm.

Do you agree there's a trade-off here? If so, I'm not sure whether our disagreement comes from different assessments of value increases in the top hire or burnt value in the hiring pool.
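
To make the trade-off I have in mind concrete, here is a toy calculation in LaTeX notation; all of the figures below are made-up assumptions for illustration, not estimates from this thread:

% Illustrative assumptions only: doubling the pool adds 50 applicants,
% each spending roughly 20 hours on the application process, with that
% time valued at $50/hour of counterfactual impact.
\text{extra applicant-hours} \approx 50 \times 20\,\text{h} = 1000\,\text{h}
\text{burnt value} \approx 1000\,\text{h} \times \$50/\text{h} = \$50{,}000
% So the expected value increase in the top hire would need to exceed roughly
% this amount (plus any long-term demotivation costs) for the larger pool
% to be worth it.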

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-11T21:15:34.638Z · score: 1 (2 votes) · EA · GW

I had written the same comment, but then deleted it once I found out that it wasn't quite as true as I thought it was. In Nick's writeup the grants come from different funds according to their purpose. (I had previously thought the most recent round of grants granted money to the exact same organisations.)

Comment by denise_melchin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-10T19:08:37.305Z · score: 10 (14 votes) · EA · GW

Echoing David, I'm somewhat sceptical of the responses to "what skills and experience they think the community as a whole will need in the future". Does the answer refer to high-impact opportunities in the world in general, or only to the ones mostly located at EA organisations?

I'm also not sure about the relevance to individual EAs' career decisions. Implying that it is relevant might be outright dangerous if the answer is built on the needs of jobs that are mostly located at EA organisations. From what I understand, EA organisations have recently seen a sharp increase not only in the number, but also in the quality of applications. That's great! But it is pretty unfortunate for people who took the arguments about 'talent constraints' seriously and focused their efforts on finding a job in the EA Community. They are now finding out that they may have few prospects, even if they are very talented and competent.

There's no shortage of high impact opportunities outside EA organisations. But the EA Community lacks the knowledge to identify them and resources to direct its talent there.

There are only a few dozen roles at EA orgs each year, never mind roles that are a good fit for an individual EA's skillset. Even if we only look at the most talented people, there are more capable people than the EA Community is able to allocate among its own organizations. And this will only get worse - the EA Community is growing faster than the number of jobs at EA orgs.

If we don't have the knowledge and connections to allocate all our talent right now, that's unfortunate, but not necessarily a big problem if this is something that is communicated. What is a big problem is to accidentally mislead people into thinking it's best to focus their career efforts mostly on EA orgs, instead of viewing them as a small sliver in a vast option space.

Comment by denise_melchin on Public Opinion about Existential Risk · 2018-08-25T15:52:56.961Z · score: 2 (2 votes) · EA · GW

Cool study! I wish there were more people who went out and just tested assumptions like this. One high level question:

People in the EA community are very concerned about existential risk, but what is the perception among the general public? Answering this question is highly important if you are trying to reduce existential risk.

Why is this question highly important for reducing extinction risks? This doesn't strike me as obvious. What practical implications does it have if the general public assigns existential risks either a very high or a very low probability?

You could make an argument that this could inform recruiting/funding efforts. Presumably you can do more recruiting and receive more funding for reducing existential risks if there are more people who are concerned about extinction risks.

But I would assume the percentage of people who consider reducing existential risks to be very important to be much more relevant for recruiting and funding than the opinion of the 'general public'.

Though the opinion of those groups has a good chance of being positively correlated, this particular argument doesn't convince me that the opinion of the general public matters that much.

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-10T13:44:00.604Z · score: 1 (1 votes) · EA · GW

Some parts of this sound very similar to me, down to 'left-wing youth political organisation who likes to sing socialist songs' (want to PM me which one it was?).

I have noticed before how much more common activist backgrounds are in German EAs vs. Anglo-Saxon EAs. When I talked about it with other people, the main explanation we could come up with was different base rates of sociopolitical activism in the different countries, but I've never checked the numbers on that.

Comment by denise_melchin on When causes multiply · 2018-08-10T13:21:15.718Z · score: 0 (0 votes) · EA · GW

What you're saying is correct if you're assuming that so far zero resources have been spent on x-risk reduction and global poverty. (Though that isn't quite right either: You can't compute an output elasticity if you have to divide by 0.)

But you are supposed to compare the ratio of output elasticities with the ratio of resources currently being spent; locally, those two ratios are supposed to be equal. So using your example, if more than a million times as many resources were currently being spent on x-risk as on global poverty, global poverty should be prioritised.

When I was running the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I'm not confident of this, and I'm also not confident how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.
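
For readers who want the condition I'm referring to spelled out, here is a minimal sketch in LaTeX of the standard multiplicative (Cobb-Douglas) case; the symbols are illustrative assumptions, not anyone's actual estimates:

% U: total value; R_1, R_2: resources spent on the two causes;
% alpha, beta: their (assumed constant) output elasticities.
U(R_1, R_2) = C \, R_1^{\alpha} R_2^{\beta}
% Maximising U subject to a fixed budget R_1 + R_2 = B requires equal marginal returns:
\frac{\partial U}{\partial R_1} = \frac{\partial U}{\partial R_2}
\;\Longrightarrow\;
\frac{\alpha U}{R_1} = \frac{\beta U}{R_2}
\;\Longrightarrow\;
\frac{R_1}{R_2} = \frac{\alpha}{\beta}
% At a local optimum the ratio of resources equals the ratio of output elasticities,
% so if current spending deviates from alpha/beta, the relatively underfunded cause
% should be prioritised at the margin.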

Comment by denise_melchin on When causes multiply · 2018-08-08T15:41:59.966Z · score: 0 (0 votes) · EA · GW

I address the points you mention in my response to Carl.

It also doesn't solve issues like Sam Bankman-Fried mentioned where according to some argument one cause area is 44 orders of magnitude more impactful, because even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area.

I don't think this is understanding the issue correctly, but it's hard to say since I am a bit confused what you mean by 'more impactful' in the context of multiplying variables. Could you give an example?

Comment by denise_melchin on When causes multiply · 2018-08-08T15:20:15.411Z · score: 3 (3 votes) · EA · GW

Great comment, thank you. I actually agree with you. Perhaps I should have focussed less on the cause level and more on the intervention level, but I think it is still good to encourage more careful thinking at the cause-wide level, even if it won't affect the actual outcome of the decision-making. I think people rarely think about e.g. reducing extinction risks benefiting AMF donations as you describe it.

Let's hope people will be careful to consider multiplicative effects if we can affect the distribution between key variables.

Comment by denise_melchin on Current Estimates for Likelihood of X-Risk? · 2018-08-07T11:32:35.082Z · score: 3 (3 votes) · EA · GW

Do you have private access to the Good Judgement data? I've been thinking before about how it would be good to get superforecasters to answer such questions but didn't know of a way to access the results of previous questions.

(Though there is the question of how much superforecasters' previous track record on short-term questions translates to success on longer-term questions.)

When causes multiply

2018-08-06T15:51:45.619Z · score: 19 (18 votes)
Comment by denise_melchin on Leverage Research: reviewing the basic facts · 2018-08-05T08:47:01.236Z · score: 8 (8 votes) · EA · GW

What are the benefits of this suggestion?

Comment by denise_melchin on Why are you here? An origin stories thread. · 2018-08-05T08:30:59.352Z · score: 9 (9 votes) · EA · GW

Great idea!

When I was around 10, I found the killing and torture of animals for meat and fur atrocious, so I decided to become vegetarian, and I have been ever since.

It wasn't until a few years later that I became more interested in a larger variety of issues, with my pet topics being environmentalism and feminism. I started doing political work when I was 16. I joined a left-wing political group that also focussed on a lot of other issues, like global poverty, democracy and animal rights. It was the first time in my life I met smart and dedicated people.

Apart from that, I spent most of my time reading through all the non-fiction books in the library I could find. I had always wanted to go into academia. I think I started looking forward to doing a PhD when I was around 10.

When I was 17 I found LessWrong. A year later someone who was also interested in LessWrong introduced me to EA and I started talking to the Swiss EA crowd. I had never previously thought about cause prioritisation and was really excited about the concept. This was in 2012.

At the same time, I started a cultural anthropology degree. Given the focus of psychology on WEIRD subjects, it seemed like a great starting point to dismantle misconceptions about humanity. But I was quite disappointed in how the subject was taught, so half a year later, I switched to maths.

It was 2013 by now and I stayed in touch with the EA Community online and visited the UK and Swiss EA Hubs a couple of times. I lived in Germany at the time where no EA Community existed yet. I started organizing a local LW meetup.

I stopped doing political work when I was around 19 because I thought it wasn't "effective" enough. I thoroughly regret this. I had a great network and know quite a few people who have great roles now and lots of experience. EA only came around to politics as a worthwhile avenue to doing good years later.

I focussed on finishing my degree, continued to stay in touch with the international EA Community and started organizing a local EA meetup once there was more interest in EA in Germany. I now mostly regret how I spent those years. I wish I had been around more people who were actually trying to do things, which I cannot say about my local EA/LW network. Continuing political work would have been good, or moving to an EA Hub - but the latter would have conflicted with my degree.

I finished my degree last year and moved to London and recently also spent a few months in Berkeley. This has been a large improvement compared to the previous situation.

Comment by denise_melchin on Problems with EA representativeness and how to solve it · 2018-08-05T08:22:27.771Z · score: 1 (1 votes) · EA · GW

I also agree with the comment above that it's important to distinguish between what we call "the long-term value thesis" and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there's better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.

Agreed. Calling reducing X-risks non-near-term-future causes strikes me as using bad terminology.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T19:23:44.849Z · score: 0 (0 votes) · EA · GW

That’s fair.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T16:53:52.163Z · score: 3 (3 votes) · EA · GW

+1 I didn’t spell it out this explicitly, but what I found slightly odd about this post is that infrastructure is not the bottleneck on more grant making, but qualified grant makers.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-11T09:41:10.725Z · score: 1 (3 votes) · EA · GW

I agree that collaboration between the various implementations of the different ideas is valuable and that it can be good to help out technically. I'm less convinced of starting a fused approach as an outsider. As Ryan Carey said, what matters most for good work in this field is i) having people who are good at grantmaking, i.e. at making funding decisions, and ii) the actual money.

Thinking about how to ideally handle grantmaking without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, I think it will largely be up to the projects that have i) and ii) whether they wish to collaborate further, though other people might be able to help with technical aspects.

Comment by denise_melchin on Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations · 2018-07-10T10:26:43.588Z · score: 4 (8 votes) · EA · GW

All of your ideas listed are already being worked on by some people. I talked just yesterday to someone who is intending to implement #1 soon, #3 will likely be achieved by handling EA Grants differently in the future, and there are already a couple of people working on #2, though there is further room for improvement.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-08T19:55:31.516Z · score: 2 (2 votes) · EA · GW

It is still not clear to me how your model is different to what EAs usually call different levels of meta. What is it adding? Using words like 'construal level' complicates the matter further.

I'm happy to elaborate more via PM if you like.

Comment by denise_melchin on The Values-to-Actions Decision Chain: a lens for improving coordination · 2018-07-04T14:53:37.222Z · score: 6 (6 votes) · EA · GW

I think you're making some valuable points here (e.g. making sure information is properly implemented into the 'higher levels'), but I think your posts would have been a lot better if you had skipped all the complicated modelling and difficult language. It strikes me as superfluous, and the main effect seems to be that it makes your post harder to read without adding any content.

Comment by Denise_Melchin on [deleted post] 2018-07-04T10:37:53.309Z

(Denise as mod)

The EA Forum is a place for high-level discussion of EA matters that would often be too long for or out of place in other spaces like Facebook. Ideas that are not yet fully fledged or thoroughly argued are better placed in those other spaces, since the EA Forum gets too crowded otherwise.

Therefore I'll delete your post. You can modify it and repost, or alternatively, post it elsewhere (like the EA Hangout Facebook group).

Edit: All further comments will be deleted.

Comment by denise_melchin on Want to be more productive? · 2018-06-11T13:48:48.648Z · score: 7 (7 votes) · EA · GW

Usually advertising is not welcome, but in this case, Lynette asked (us EA Forum moderators) for permission in advance. Lynette got an EA Grant to do her work and it's complementary to other EA community services.

Comment by denise_melchin on To Grow a Healthy Movement, Pick the Low-Hanging Fruit · 2018-06-06T22:38:18.392Z · score: 16 (16 votes) · EA · GW

I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?

I can imagine there might be very different results depending on the framing.

My take on this is that while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that's a pretty unfortunate historical lock-in.

Comment by denise_melchin on The counterfactual impact of agents acting in concert · 2018-05-29T13:07:00.215Z · score: 3 (3 votes) · EA · GW

Where are you actually disagreeing with Joey and the conclusions he is drawing?

Joey is arguing that the EA Movement might accidentally overcount its impact by adding each individual actor's counterfactual impact together. You point out a scenario in which several individual actors' actions are all necessary for the counterfactual impact to happen, so that it is legitimate for each actor to claim the full counterfactual impact. This seems tangential to Joey's point, which is fundamentally about the practical implications of this problem. The questions of who is responsible for the counterfactual impact and who should get credit are being asked because, as the EA Movement, we have to decide how to allocate our resources to the different actors. We also need to be cautious not to overcount impact as a movement in our outside communications, and not to get the wrong impression ourselves.

Comment by Denise_Melchin on [deleted post] 2018-05-29T12:27:00.649Z

I think it would have been better for you to post this as a comment on your own or Joey’s post. Having a discussion in three different places makes the discussion hard to follow. Two are more than enough.

Comment by denise_melchin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-20T23:42:00.190Z · score: 26 (30 votes) · EA · GW

Thanks for trying to get a clearer handle on this issue by splitting it up by cause area.

One gripe I have with this debate is the focus on EA orgs. Effective Altruism is or should be about doing the most good. Organisations which are explicitly labelled Effective Altruist are only a small part of that. Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained.

Whether 'doing the most good' in the world is more talent than funding constrained is much harder to prove but is the actually important question.

If we focus the debate on EA orgs and our general vision as a movement on orgs that are labelled EA, the EA Community runs the risk of overlooking efforts and opportunities which aren't branded EA.

Of course fixing global poverty takes more than ten people working on the problem. Filling the funding gap for GiveWell recommended charities won't be enough to fix it either. Using EA branded framing isn't special to you - but it can make us lose track of the bigger picture of all the problems that still need to be solved, and all the funding that is still needed for that.

If you want to focus on fixing global poverty, the fact that EA focuses on GW-recommended charities doesn't mean EtG is the best approach - how about training to be a development economist instead? The world still needs more than ten additional ones of those. (Edit: But it is not obvious to me whether global poverty as a whole is more talent or funding constrained - you'd need to poll leading people who actually work in the field, e.g. leading development economists or development professors.)

Comment by denise_melchin on Against prediction markets · 2018-05-14T20:32:20.963Z · score: 0 (0 votes) · EA · GW

Interesting! I am trading off accuracy with outside world manipulation in that argument, since accuracy isn't actually the main end goal I care about (but 'good done in the world' for which better forecasts of the future would be pretty useful).

Comment by denise_melchin on Against prediction markets · 2018-05-13T15:37:09.652Z · score: 2 (2 votes) · EA · GW

I assumed you didn't mean an internal World Bank prediction market, sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets. I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size of a workplace for a prediction market to be efficient enough to be better than e.g. extremized team forecasting?

Could you link to the accuracy studies you cite that show prediction markets do better than polling at predicting election results? I don't see any obvious big differences from a quick Google search. The next obvious alternative is asking whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but teams of superforecasters did consistently better. Putting Nate Silver and his kin in a room therefore seems to have a good chance of outperforming prediction markets.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously a lot better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether the risk of blatant attempts at voter manipulation through prediction markets is worth it. That is a big price to pay even if prediction markets do a bit better than polls or pundits.

Against prediction markets

2018-05-12T12:08:35.307Z · score: 18 (20 votes)

Comparative advantage in the talent market

2018-04-11T23:48:56.176Z · score: 22 (26 votes)

Meta: notes on EA Forum moderation

2018-03-16T21:14:20.570Z · score: 9 (9 votes)

Causal Networks Model I: Introduction & User Guide

2017-11-17T14:51:50.396Z · score: 14 (14 votes)

Request for Feedback: Researching global poverty interventions with the intention of founding a charity.

2015-05-06T10:22:15.298Z · score: 19 (21 votes)

Meetup : How can you choose the best career to do the most good?

2015-03-23T13:17:00.725Z · score: 0 (0 votes)

Meetup : Frankfurt: "Which charities should we donate to?"

2015-02-27T20:42:24.786Z · score: 0 (0 votes)

What we learned from our donation match

2015-02-07T23:13:32.758Z · score: 5 (5 votes)

How can people be persuaded to give more (and more effectively)?

2014-10-14T09:49:42.426Z · score: 6 (8 votes)