Posts

Denise_Melchin's Shortform 2020-09-03T06:11:42.046Z · score: 6 (1 votes)
Doing good is as good as it ever was 2020-01-22T22:09:03.527Z · score: 88 (42 votes)
EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th 2019-09-13T19:34:24.347Z · score: 34 (16 votes)
EA Meta Fund: we are open to applications 2019-01-05T13:32:03.778Z · score: 27 (14 votes)
When causes multiply 2018-08-06T15:51:45.619Z · score: 19 (18 votes)
Against prediction markets 2018-05-12T12:08:35.307Z · score: 19 (21 votes)
Comparative advantage in the talent market 2018-04-11T23:48:56.176Z · score: 24 (27 votes)
Meta: notes on EA Forum moderation 2018-03-16T21:14:20.570Z · score: 9 (9 votes)
Causal Networks Model I: Introduction & User Guide 2017-11-17T14:51:50.396Z · score: 14 (14 votes)
Request for Feedback: Researching global poverty interventions with the intention of founding a charity. 2015-05-06T10:22:15.298Z · score: 19 (21 votes)
Meetup : How can you choose the best career to do the most good? 2015-03-23T13:17:00.725Z · score: 0 (0 votes)
Meetup : Frankfurt: "Which charities should we donate to?" 2015-02-27T20:42:24.786Z · score: 0 (0 votes)
What we learned from our donation match 2015-02-07T23:13:32.758Z · score: 5 (5 votes)
How can people be persuaded to give more (and more effectively)? 2014-10-14T09:49:42.426Z · score: 6 (8 votes)

Comments

Comment by denise_melchin on [Link] "Where are all the successful rationalists?" · 2020-10-17T21:29:22.764Z · score: 8 (5 votes) · EA · GW

This post seems to fail to ask the fundamental question "winning at what?". If you don't want to become a leading politician or entrepreneur, then applying rationality skills obviously won't help you get there.

The EA community (which is distinct from the rationality community, a distinction the author fails to note) clearly has a goal, however: doing a lot of good. The amount of money GiveWell has been able to move to AMF has clearly grown a lot over the past ten years, but as the author says, that only proves they have convinced others of rationality. We still need to check whether deaths from malaria have actually gone down by a corresponding amount due to AMF doing more distributions. I am not aware of any investigations of this question.

Some people in the rationalist community likely only have 'understand the world really well' as their goal, whose success is hard to measure, though better forecasts can be one example. I think the rationality community stocking up on food in February, before it was sold out everywhere, is a good example of a success, but probably not the sort of shining example the author might be looking for.

If your goal is to have a community where a specific rationalist-ish cluster of people shares ideas, it seems like the rationalist community has done pretty well.

[Edit: redacted for being quickly written, and in retrospect failing to engage with the author's perspective and the rationality community's stated goals]

Comment by denise_melchin on What actually is the argument for effective altruism? · 2020-10-13T20:47:51.887Z · score: 6 (4 votes) · EA · GW

Thank you so much for the podcast Ben (and Arden!), it made me excited to see more podcasts and posts in the format 'explain basic frameworks and/or assumptions behind your thinking'. I particularly appreciated that you mentioned that regression to the mean has a different meaning in a technical statistical context than the more colloquial EA one you used.

One thing I have been thinking about since reading the podcast: if I understood correctly, you explicitly define increasing the amount of good done by spending more of your resources as not part of the core idea of EA, which instead covers only increasing the amount of good done per unit of resources. It was not entirely clear to me how large a role you think increasing the amount of resources people spend on doing good should play in the community.

I think I have mostly thought of increasing, or meeting an unusually high threshold of, resources spent on doing good as an important part of EA culture, but I am not sure whether others view it the same way. I'm also not sure whether considering it as such is conducive to maximizing overall impact.

Anyway, this is not an objection; my thoughts are a bit confused and I'm not sure whether I'm actually properly interacting with something you said. I just wanted to express mild surprise and note that this part of your definition felt notable to me.

Comment by denise_melchin on jackmalde's Shortform · 2020-10-13T18:51:05.528Z · score: 2 (1 votes) · EA · GW

I was thinking the same! I had to google Muzak, but that also seems like pretty nice music to me.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-10-01T21:17:43.635Z · score: 28 (13 votes) · EA · GW

Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.) Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.

For example, I wonder whether I should write more comments pointing out what I liked in a post even if I don't have anything to criticise, instead of just silently upvoting. This would clutter the comment section more, but it might be worth it if hearing more specific positive feedback makes people feel more connected to the community.

I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should; it is valuable to have a space dedicated to 'serious discussions'. On the other hand, having an online community space might be more important than usual while we are all stuck at home.

Comment by denise_melchin on Parenting: Things I wish I could tell my past self · 2020-10-01T19:46:29.106Z · score: 17 (7 votes) · EA · GW

Thank you so much for this post! It's one of those posts that gives the community a more community-like feel, which is nice.

To share my experience: I have two kids; they are 10 and 3.5. What I would tell my younger self before my first kid mostly revolves around "slack"; everything else went very well! I think my predictions of what having a kid would be like were mostly pretty decent, and mentally preparing for a lot of challenges paid off.

But one thing I did not fully account for is how much slack for my future plans matters, and how much having a child would reduce the slack I had. Slack would have been most relevant if I wanted to change my future plans, which I did not expect to change much (this is more of a young person's error). I did not properly budget for opportunities opening up or for maybe changing my mind. E.g. it had not occurred to me that going to university abroad might be a better option than in my home country, but that would have been very difficult with a child.

I think my predictions and mindset were actually more off before my second child. I was much less mentally prepared for challenges and did not budget for them in the same way as I had before my first child. Some of that was due to underestimating how different children can be and how much your experience can differ between them. I had heard this from other parents, but did not really want it to be true: surely I knew what was up after one child already? As it turned out, my experiences with my two children were pretty different - with my first, sleep had never been that big of a deal, while my second still does not quite sleep properly through the night at the age of 3.5 years. However, taking care of my second during daylight hours has been a lot easier than with my first; I didn't realise babies could be so easy!

Not preparing mentally (and practically) for challenges the same way for my second as I had before my first was partially the same mistake, but deserves its own mention. I find it a bit tricky to say how 'wrong' that was, however: would I actually want to let my younger self before my second child know about the challenges I had? I was more engaged in wishful thinking, but babies are hard work, and maybe parents need a bit of wishful thinking to actually be willing to have another one; otherwise hyperbolic discounting would stop them.

This is also the way I feel now - I'm hoping to have a third child soon-ish, but I pretend to myself that everything will be easy peasy, because my tendency to hyperbolically discount might otherwise deter me. Deluding myself might just be correct.

I don't think I changed much as a person due to having children.

Comment by denise_melchin on Thomas Kwa's Shortform · 2020-09-30T17:15:13.261Z · score: 5 (3 votes) · EA · GW

Strong upvoted. Thank you so much for providing further resources, extremely helpful, downloading them all on my Kindle now!

Comment by denise_melchin on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-29T11:21:53.540Z · score: 45 (15 votes) · EA · GW

I want to take the opportunity to point out that you can pledge more than 10%! This hasn't always been as present in my conscious awareness as it possibly should have been.

I pledged 10% in 2013, but changed my pledge to 20% a few months ago. :-)

Comment by denise_melchin on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-24T11:20:51.558Z · score: 7 (4 votes) · EA · GW

Thank you for writing this! I once failed a job interview because what I learned from the EA community as a 'confidence interval' was actually a credible interval. Pretty embarrassing.

Comment by denise_melchin on Buck's Shortform · 2020-09-24T09:12:32.272Z · score: 8 (2 votes) · EA · GW

"It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case."

That's what I thought as well. The top critical comment also has more karma than the top-level post, which I have always considered to be functionally equivalent to a top-level post being below par.

Comment by denise_melchin on Thomas Kwa's Shortform · 2020-09-23T20:25:23.538Z · score: 19 (9 votes) · EA · GW

I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more workable version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.

My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.

My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.

If you would like to set up a call sometime to discuss further, please PM!

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-21T21:08:00.774Z · score: 8 (2 votes) · EA · GW

Yes, completely agree, I was also thinking of non-utilitarian views when I was saying non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be not to find the EA community as valuable for helping them on that path as people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious what a good community for that kind of person is, however, and what good tools for that path are.

I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner, but even just looking at longtermism we have disagreements about whether it should be suffering-focussed or not, so there already is no one simple truth.

I'd be really curious what others think about whether humanity collectively would be better off, according to most people, if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.

Comment by denise_melchin on Stefan_Schubert's Shortform · 2020-09-19T12:35:33.056Z · score: 2 (1 votes) · EA · GW

"People who are new to a field usually listen to experienced experts. Of course, they don't uncritically accept whatever they're told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus."

I'm not sure I agree with this, so it is not obvious to me that there is anything special about GP research. But it depends on who you mean by 'people' and what your evidence is. The reference class of research also matters - I expect people are more willing to believe physicists, but less so sociologists.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-19T11:42:50.772Z · score: 19 (12 votes) · EA · GW

[status: mostly sharing long-held feelings & intuitions, but have not exposed them to scrutiny before]

I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.

The way I see the potential of the EA community is in helping people understand their values and then actually trying to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.

If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.

This has some limits: there are some views I consider morally atrocious, and I prefer not giving those people the tools to more effectively pursue their goals.

But overall, I would much prefer more people to have access to cause prioritisation tools, and not just people who find longtermism appealing. What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).

I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-17T20:54:29.842Z · score: 3 (2 votes) · EA · GW

Thank you so much for the links! Possibly I was just being a bit blind. I was pretty excited about the Aligning Recommender systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post.

I'm not sure whether they quite get to the bottom of the issue though (though I am not sure whether there is a bottom of the issue; we are back to 'I feel like there is something more important here but I don't know what').

The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals - first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. Although it is up for debate whether aligning recommender systems to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I doubt a bit.

Your second paragraph feels like something interesting in the capitalism critiques - we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want. Are there important lessons we can learn from this?

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-17T16:35:21.710Z · score: 15 (8 votes) · EA · GW

[epistemic status: musing]

When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal', I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').

I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.

Some critics of market economies think this is exactly what the problem with market economies is: they should maximize what people want, but they maximize profit instead, and these two goals are not as aligned as one might hope. You could just call it the market economy alignment problem.

A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies to people which glue them to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection.

These problems seem very alike to me. I am not sure where I am going with this, it does kind of feel to me like there is something interesting hiding here, but I don't know what. EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.

Some 'latestage capitalism' memes seem very similar to Paul's 'What Failure Looks Like' to me.

Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.

Comment by denise_melchin on Buck's Shortform · 2020-09-13T19:33:27.921Z · score: 11 (4 votes) · EA · GW

I have felt this way as well. I have been a bit unhappy with how many upvotes some of my, in my view, low-quality critiques have gotten (and I think I may have fallen prey to a poor incentive structure there). Over the last couple of months I have tried harder to avoid that by going through a mental checklist before I post anything, but I'm not sure whether I am succeeding. At least I have gotten fewer wildly upvoted comments!

Comment by denise_melchin on How have you become more (or less) engaged with EA in the last year? · 2020-09-11T19:19:08.998Z · score: 17 (7 votes) · EA · GW

EA becoming more intellectually sophisticated does not feel like a contradiction to what I was trying to communicate when I said intellectually stale (I may have expressed myself poorly!). As you said, there were more new ideas at the beginning, but that is not the only way to be intellectually non-stale. While there may be more fine-grained, detailed and possibly more true claims in the EA community right now, that does not mean that participating in contributing these ideas is as accessible as it once was, which is part of what I consider non-staleness to be.

I am a bit confused what you are trying to communicate when you say that it's unfair to only look at the number of ideas.

Comment by denise_melchin on How have you become more (or less) engaged with EA in the last year? · 2020-09-11T17:13:28.561Z · score: 32 (14 votes) · EA · GW

My response feels similar to Joey's and Kerry's.

I care about doing good as much as I always have and am as invested in doing it, but I have found the EA community to become intellectually stale. Personally, I also feel like the EA community does not incentivise me to do good as much as it once did (but more to 'perform EA-ness').

I am still as socially involved as I have previously been, but feel more emotionally disconnected as well as less intellectually excited.

Comment by denise_melchin on Asking for advice · 2020-09-09T18:08:33.850Z · score: 4 (2 votes) · EA · GW

Very similar here. I wouldn't quite call it an unfriendly/status thing, but it felt like a social interaction with a friend got sucked into commercialized business mode ("capitalism ate your friendships!" - definitely not my endorsed reaction, but it feels kind of true).

Comment by denise_melchin on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T08:37:33.310Z · score: 13 (5 votes) · EA · GW

I would really appreciate it if commenters were more careful to speak about this specific instance of uninviting a speaker instead of uninviting speakers in general, or at least clarified why they chose to speak about the general case.

I am not sure whether they chose to speak about the general case because they think uninviting the speaker in this particular case would in itself have been an appropriate choice but sets up a slippery slope to uninviting more and more speakers, or because they think uninviting in this particular case was already net negative for the movement.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-03T06:11:42.449Z · score: 9 (6 votes) · EA · GW

There is now a Send to Kindle Chrome browser extension, powered by Amazon. I have been finding it very valuable for actually reading long EA Forum posts as well as 80,000hours podcast transcripts.

Comment by denise_melchin on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T15:13:47.509Z · score: 39 (23 votes) · EA · GW

I certainly agree that it would be great if the debate was thoughtful on all sides. But I am reluctant to punish emotional responses in these contexts.

When I look at this thread, I see a lack of women participating. Exceptions: Khorton, and Julia clarifying a CEA position. There were also a couple of people whose gender I could not quickly identify.

There are various explanations for this. I am not sure the gender imbalance on this thread is actually worse than on other threads. It could be noise. But I know why I said nothing: I found writing a thoughtful, non-emotional response too hard. I expect to fail because the subject is too upsetting.

This systematically biases the debate in favour of people who bear no emotional cost in participating.

Comment by denise_melchin on EA Meta Fund Grants – July 2020 · 2020-08-14T11:25:56.360Z · score: 12 (8 votes) · EA · GW

In short: yes, we are open to funding other mentorship programmes, including ones open to men.

I would be pretty sad if people felt less motivated to start a mentorship programme because we already funded another. I am hoping for the opposite effect. I agree that mentorship is very valuable.

My intuition is that people take our willingness to fund a project with one target audience as positive evidence that we would be willing to fund a similar project with a different target audience, as it provides proof of concept that we are willing to fund such projects in principle. For example, we have funded fundraising & tax-deductibility initiatives in different countries so far, and we keep seeing applications for them.

If someone wants to start a mentorship programme with a different target audience to WANBAM, I am keen for them to apply to the Meta Fund.

Comment by denise_melchin on EA is vetting-constrained · 2020-06-29T16:51:14.621Z · score: 10 (4 votes) · EA · GW

Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12).

Comment by denise_melchin on Max_Daniel's Shortform · 2020-06-14T18:00:55.192Z · score: 21 (8 votes) · EA · GW

(Have not read through Max's link dump yet, which seems very interesting; I also feel some skepticism of the 'new optimism' worldview.)

One major disappointment for me in Pinker's book, as well as in related writings, has been that they do little to acknowledge that how much progress you think the world has seen depends a lot on your values. To name some examples, not everyone views the legalization of gay marriage and easier access to abortion as progress, and not everyone thinks that having plentiful access to consumer goods is a good thing.

I would be very interested in an analysis of 'progress' in light of the different moral foundations discussed by Haidt. I have the impression that Pinker focuses exclusively on the Care/harm foundation while completely ignoring others like Sanctity/purity or Authority/respect, and this might be where some part of the disconnect between the 'new optimists' and their opponents is coming from.

Comment by denise_melchin on What are the leading critiques of "longtermism" and related concepts · 2020-06-03T17:50:14.880Z · score: 3 (2 votes) · EA · GW

That's very fair, I should have been a lot more specific in my original comment. I have been a bit disappointed that within EA longtermism is so often framed in utilitarian terms - I have found the collection of moral arguments in favour of protecting the long-term future brought forth in The Precipice a lot more compelling and wish they would come up more frequently.

Comment by denise_melchin on Finding an egg cell donor in the EA community · 2020-05-30T17:45:59.806Z · score: 4 (2 votes) · EA · GW

You would need to check the legality of this however - this is illegal in at least a few European countries, including the UK and Germany.

Comment by denise_melchin on What are the leading critiques of "longtermism" and related concepts · 2020-05-30T17:42:48.816Z · score: 11 (8 votes) · EA · GW

Most people don't value not-yet-existing people as much as people already alive. I think it is the EA community holding the fringe position here, not the other way around. Nor is total utilitarianism a majority view among philosophers. (You might want to look into critiques of utilitarianism.)

If you pair this value judgement with the belief that existential risk is less valuable to work on than other issues when it comes to affecting people this century, you will probably want to work on "non-longtermist" problems.

Comment by denise_melchin on Finding an egg cell donor in the EA community · 2020-05-24T09:47:34.841Z · score: 11 (7 votes) · EA · GW

Hi linn!

Which country are you in? I have been putting a lot of thought into becoming an egg donor in the UK over the past few months and am currently in the evaluation process for one egg bank and one matching service.

First I would like to note that while most matching services primarily match on phenotype, there certainly are some where you get a detailed profile from the potential donors. I would be happy to tell you the name of the matching agency in the UK that I have been working with, which strongly encourages getting a good personality match.

I would expect finding a donor directly from the EA community to be much harder, but maybe someone will respond to your request (it would be good to know where you live, though!). Feel free to PM to chat more.

Comment by denise_melchin on Long-Term Future Fund and EA Meta Fund applications open until June 12th · 2020-05-15T13:41:20.122Z · score: 13 (8 votes) · EA · GW

We have a limited pot of money available, so our decisions are primarily bottlenecked by its size. We have occasionally (once?) decided not to spend the complete available amount in order to have more money available for the next distribution cycle, when we had reason to assume we would be able to make stronger grants then.

I am not sure whether that answered your question?

Comment by denise_melchin on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-15T11:10:43.265Z · score: 19 (7 votes) · EA · GW

This is very much an aside, but I would be really curious how many of the people you perceive as having changed their views towards longtermism would actually agree with this. (According to David's analysis, it is probably a decent number.)

E.g. I'm wondering whether I would count in this category. From the outside I might have looked like I changed my views towards longtermism, while from the inside I would describe my views as pretty agnostic, but I prioritised community preferences over my own. There might also be some people who felt like they had to appear to have or act on longtermist views to not lose access to the community.

Comment by denise_melchin on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-13T16:35:50.749Z · score: 6 (3 votes) · EA · GW

Yes, that is what I meant. Thank you so much for providing additional analysis!

Comment by denise_melchin on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-11T17:31:57.102Z · score: 71 (28 votes) · EA · GW

Thank you for looking into the numbers! While I don't have a strong view on how representative the EA Leaders forum is, taking the survey results about engagement at face value doesn't seem right to me.

On the issue of longtermism, I would expect people who don't identify as longtermists to now report being less engaged with the EA community (especially with the 'core') and to identify as EAs less. Longtermism has become a dominant orientation in the EA community, which might put people off it even if their personal views and actions related to doing good haven't changed, e.g. their donation amounts and career plans. The same goes for looking at how long people have been involved with EA - people who aren't compelled by longtermism might have dropped out of identifying as EAs without actually changing their actions.

Comment by denise_melchin on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-19T11:35:15.541Z · score: 54 (27 votes) · EA · GW

Strong upvoted. I think a post like this is extremely useful as a resource to clarify 80,000 Hours' role for the community. I appreciate that 80,000 Hours has previously put effort into communicating how they see their role in the community in comments on this Forum, but communicating this clearly in one place, so people can easily point to it, seems very valuable to me.

Comment by denise_melchin on Discontinuous progress in history: an update · 2020-04-18T17:21:26.759Z · score: 8 (6 votes) · EA · GW

Great post!

Brief note: I found The Victorian Internet by Tom Standage (basically a history of the telegraph) very useful for training my intuition for what the development of a discontinuity like the telegraph looked like, both from the scientists' and engineers' perspective and in terms of the societal changes that followed.

Comment by denise_melchin on My personal cruxes for working on AI safety · 2020-02-13T22:03:43.648Z · score: 11 (8 votes) · EA · GW

This was great, thank you. I've been asking people about their reasons to work on AI safety as opposed to other world-improving things, assuming they want to maximize the world-improving things they do. Wonderful when people write it up without me having to ask!

One thing that would have made this post/your talk clearer (well, at least for me) is more detail on the question of how you define 'AGI', since all the cruxes depend on it.

Thank you for defining AGI as something that can do regularly smart human things and then asking the very important question of how expensive that AGI is. But what are those regularly smart human things? What fraction of them would be necessary (though that depends a lot on how you define 'task')?

I still feel very confused about a lot of things. My impression is that AI is much better than humans at quite a few narrow tasks, though this depends on the definition. If AI was suddenly much better than humans at half of all the tasks humans can do, but sucked at the rest, then that wouldn't count as artificial 'general' intelligence under your definition(?), but it's unclear to me whether that would be any less transformative, though this again depends a lot on the cost. Now that I think about it, I don't think I understand how your definition of AGI differs from the results of whole-brain emulation, apart from the fact that they used different ways to get there. I'm also not clear on whether you use the same definition as other people, whether those people usually use the same one, and how much all the other cruxes depend on how exactly you define AGI.

Comment by denise_melchin on Who should give sperm/eggs? · 2020-02-12T21:44:17.321Z · score: 5 (3 votes) · EA · GW

I'm fairly surprised by this response, as it doesn't match what I have read. The Human Fertilisation and Embryology Authority limits sperm and egg donors to donating to a maximum of ten families in the UK, although there is no limit on how many children might be born to these ten families (I'm struggling to link, but google 'HFEA ten family limit'). But realistically, they won't all want to have three children.

I'm curious whether you have a source for the claim that 99% of prospective sperm donors in the UK get rejected? I'm much less confident about this, but this doesn't line up with my impression. I also didn't have the impression they were particularly picky about egg donors, unlike in the US.

But yes, it's true for sperm and egg donors alike that in the UK they can be contacted once the offspring turns 18.

Comment by denise_melchin on Who should give sperm/eggs? · 2020-02-09T19:27:29.284Z · score: 5 (3 votes) · EA · GW

There are also multiple medical and genetic appointments required in advance. I am currently undergoing the process to become an egg donor in the UK (though there is a good chance that I will be rejected) and the process is quite involved. To some extent, this is also true for sperm donors.

Comment by denise_melchin on Who should give sperm/eggs? · 2020-02-09T19:25:33.391Z · score: 5 (3 votes) · EA · GW

Adding to what Khorton said, it depends a lot on where your bar lies for doing good that you consider worth doing, and on what you consider 'doing good' to be.

In the UK, there is an egg and sperm donor shortage, so there is some chance you will cause children to exist that wouldn't have existed otherwise (instead of just 'replacing' children).

Comment by denise_melchin on Doing good is as good as it ever was · 2020-01-26T18:18:44.633Z · score: 2 (1 votes) · EA · GW

No, I haven't. Given the number of upvotes Phil's comment received (from which I conclude a decent fraction of people do find arguments in this space demotivating, which is important to know), I will probably read up on it again. But I very rarely write top-level posts and the probability of this investigation turning into one is negligible.

Comment by denise_melchin on Doing good is as good as it ever was · 2020-01-25T19:58:34.442Z · score: 4 (3 votes) · EA · GW

Through thinking about these comments, I did remember an EA Forum thread from 4 years ago in which ii) and iii) were argued about: https://forum.effectivealtruism.org/posts/ajPY6zxSFr3BbMsb5/are-givewell-top-charities-too-speculative

It's worth reading the comment section in full. Turns out my position has been consistent for the past 4 years (though I should have remembered that thread!).

Comment by denise_melchin on Doing good is as good as it ever was · 2020-01-25T19:31:49.411Z · score: 18 (8 votes) · EA · GW

I've been involved in the community since 2012 - the changes seem drastic to me, both based on in-person interactions with dozens of people as well as changes in the online landscape (e.g. the EA Forum/EA Facebook groups).

But that is not in itself surprising. The EA community is on average older than when it started. Youth movements are known for becoming less enthusiastic and ambitious over time, when it turns out that changing the world is actually really, really hard.

A better test is: how motivated do EAs feel who are of a similar demographic to what long-term EAs were years ago, when EA started? I have the impression they are much less motivated. It used to be a common occurrence in e.g. Facebook groups to see people writing about how motivating they found it to be around other EAs. This is much rarer than it used to be. I've met a few new-ish early-20s EAs and I don't think I can name a single one who is as enthusiastic as the average EA was in 2013. I wonder whether the lack of new projects being started by young EAs is partially caused by this (though I am sure there are other causes).

To be clear, I don't think there has been as drastic a change since 2018, which is I think when you started participating in the community.

Comment by denise_melchin on Doing good is as good as it ever was · 2020-01-22T22:08:36.926Z · score: 4 (2 votes) · EA · GW

In principle you only need i) and iii), that's true, but I think in practice ii) is usually also required. Humans are fairly scope insensitive, and I doubt we'd see low community morale from ordinary do-gooding actions being less good by a factor of two or three. As an example, GiveWell's estimates of how much saving a life with AMF costs have historically differed by about this much - and it didn't seem to have much of an impact on community morale. Not so now.

Our crux seems to be that you assume cluelessness, or ideas in the same space, are a large factor in producing low community morale around doing good. I must admit that I was surprised by this response; I personally haven't found these arguments particularly persuasive, and most people around me seem to feel similarly about such arguments, if they are familiar with them at all.

Comment by denise_melchin on Doing good is as good as it ever was · 2020-01-19T09:54:09.803Z · score: 12 (5 votes) · EA · GW

Yep, I agree that if i) you personally buy into the long-termist thesis, and ii) you expect the long-term effects of ordinary do gooding actions to be bigger than short-term effects, and iii) you expect these long-term effects to be negative, then it makes sense to be less enthusiastic about your ability to do good than before.

However, I doubt most people who feel like I described in the post fall into this category. As you said, you were uncertain about how common this feeling is. Lots of people hear about the much bigger impact you can have by focussing on the far future. Significantly fewer are well versed in the specific details and practical implications of long-termism.

While I have heard about people believing ii) and iii), I haven't seen either argument carefully written up anywhere. I'd assume the same is true for lots of people. There has been a big push in the EA community to believe i); this has not been true for ii) and iii) as far as I can tell.

Comment by denise_melchin on In praise of unhistoric heroism · 2020-01-08T16:39:32.734Z · score: 53 (28 votes) · EA · GW

Thinking about this further, one concern I have with this post, as well as Ollie's comment, is that after reading it people could unduly underrate the amount of good the average Westerner can actually do.

If you have a reasonably high salary or donate more than 10% to AMF or similarly effective charities (and assuming donations don't become much less cost-effective), you can save hundreds of lives over your lifetime. Saving one life via AMF is currently estimated to cost only around £2,500. Even if you only ever earn the average graduate salary and only donate 10%, you can still save dozens of lives.
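
As a rough sketch (the £30,000 salary and 40-year career are illustrative assumptions of mine; the £2,500 per life is the estimate cited above):

£30,000 salary × 10% = £3,000 donated per year
£3,000 per year × 40 working years = £120,000 over a career
£120,000 ÷ £2,500 per life ≈ 48 lives saved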

For reference, Oskar Schindler saved 1200 lives and is now famous for it worldwide.

My words at the funeral of someone who saved dozens or even hundreds of lives would be a lot more laudatory than what was said about Dorothea.

Comment by denise_melchin on In praise of unhistoric heroism · 2020-01-08T12:01:20.344Z · score: 6 (5 votes) · EA · GW

Great post. I also think we could work more on the root cause of people feeling like this. Perhaps the message should be: "Doing good and having an impact is not about you. Doing good is for the world, its people and other living beings."

Comment by denise_melchin on More info on EA Global admissions · 2020-01-06T14:39:43.008Z · score: 34 (13 votes) · EA · GW

This might be a slightly silly suggestion and I'm not sure how best to implement it, but I think it might be useful to remind potential attendees that attending EAG is not obligatory just because you are part of the EA Community and/or care a lot about doing good well. I have heard from a few people who weren't particularly excited about attending EAG but still did so because that's 'what you do as an EA'. It seems sad that these people take up spots from people who are actually keen on EAG itself.

It only occurred to me fairly late last year that attending EAG is actually entirely optional. On a side note, rising ticket prices did help me come to the realisation that I did not actually want to go (and therefore didn't take up a spot from someone who was more keen on going).

Comment by denise_melchin on More info on EA Global admissions · 2020-01-04T23:19:06.691Z · score: 24 (9 votes) · EA · GW

I don't feel like I get more value out of large conferences, and I'd be curious to see more data on this question. For me, having more people at a conference makes it harder to physically find the people I actually want to talk to: they make up a smaller fraction of attendees and are more spread out. I have also had the impression that conversations at large conferences are shorter. In combination, I get much less value out of very large events compared to small or medium-sized ones.

The event size was one of the main reasons I decided, for the first time, not to attend EAG London this year. It is too big for me to get sufficient value out of it.

Comment by denise_melchin on Thoughts on doing good through non-standard EA career pathways · 2020-01-04T19:56:03.269Z · score: 13 (7 votes) · EA · GW

5. also has a negative impact on people who are trying to decide between different career options and would actually be happy to hear constructive criticism. I often feel like I cannot trust others to give honest feedback when I'm deciding between career options, because they prefer to be 'nice'.

Comment by denise_melchin on EA Meta Fund November 2019 Payout Report · 2019-12-11T19:31:17.507Z · score: 8 (6 votes) · EA · GW

Well, I'd assume this is because the LTFF team has more time available than the Meta Fund team. Plausibly largely driven by one volunteer who is very happy to spend a lot of time on the LTFF.