Posts

[Cause Exploration Prizes] Preventing stillbirths 2022-08-29T10:55:47.578Z
Why I am probably not a longtermist 2021-09-23T17:24:22.123Z
Why do content blockers still suck? 2021-01-15T22:57:36.480Z
How high impact are UK policy career paths? 2020-12-17T15:04:08.621Z
My mistakes on the path to impact 2020-12-04T22:13:30.309Z
Brief book review 2020 2020-12-03T21:30:07.843Z
Denise_Melchin's Shortform 2020-09-03T06:11:42.046Z
Doing good is as good as it ever was 2020-01-22T22:09:03.527Z
EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th 2019-09-13T19:34:24.347Z
EA Meta Fund: we are open to applications 2019-01-05T13:32:03.778Z
When causes multiply 2018-08-06T15:51:45.619Z
Against prediction markets 2018-05-12T12:08:35.307Z
Comparative advantage in the talent market 2018-04-11T23:48:56.176Z
Meta: notes on EA Forum moderation 2018-03-16T21:14:20.570Z
Causal Networks Model I: Introduction & User Guide 2017-11-17T14:51:50.396Z
Request for Feedback: Researching global poverty interventions with the intention of founding a charity. 2015-05-06T10:22:15.298Z
Meetup : How can you choose the best career to do the most good? 2015-03-23T13:17:00.725Z
Meetup : Frankfurt: "Which charities should we donate to?" 2015-02-27T20:42:24.786Z
What we learned from our donation match 2015-02-07T23:13:32.758Z
How can people be persuaded to give more (and more effectively)? 2014-10-14T09:49:42.426Z

Comments

Comment by Denise_Melchin on Open EA Global · 2022-09-06T17:18:37.444Z · EA · GW

Yvain is Scott’s old LW name.

Comment by Denise_Melchin on Give us feedback on the new version of the Effective Altruism Handbook! · 2022-09-03T19:58:49.711Z · EA · GW

I'm afraid I don't know anything. While I still like my piece, it wasn't intended to provide a strong case against longtermism, only to briefly explore my personal disagreements. In such a piece I would want to see the case against longtermism from different value systems, as well as actual engagement with the empirics around cause prioritisation, apart from the obvious: being a lot more thorough than I was.

Comment by Denise_Melchin on My mistakes on the path to impact · 2022-09-02T20:02:55.977Z · EA · GW

I’m sorry I’m only getting to this comment now: I would like to clarify that the reason I started to work outside the EA sphere was not exclusively financial. I decided against exploring this, but I had some suggestions for a generic grant in my direction. The work I did as a research assistant was also on a grant.

I much prefer a “real job”, and as far as I can tell, there are still very few opportunities in the EA job market I’d find enticing. I care about receiving plenty of feedback and legible career capital, and that’s much easier to get as part of an organization.

(But if someone wants to pay me six figures to write blogposts, they should let me know!)

I’m also a bit confused by your framing of “getting to keep” me. I am right here, reading your comment. :)

Comment by Denise_Melchin on Give us feedback on the new version of the Effective Altruism Handbook! · 2022-09-01T20:37:42.479Z · EA · GW

There’s a small selfish part of me which is happy that my “Why I am probably not a longtermist” post is shared as the critical piece on longtermism.

There’s a much bigger part which wishes that someone had written up something much more substantial though! I am a bit appalled that my post seems to be the best we as a movement have to offer to newcomers on critical perspectives.

Comment by Denise_Melchin on [Cause Exploration Prizes] Preventing stillbirths · 2022-08-29T20:32:07.284Z · EA · GW

I did not know this at the time of writing, but GiveWell recommended an Incubation Grant to an Evidence Action programme for syphilis treatment during pregnancy in 2020. They view the moral weights of stillbirth prevention as highly uncertain; in their CEA, they assign 33 QALYs to a stillbirth averted. This is consistent with a number I found once for what the British NHS assigns.

The CEA for syphilis prevention includes stillbirths averted in its total cost per life saved (coming out to a bit over $1,000), which is inconsistent with how GiveWell handles stillbirths in the CEA for malaria prevention. Stillbirths are not counted in the cost per life saved for malaria prevention.

Stillbirths only contribute a 9% “supplemental intervention-level adjustment” on top of the cost per life saved in the malaria prevention CEA. If there is one stillbirth for every two child deaths from malaria, and preventing a stillbirth is valued at 33/84 of preventing the death of a child under 5, this 9% supplemental intervention-level adjustment should be roughly twice as big (see the rough calculation below).
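
A quick back-of-the-envelope check, using only the figures above (roughly one stillbirth for every two under-5 malaria deaths, and the 33/84 relative valuation): 0.5 × 33/84 ≈ 0.2, i.e. a supplemental adjustment of roughly 20% rather than 9%.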

Comment by Denise_Melchin on Monkeypox: Is it worth worrying about? · 2022-08-17T16:54:09.895Z · EA · GW

Current reporting on monkeypox, particularly from government agencies/public health officials have been pretty terrible, trying to downplay that MPXV is predominantly spreading through sexual activity between men.

The only source for this claim you give is US based. I have not investigated this broadly, but the first two countries whose disease protection agencies I checked do make very clear that this outbreak is primarily in men who have sex with men.

The UK Health Security Agency on latest updates on monkeypox:

"While anyone can get monkeypox, the majority of monkeypox cases in the UK continue to be in gay, bisexual and other men who have sex with men, with the infection being passed on mainly through close contact in interconnected sexual networks."

The NHS Webpage on monkeypox says the same thing:

"Anyone can get monkeypox. Though currently most cases have been in men who are gay, bisexual or have sex with other men, so it's particularly important to be aware of the symptoms if you're in these groups."

The German Robert Koch Institut (a federal government agency and research institute on disease control and prevention) on the monkeypox outbreak and cases and situation in Germany:

Die Übertragungen erfolgen in diesem Ausbruch nach derzeitigen Erkenntnissen in erster Linie im Rahmen von sexuellen Aktivitäten, aktuell insbesondere bei Männern, die sexuelle Kontakte mit anderen Männern haben.

(translated: According to current findings, transmission in this outbreak is occurring primarily in the context of sexual activity, currently especially among men who have sexual contact with other men.)

Comment by Denise_Melchin on If you fail, you will still be loved and can be happy; a love letter to all EAs · 2022-07-25T07:04:05.785Z · EA · GW

Thank you for writing this Nuno.

Posts around self-worth, not feeling "smart enough" and related topics on the EA Forum don't resonate with me, despite my having had some superficially similar experiences in EA to the people who are struggling.

My best guess is that this is because the following is true for me:

Or, in other words, I agree that having psychological safety is good. But I think this is the case for true psychological safety, which could come from a circle of close friends or family who are in fact willing to support you in hard times. So psychological safety > no psychological safety >> a veneer of psychological safety that fails when it is tested.

I am happily married (to someone I found in the EA Community in 2014) and have a strong relationship with my parents.

That said, I do think there is something wrong with the EA Community when people trying to do as much good as they can do not feel appreciated! But it's important to narrow down what exactly it is that people should be able to expect from the Community (and where it needs to change) and what they should not.

Comment by Denise_Melchin on Why EAs are skeptical about AI Safety · 2022-07-19T08:30:04.403Z · EA · GW

Thanks for doing this!

The strength of the arguments is very mixed as you say. If you wanted to find good arguments, I think it might have been better to focus on people with more exposure to the arguments. But knowing more about where a diverse set of EAs is at in terms of persuasion is good too, especially for AI safety community builders.

Comment by Denise_Melchin on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-11T09:05:03.234Z · EA · GW

Ah, when you said 'significant amount' I assumed you meant a lot more. 10% of the total does not seem like much to me.

Comment by Denise_Melchin on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-10T12:33:45.451Z · EA · GW

Sorry, I didn't want to imply Caplan was making a more nuanced argument than you suggested! I do think he makes a much more nuanced argument than the OP suggests however.

EAs seem generally receptive to resources like Emily Oster’s books, Bryan Caplan’s book, or Scott Alexander’s Biodeterminist Guide (and its sequel), which all suggest to varying degrees that a significant amount of the toil of parenting can be forgone with near-zero cost.

I think this is not only false, but also something none of the authors actually claim.

Comment by Denise_Melchin on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-10T08:56:39.648Z · EA · GW

I am not excited. In my experience it is common for parents of young children to have a lot of ideas like this which they are keen to implement, but to dial back on them as their kids get older. Implementing such ideas is a lot of work! You are not able to pursue a full-time career while fully homeschooling your kids. You would forfeit all the benefits of them growing bigger and needing you less. Also, my experience is that most parents realise that outdoing the traditional school system or its alternatives with homeschooling is a much higher bar than they thought. This was definitely true for me. (My oldest is ~12.)

Comment by Denise_Melchin on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-10T08:50:18.334Z · EA · GW

Paraphrasing Caplan without double-checking his sources: the shared environmental effects on politics and religion are on political and religious labels, not necessarily on actions. So your kid might also call themselves a Christian, but not actually go to church that much.

I agree we shouldn't discourage EAs too much from having kids, for some of the reasons you mention, but I am not sure who you are arguing against. I think anti-kid sentiment used to be stronger in the early days of EA, but I have not seen it around in years.

Justifying having children by the (low) chance that they are going to have a large impact later seems like a bad idea to me. It might hurt your relationship with them or, worse, cause mental health issues. Have children if you want them, don't have any if you don't.

As Abby has said, I don't think a significant part of parenting toil can actually be foregone. To be fair, I don't think Scott or Bryan Caplan actually claims that it can be! Caplan argues against ferrying your kids to lots of different after-school activities. But frankly, I don't know any parent who does this in the first place.

I am not able to comment on how having children has impacted my aspirations or productivity, as I had my first child before I encountered EA (or finished school, for that matter).

Comment by Denise_Melchin on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-06T14:20:45.515Z · EA · GW

That makes sense, thank you!

Comment by Denise_Melchin on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-06T09:41:00.219Z · EA · GW

Thank you for sharing!

My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally.

I share the experience that my concern is stronger when I am in a worse mood but I am not sure I share your conclusion.

My concern comes from an intuitive judgement when I am in a bad mood. When I am in a good mood it requires cognitive effort to remember how badly off many other people and animals are.

I don't want to deprioritise the worst off in favour of creating many happy lives in the future just because I have a very privileged life and "forget" how badly off others are.

Comment by Denise_Melchin on Why I am probably not a longtermist · 2022-04-21T09:23:04.844Z · EA · GW

This is a collection of links, for ease of reference, to content relevant to my post that has been published since.

Focusing on the empirical arguments to prioritise x-risks instead of philosophical ones (which I could not be more supportive of):

  1. Carl Shulman’s 80,000 Hours podcast on the common sense case for existential risk

  2. Scott Alexander writing about the terms long-termism and existential risks

On the definition of existential risk (as I find Bostrom’s definition dubious):

  1. Linch asking how existential risk should be defined

  2. Based on this comment thread in a different question by Linch

  3. Zoe’s paper which also has other stuff I have not yet read in full

How GCBRs could become and remain a solved problem, thereby getting us closer to existential security:

  1. A blogpost by Carl which was cross-posted to the EA Forum later than it was published on the blog

Comment by Denise_Melchin on Free-spending EA might be a big problem for optics and epistemics · 2022-04-21T09:06:15.083Z · EA · GW

You should keep in mind that high-earning positions enable large donations! Money is a lot more flexible in terms of which cause you can deploy it towards. In light of current salaries, one could even work on x-risks as a global poverty EtG strategy.

Comment by Denise_Melchin on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T08:24:48.409Z · EA · GW

I think neartermist is completely fine. I have no negative associations with the term, and suspect the only reason it sounds negative is because longtermism is predominant in the EA Community.

Comment by Denise_Melchin on Democratising Risk - or how EA deals with critics · 2021-12-30T17:46:14.328Z · EA · GW

Is there a non-PDF version of the paper available? (e.g. html)

From skimming, a couple of the arguments seem to be the same ones I brought up here, so I'd like to read the paper in full, but knowing myself I won't have the patience to get through a 35-page PDF.

Comment by Denise_Melchin on Why I am probably not a longtermist · 2021-09-27T18:14:33.971Z · EA · GW

I would be interested to read this!

Comment by Denise_Melchin on Why I am probably not a longtermist · 2021-09-27T17:41:22.529Z · EA · GW

This is just a note that I still intend to respond to a lot of comments, but I will be slow! (I went into labour as I was writing my previous batch of responses and am busy baby cuddling now.)

Comment by Denise_Melchin on Why I am probably not a longtermist · 2021-09-25T10:12:38.950Z · EA · GW

On your second bullet point: what I would add to Carl's and Ben's posts you link to is that suffering is not the only type of disvalue, or at least "nonvalue" (e.g. meaninglessness comes to mind). Framed in Haidt's moral foundations theory, suffering only addresses the care/harm foundation.

Also, I absolutely value positive experiences! More so for making existing people happy, but also somewhat for creating happy people. I think I just prioritise it a bit less than the longtermists around me compared to avoiding misery.

I will try to respond to the s-risk point elsewhere.

Comment by Denise_Melchin on Why I am probably not a longtermist · 2021-09-25T08:47:42.187Z · EA · GW

Thank you everyone for the many responses! I will address one point which came up in multiple comments here as a top-level comment, and otherwise respond to comments.

Regarding the length of the long-term future: my main concern here is that it seems really hard to reach existential security (i.e. extinction risks falling to smaller and smaller levels), especially given that extinction risks have been rising in recent decades. If we do not reach existential security, the expected future population is accordingly much smaller and gets less weight in my considerations. I take concerns around extinction risks seriously - but they are an argument against longtermism, not in favour of it. It just seems really weird to me to jump from 'extinction risks are rising so much, we must prioritize them!' to 'there is lots of value in the long-term future'. The latter is only true if we manage to get rid of those extinction risks.

The line about totalitarianism is not central for me. Oops. Clearly should not have opened the section with a reference to it.

I think that even with totalitarianism, reaching existential security is really hard - the world would need to be permanently locked into a totalitarian state.

I recommend reading this shortform discussion on reaching existential security.

Something that stood out to me in that discussion (in a comment by Paul Christiano: "Stepping back, I think the key object-level questions are something like "Is there any way to build a civilization that is very stable?" and "Will people try?" It seems to me you should have a fairly high probability on "yes" to both questions.")

as well as in Toby's EAG Reconnect AMA is how much the belief that we can reach existential security might be based on a higher baseline level of optimism about humanity than I have.

Comment by Denise_Melchin on Why I am probably not a longtermist · 2021-09-25T07:37:48.900Z · EA · GW

Thanks for trying to summarise my views! This is helpful for me to see where I got the communication right and where I did not. I'll edit your summary accordingly where you are off:

  1. You have person-affecting tendencies which make you ~~unconcerned~~ less concerned with reducing extinction risks than longtermists, although you are still concerned about the nearterm impacts and put at least some value on the loss of future generations (which also depends on how long/big we can expect the future to be)
  2. You are suffering-focused [Edit: I would not have previously described my views that way, but I guess it is an accurate enough description]
  3. You don’t think humanity is very good now nor that it is likely to be in the future under a sort of ‘business as usual’ path, which makes you ~~unenthusiastic~~ want to prioritise ~~about~~ making the future good over making it long or big
  4. You don’t think the future will be long (unless we have totalitarianism) which reduces the scope for doing good by focusing on the future
  5. You’re ~~sceptical~~ clueless whether there are lock-in scenarios we can affect within the next few decades, and don’t think there is much point of trying to affect them beyond this time horizon

Comment by Denise_Melchin on UK's new 10-year "National AI Strategy," released today · 2021-09-23T08:54:29.959Z · EA · GW

Wow. I am still reading through this, but I am impressed with the quality of input the UK government has clearly received and how well they wrote up their considerations and conclusions. Maybe this is normal for the reference class of UK gov strategy documents (if so, I was unaware), but it is not something I was expecting.

Comment by Denise_Melchin on Does the Forum Prize lead people to write more posts? · 2021-09-21T08:36:45.543Z · EA · GW

More on that qualitative feedback: While people generally react quite positively to winning the prize, few people have explicitly told us it made them want to write more (even when we asked directly), and our surveys of the Forum's userbase haven’t found many people saying that the chance of winning a prize leads them to invest more time in writing.

Previously I think I responded that it did not motivate me to write more/better, but in retrospect I think this is just false. At least to me, it feels very arrogant to be hopeful that I could win a prize, which therefore encourages dishonesty with myself. I expect this to be similar for other people.

Comment by Denise_Melchin on Pandemic prevention in German parties' federal election platforms · 2021-09-19T16:38:31.991Z · EA · GW

Thank you, this was really interesting! I voted already, mostly based on global aid, refugee, animal welfare and climate change considerations, but I would have wanted to look at pandemic considerations too if I had known a quick way to do it at the time. So I expect this to be a very helpful overview for others!

Comment by Denise_Melchin on Frank Feedback Given To Very Junior Researchers · 2021-09-03T08:44:01.266Z · EA · GW

I agree with the gist of this comment, but just a brief note that you do not need to do direct work to be "part of the EA community". Donating is good as well. :-)

Comment by Denise_Melchin on More EAs should consider “non-EA” jobs · 2021-08-21T10:28:52.775Z · EA · GW

I didn't originally, but then did when I could not get an offer for an EA job.

I do think in many cases EA org jobs will be better in terms of impact (or more specifically: high impact non-EA org jobs are hard to find) so I do not necessarily consider this approach wrong. Once you fail to get an EA job, you will eventually be forced to consider getting a non-EA job.

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-17T19:08:37.473Z · EA · GW

Thank you for providing more colour on your view, that's useful!

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-16T12:17:19.061Z · EA · GW

I am still confused whether you are talking about full-time work. I'd very much hope a full-time community builder produces more value than a donation of a couple of thousand dollars to the EA Funds.

But if you are not discussing full-time work and instead part-time activities like occasionally hosting dinners on EA-related themes, it makes sense to compare this to 10% donations (though I also don't know why you are evaluating 10% donations at ~$2000; the median salary in most rich countries is more than 10 times that).

But then it doesn't make sense to compare the 10% donations and part-time activities to the very demanding direct work paths (e.g. AI safety research). Donating $2000 (or generally 10%, unless they are poor) requires way less dedication than fully focussing your career on a top priority path.

Someone who would be dedicated enough to pursue a priority path but is unable to should in many cases be able to donate way more than $2000. Let's say they are "only" in the 90th percentile for ability in a rich country and will draw a 90th percentile salary, which is above £50,000 in the UK (source). If they have the same dedication level as someone in a top priority path they should be able to donate ~£15,000 of that. That is 10 times as much as $2000!
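
(As a quick check, assuming an exchange rate of roughly $1.3 per pound: donating ~£15,000 out of a £50,000 salary is a donation rate of 30%, and £15,000 comes to about $19,500, i.e. roughly ten times $2,000.)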

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-14T09:14:35.027Z · EA · GW

I think I agree that the cutoff is, if anything, higher than the top 3%, which is why I originally said 'at best'. The smaller that top number is, the more glaring the oversight of not mentioning this explicitly every time we have conversations on this topic.

I have been thinking about the initiative bit, thank you for bringing it up. It seems to me that ability and initiative/independent-mindedness somewhat trade off against each other, so if you are not in the top 3% (or whatever) for ability, you might still be able to have more impact through direct work than through donations if you have a lot of initiative. Buck argues along these lines in his post on doing good through non-standard EA career paths.

That would also be my response to 'but you can work in government or academia'. As soon as "impact" is not strictly speaking in your job description and therefore your impact won't just come from having higher aptitude than the second best candidate, you can possibly do a lot of good by showing a lot of initiative.

The same can be said re. what Jonas said below:

I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.

If you are good at taking initiative, you may be able to find the high impact paths which are harder to find than the priority paths, and "make up" for lower ability this way.

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-13T20:19:37.875Z · EA · GW

Hm, I agree that the most impactful careers are competitive, but the different careers themselves seem to require very different aptitudes and abilities so I'm not sure the same small group would be at the top of each of these career trajectories.

I agree with this. But I think adding all of these groups together won't result in much more than the top 3% of the population. You don't just need to be in the top 3% in terms of ability/aptitude for ML research to be an AI safety researcher; the bar will be much more selective. Say it's 0.3%. The same goes for directing global aid budgets efficiently. While these paths require somewhat different abilities/aptitudes, proficiency in them will be very correlated with each other.

In my view the majority of people currently involved in EA could develop a skillset that's quite useful for direct work.

I don't disagree with this, but this is not the bar I have in mind. I think it's worth testing your aptitude for direct work even if you are likely not in the top ~3% (often you won't even know where you are!), but with the expectation that the majority of your impact will likely still come from your donations in the long term.

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-13T20:10:41.252Z · EA · GW

The first thing that comes to mind here is that replaceability is a concern for direct work, but not for donations. Previously, the argument has been that replaceability does not matter as much for the very high impact roles, as they are likely heavy-tailed and therefore the gap between the first and second applicant is large.

But that is not true anymore once you leave the tails: you get the full impact from donations, but less impact from direct work due to replaceability concerns. This also makes me a bit confused about your statement that income is unusually heavy-tailed compared to direct work - possibly, but I am specifically not talking about the tails, but about everyone who isn't in the top ~3% for "ability".

Or looking at this differently: for the top few percent, we think they should try to have their impact via direct work first. But it seems pretty clear (at least I think so?) that a person in the bottom 20th percentile in a rich country should try to maximise income to donate instead of doing direct work. The crossover point where one should switch from focusing on direct work to focusing on donations therefore needs to be somewhere between the 20th and 97th percentiles. It is entirely possible that it is pretty low on that curve, and admittedly most people interested in EA are above average in ability, but the crossover point has to be somewhere and then we need to figure out where.

For working in government policy, I also expect only the top ~3% in ability to have a shot at highly impactful roles or to be able to shape their role in an impactful way outside of their job description. When you talk about advocacy, I am not sure whether you still mean full-time roles. If so, I find it plausible that you do not need to be in the top ~3% for community building roles, but that is mostly because we have plenty of geographical areas where no one is working on EA community building full-time, which lowers the bar for having an impact.

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-12T18:52:47.316Z · EA · GW

[Focusing on donations vs. impact through direct work]

This is somewhat of a followup to this discussion with Jonas (where I think we mostly ended up talking past each other) as well as a response to posts like this one by Ben T.

In the threads above, the authors argue that it makes more sense for “the community” to, say, focus on achieving impact through direct work rather than through donations. I think this is the right type of answer to the question of in which direction the community should be steered to maximise impact, so it is particularly relevant for people who have a big influence on the community.

But I am thinking about the question of how we can maximise the potential of current community members. For plenty of them, high impact job options like becoming a grantmaker directing millions of dollars or meaningfully influencing developing world health policy will not be realistic paths.

In Ben’s post, he discusses how even if you are trying to aim for such high impact paths, you should have backup options in mind, which I completely agree with.

What I would add is that if high impact direct job options do not work out for you, most of the time you should focus on donations instead. To be clear, I think it is worth trying high impact direct work paths first!

My impression is that at best the top 3% of people in rich countries in terms of ability (intelligence, work ethic, educational credentials) are able to pursue such high impact options. What I have a less good sense of is whether other people agree with this number.

It is easy to do a lot of good by having a job with average pay and donating 10-20% of your income. To me, it seems really hard to do much more good than that unless you are in the top 3% in terms of ability and put in a lot of effort to enter such a high impact path.

I would like to understand better whether other people disagree with this number or whether they are writing for the top ~3% as their target audience. If it’s the first, then I am wondering whether this comes from a different assessment of how common high impact roles are or how difficult they are.

Comment by Denise_Melchin on Denise_Melchin's Shortform · 2021-08-12T12:01:56.353Z · EA · GW

In case you didn't know it yet, you can access a user list of the EA Forum here, where you can see and sort by karma, post count and comment count.

Comment by Denise_Melchin on [PR FAQ] Adding profile pictures to the Forum · 2021-08-10T15:07:14.596Z · EA · GW

I am not Larks, but I really like having the Forum as a space where appearance is ~irrelevant. There are not many such (EA) spaces and I do not want the main one where only thoughts matter to be taken away. In my experience, people, including EAs, do treat you differently based on how you look. It is nice not to have to deal with that for once.

Comment by Denise_Melchin on [PR FAQ] Adding profile pictures to the Forum · 2021-08-10T09:16:28.662Z · EA · GW

I am not a fan either. Something I'd much rather like to see is more encouragement to use identifiable user names. Many people do, especially longer term users, but increasingly many people do not.

Comment by Denise_Melchin on Effective Altruism Polls: A resource that exists · 2021-07-10T17:57:05.248Z · EA · GW

Hi Aaron, thanks for sharing these!

I think it would be best to blur names and pictures, however, unless you asked for consent from everyone who is depicted. I have not voted in any of the polls, but if I had, I would have done it with the assumption that it could not be traced back to me outside of Facebook.

Comment by Denise_Melchin on Rodents farmed for pet snake food · 2021-05-04T18:38:26.168Z · EA · GW

There's something about this exchange I find super charming, thank you for sharing. Maybe how kind you both are, trying to help each other, with both of you earnestly motivated by completely different target audiences - you trying to do well by rats and mice, and the snake owner by snakes.

Comment by Denise_Melchin on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T18:23:55.181Z · EA · GW

I like this comment! But I think I would actually go a step further:

I don’t dispute the expertise of the people you listed.

I haven't thought too hard about this, but I think I do actually dispute the expertise of the people Ryan listed. But that is nothing personal about them!

When I think of the term 'expert' I usually have people in mind who are building on decades of knowledge of a lot of different practitioners in their field. The field of global priorities has not existed long enough and has not developed enough depth to have meaningful expertise as I think of the term.

I am very happy to defer to experts if they have orders of magnitude more knowledge than me in a field. I will gladly accept the help of an electrician for any complex electrical problem despite the fact that I changed a light switch that one time.

But I don't think that applies to global priorities for people who are already heavily involved in the EA community - the gap in knowledge about global priorities between these EAs and the global priorities 'experts' listed is much, much smaller than the gap between me and an electrician when it comes to electrics. So it's much less obvious whether it makes sense for these people to defer.

Comment by Denise_Melchin on My personal cruxes for focusing on existential risks / longtermism / anything other than just video games · 2021-04-13T14:29:25.175Z · EA · GW

Thank you for writing this! I'm currently drafting something similar and your post gave me some new ideas on how to structure it so it would be easy to follow.

Comment by Denise_Melchin on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-07T08:37:24.707Z · EA · GW

This is really cool, thank you! :-)

One thought: it is much easier now than it used to be to look at highly upvoted posts. In the old forum, old popular posts simply fell by the wayside; now you can sort posts by karma to find them. We also now have the favourites section, which encourages people to read highly upvoted posts they haven't read yet.

So I think, even according to your metric, highly popular posts now look more popular relative to the past than they really are.

Also, I'm proud to say I guessed the most well received forum post according to your metric correctly!

Comment by Denise_Melchin on Jakob_J's Shortform · 2021-04-06T20:43:54.977Z · EA · GW

I was thinking of a salary in the mid-£40k range - my salary as a civil servant - when I said that I feel like I need a higher salary to be able to afford living in London with children. :-) That is significantly above the median and average UK salary, and still ~20% above the median London salary, though I struggled to quickly find numbers for the average London salary.

I think if you have two people earning £40k+ each having kids in London is pretty doable even if both are GWWC pledgers. I think I'd feel uncomfortable if both parents brought in less than £30k, though it would be fine in different areas of the UK.

Only a few people in the UK can earn above £80k. Most people have kids anyway. I personally wouldn't think the trade-off you are suggesting is worth it on selfish/child-benefiting grounds alone (ignoring EtG potential). But different parents want to make different trade-offs for their children; they value different things.

If you are surrounded by people who think £80k salaries are a necessity to raise children, maybe you would find it helpful to surround yourself more with many different kinds of families of different socioeconomic backgrounds. Their kids can be happy too :-)

Comment by Denise_Melchin on Possible misconceptions about (strong) longtermism · 2021-04-06T15:15:41.817Z · EA · GW

When this post went up, I wrote virtually the same comment, but never sent it! Glad to see you write it up, as well as your comments below. I have the impression that in each supposed example of 'simple cluelessness', people just aren't being creative enough to see the 'complex cluelessness' factors, as you clarify with the chairs example in your other comment.

My original comment even included an explanation of how Phil's example of simple cluelessness is false, but it's false for different reasons than you think: if you try to conceive a child a day later, this will not, in expectation, affect when the child will be born. The impact is actually much stronger than that. It will affect whether you are able to conceive in this cycle at all, since eggs can only be fertilized during a very brief window of time (12-24 hours). If you are too late, no baby.

Comment by Denise_Melchin on Jakob_J's Shortform · 2021-04-04T17:21:49.012Z · EA · GW

This depends on where you live. But for Europe and the US, the biggest expense factors are usually housing (a bigger place is required, particularly in the long term) and childcare (both in terms of paid childcare for young children and lost wages). In some countries, however, childcare is subsidized, sometimes heavily so, which reduces the costs.

If just having lots of time were most important for being "successful" in raising a family, it would still cost a lot of money - it is time you cannot spend working.

When I lived in Germany with heavily subsidized childcare, I never felt like I needed to earn a lot of money to have children. Living in the UK now, particularly in London, with very little subsidized childcare, I feel more forced to have a higher earning job.

Julia has written about her experience here.

Comment by Denise_Melchin on Some quick notes on "effective altruism" · 2021-03-29T12:47:22.591Z · EA · GW

I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.

I meant this slightly differently than you interpreted it I think. My best guess is that less than 10% of the Western population are capable of entering potentially high impact career paths and we already have plenty of people in the EA community for whom this is not possible. This can be for a variety of reasons: they are not hard-working enough, not smart enough, do not have sufficient educational credentials, are chronically ill, etc. But maybe you think that most people in the current EA community are very well qualified to enter high impact career paths and our crux is there?

While I agree that government jobs are easier to get into than other career paths lauded as high impact in the EA Community (at least this seems to be true for the UK civil service), my impression is that I am a lot more skeptical than other EAs that government careers are a credible high impact career path. I say this as someone who has a government job. I have written a bit about this here, but my thinking on the matter is currently very much a work in progress and the linked post does not include most reasons why I feel skeptical. To me it seems like a solid argument in favour has just not been made.

I would feel excited about a project that tries to find out why donation rates are low (lack of money? lack of room for more funding? saving to give later and make donations more well-reasoned by giving lump sums? a false perception that money won't do much good anymore? something else?) and how we might increase them. (What's your guess for the reasons? I'd be very interested in more discussion about this, it might be worth a separate EA Forum post if that doesn't exist already.)

I completely agree with this (and I think I have mentioned this to you before)! I'm afraid I only have wild guesses why donation rates are low. More generally, I'd be excited about more qualitative research into understanding what EA community members think their bottlenecks to achieving more impact are.

Comment by Denise_Melchin on Some quick notes on "effective altruism" · 2021-03-26T20:48:59.335Z · EA · GW

The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

I think I disagree with this given what the community currently looks like. (This might not be the best place to get into this argument, since it's pretty far from the original points you were trying to make, but here we go.)

Two points of disagreement:

i) The EA Survey shows that current donation rates by EAs are extremely low. From this I conclude that there is way too little focus on personal donations within the EA community. That said, if we get some of the many EAs who are donating very little to work on the suggestions you mention, that is plausibly a net improvement, as the donation rates are so low anyway.

Relatedly, personal donations are one of the few things that everyone can do. In the post, you write that "The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc.", but as I understand the terms you use, this is probably less than 10% of the Western population. But maybe you disagree with that?

Accordingly, I do not view this as the longer-term goal of the EA community, but only as one of them. The other people, who cannot have high-flying high-impact careers - which is most people - should focus on maximizing donations instead.

ii) I think the EA community currently does not have the expertise to reliably have a positive impact on developing world policy. It is extremely easy to do harm in this area. Accordingly, I am also sceptical of the idea of introducing a hits-based global development fund, though I would need to understand better what you are intending there. I would be very keen for the EA community to develop expertise in this area, and some of the suggestions you make, e.g. growth diagnostics research, should help with that. But we are very far from having that expertise right now and should act accordingly.

Comment by Denise_Melchin on Progress Open Thread: March 2021 · 2021-03-24T10:09:04.980Z · EA · GW

So despite the fact that I spent quite a while thinking about adopting vs. having biological children a few years ago, and came out in favour of having biological children for now based on similar concerns to the ones you (and Dale) raise about more adverse outcomes in adopted children, I find your conclusion that people should be strongly dissuaded from adopting very surprising.

Your view that adopting might well be a life-destroying mistake does not seem to line up with the adoption satisfaction data Aaron linked. Maybe you meant this specifically for adopting teenagers? It was not clear from your comment.

In many ways, I prefer more awareness about the difficulties of adopting over the naive 'why don't you just adopt?!' that do-gooders who want to have children often hear. So I am grateful that this topic is brought up; I would just prefer a clearer presentation of the trade-offs, as well as more emotional sensitivity on the topic.

When I looked into this, I looked more at qualitative accounts (and also tried to answer a slightly different question - does demand from potential parents for low-risk adopted children outstrip how many such children there are?) and less at quantitative data. While this seems like a clear oversight in retrospect, apparently this led me to a more negative impression than is warranted, now that I am looking at the data you linked. If you had told me that 1.5% of biological children have substantial drug abuse problems as defined in the paper and asked me to guess the percentage for adopted children, I would have guessed way more than 3.5%. I was also surprised by the adoption satisfaction data Aaron linked. So if anything, you are leaving me with a more positive impression of adoption, and are motivating me to look into the topic again.

Thus I am surprised you are so confident in your position that you would be willing to spend so much time on dissuading people from adopting. (You are welcome to try to dissuade me from thinking this is even worth my time to look into!) To me, the outcomes do not seem to be 'very predictably bad'.

While on average adopted children have worse outcomes than biological children, this really does not need to be true for each individual making this choice. It is also not the only factor which matters. To name some examples which can tilt the decision: infertility, same-sex couples, previous difficult pregnancies or birth trauma, family history of genetic diseases like Huntington's, more garden-variety heritable risk factors for issues like ADHD, autism and depression, how high risk the potential adoptive children actually are, e.g. based on their age and previous history, etc.

Comment by Denise_Melchin on Please stand with the Asian diaspora · 2021-03-20T09:39:19.218Z · EA · GW

Despite the lack of good data, I suspect that it is indeed the case that anti-asian crimes have risen significantly this year. We known that violent crime in general has increased significantly since the BLM protests/riots of last summer, and that attacks on asians are disproportionately caused by blacks (28% for 2018, the last year we have data, vs just 15% for white and hispanic victims). So my guess is that reductions in policing as a result of the protests have left many asians exposed. Most races are primarily victimised by others of the same race (62% for whites, 70% for blacks), but this is far less true for asians (24%). Presumably it is these inter-racial crimes that asians disproportionately suffer from which either are, or at least are reported as, hate crimes.

Given the one source you give, I am wondering whether you are talking about the US only? If so, this is something you should clarify in this paragraph, as I would not necessarily expect patterns like this to generalize to other countries.

Comment by Denise_Melchin on AMA: Toby Ord @ EA Global: Reconnect · 2021-03-17T21:41:29.265Z · EA · GW

+1, very interested in this. I didn't find the reasons in The Precipice that compelling or detailed enough, so I'd be curious to hear more.