Why do content blockers still suck? 2021-01-15T22:57:36.480Z
How high impact are UK policy career paths? 2020-12-17T15:04:08.621Z
My mistakes on the path to impact 2020-12-04T22:13:30.309Z
Brief book review 2020 2020-12-03T21:30:07.843Z
Denise_Melchin's Shortform 2020-09-03T06:11:42.046Z
Doing good is as good as it ever was 2020-01-22T22:09:03.527Z
EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th 2019-09-13T19:34:24.347Z
EA Meta Fund: we are open to applications 2019-01-05T13:32:03.778Z
When causes multiply 2018-08-06T15:51:45.619Z
Against prediction markets 2018-05-12T12:08:35.307Z
Comparative advantage in the talent market 2018-04-11T23:48:56.176Z
Meta: notes on EA Forum moderation 2018-03-16T21:14:20.570Z
Causal Networks Model I: Introduction & User Guide 2017-11-17T14:51:50.396Z
Request for Feedback: Researching global poverty interventions with the intention of founding a charity. 2015-05-06T10:22:15.298Z
Meetup : How can you choose the best career to do the most good? 2015-03-23T13:17:00.725Z
Meetup : Frankfurt: "Which charities should we donate to?" 2015-02-27T20:42:24.786Z
What we learned from our donation match 2015-02-07T23:13:32.758Z
How can people be persuaded to give more (and more effectively)? 2014-10-14T09:49:42.426Z


Comment by denise_melchin on CEA update: Q4 2020 · 2021-01-21T22:19:10.471Z · EA · GW

(I'm German, but have lived in the UK for 4.5 years now.)

My best guess is that you are both right, and large cultural differences are at play. I found this really bizarre when I moved to the UK. In Germany, you are an ambitious overachiever if you have a 'career plan' at 22. In the UK this is standard.

Among educated Germans, people take longer to finish their degrees, are more likely to take gap years, and change degrees more often. Internships seem to be much rarer. The 'summer internship' system does not seem to exist as much in Germany, and is just not considered necessary in the same way. Most Germans do a Master's (which takes 2 years in Germany), as a Bachelor's degree alone is taken less seriously. Having children during your degree is more common.

Educated Germans just start full-time employment much later. This is so extreme that in my friendship circle I do not know any German non-EA who has finished their education (all including Master's) and started a full-time job before the age of 27 (!).

Comment by denise_melchin on Why do content blockers still suck? · 2021-01-21T17:32:15.945Z · EA · GW

Thanks for the response! Unfortunately, Freedom just stopped working for me many times. After I uninstalled and reinstalled it for the fifth time (which makes it work again for a while) and customer service had no idea what was going on, I gave up. I still use it on my phone, however.

I don't think there is anything on the market which blocks things by default, which is the primary feature I am looking for, plus much more fine-grained blocking (e.g. the inability to access or google content containing specific phrases).

Comment by denise_melchin on The ten most-viewed posts of 2020 · 2021-01-14T09:47:08.147Z · EA · GW

I'd still be curious how many unique views there are - I'm pretty surprised at the high view counts above. I had expected the discrepancy between unique views and upvotes to be smaller.

Are there just a lot of silent readers who never upvote or do the same readers who already upvoted click on the post again and again (to read the comments)?

Comment by denise_melchin on New infographic based on "The Precipice". any feedback? · 2021-01-14T09:02:50.064Z · EA · GW

This looks great! And I agree with Aslan that the minesweeper edition feels very different and I am glad you created it.

One note: existential risks are a concept distinct from both extinction risks and global catastrophic risks. Table 6.1 in Toby's book describes existential risks, which is what you are depicting here - existential risks include extinction risk, but also the risk that humanity will turn into a permanent dystopia, as well as permanent civilisational collapse (where humanity lives on).

Global catastrophic risks are different again: they are risks that kill at least 10% of the human population.

Comment by denise_melchin on The Folly of "EAs Should" · 2021-01-13T08:44:05.700Z · EA · GW

Agree with all of the above!

Comment by denise_melchin on The Folly of "EAs Should" · 2021-01-12T11:17:14.704Z · EA · GW

I don't currently know of a reliable way to actually do a lot of good as a doctor.

I do know of such a way, but that might be because we have different things in mind when we say 'reliably do a lot of good'.

Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy. If they might also do a great job as a quant trader, I would also suggest checking that out. But I doubt most doctors would make good quant traders, so it might still be one of the best opportunities for them.

I am less familiar with this and therefore not confident, but there are also some specialisations Doctors Without Borders have a hard time filling (while for others, there is an over-supply). I think this would be worth looking into, as well as other paths to deliver medical expertise in developing countries.

Comment by denise_melchin on Thoughts on being mortal · 2021-01-01T19:38:36.834Z · EA · GW

I appreciated this post.

Comment by denise_melchin on What are some potential coordination failures in our community? · 2020-12-20T18:44:35.782Z · EA · GW

I wrote a post on the subject here!

Comment by denise_melchin on Careers Questions Open Thread · 2020-12-17T22:27:50.572Z · EA · GW

Hi Ana,

It's great to hear you are so passionate about learning and doing research! My best guess would be that you should focus on getting some real world job experience for a year or so. While you may not have as much statistical knowledge yet as you might want, I suspect it is better for you to learn those skills in a supportive 'real work' environment than on your own. Given that you have a PhD and soon two Master's (impressive!), I expect employers will trust that they can train you up in the skills you need, so you don't have to learn them outside of a job first.

Something employers will often want to see is some evidence that you can solve their problems outside of a research/academic context. I expect it will be a lot easier for you to find a role you are really passionate about once you have some job experience, even if that means doing something that is not your dream job yet in the meantime.

Good luck!

Comment by denise_melchin on Careers Questions Open Thread · 2020-12-17T18:57:56.183Z · EA · GW

This is relevant not only to my own career: I asked a couple of questions here about the impact of UK civil service careers.

Comment by denise_melchin on How high impact are UK policy career paths? · 2020-12-17T15:16:30.199Z · EA · GW

Personal context that I did not add to the main body (as I want it to be helpful for other people too): I am currently a civil servant, just starting in a new role which I expect to stay in for a year or so.

In my previous role, my main goal was to gain generic career capital and become more optimistic about having an impact through my career. In my free time, I have been trying to think about my values, and am currently still thinking about what I believe about cause prioritisation as well as how to practically have an impact in the world (see the above questions).

If I don't find it plausible that the UK civil service has particularly good leverage compared to other options (e.g. earning to give), I will likely still focus on generic career capital in my role until I have a better sense of what my general views on how to best have an impact in the world are. If I do find it plausible that the UK civil service is a very promising path to have a high impact compared to other options, I will try harder to find out how to specifically have a high impact within the civil service and what my personal fit is, given that I am already there anyway.

I am not a UK national, and thanks to Brexit this unfortunately will not change, so a few paths are not open to me: e.g. DFID having now been merged into the Foreign Office rules that out, as well as options related to national security.

Comment by denise_melchin on What myths or misconceptions prevent people from supporting EA organizations that work on animal welfare or long-termist causes? · 2020-12-17T13:16:17.928Z · EA · GW

For instance, "the state of the world in 100 years does not affect me so I don't need to give to long-term causes."

This is not answering your question (and probably not very important), but I am a bit confused why you think this is an example of a myth or misconception?

Is this because you think there is a good chance of curing aging within the next 100 years, or because you might interpret the claim non-literally (e.g. people often do care that their grandkids have a good life, even though this still does not affect them personally), or something else?

Comment by denise_melchin on richard_ngo's Shortform · 2020-12-16T09:17:37.675Z · EA · GW

Hi Richard, I just wanted to say that I appreciate you asking these questions! Based on the number of upvotes you have received, other people might be wondering the same, and it's always useful to propagate knowledge like what Alex has written up.

I would have appreciated it even more if you had not directly jumped to accusing EA of being misleading (without any references) before waiting for any answers to your question.

Comment by denise_melchin on 80k hrs #88 - Response to criticism · 2020-12-14T14:54:12.194Z · EA · GW

Thank you all for your responses, I really appreciated them. Your perspectives make more sense to me now, though I have to say I still feel really confused.

[Following comment not exhaustively responding to everything you said.]

I hadn't intended to communicate in my first comment that Mark deliberately intended to violate the forum guidelines, but rather that he deliberately decided against being kind and curious. (Thank you for pointing that out, I did not think of the alternative reading.) I didn't provide any evidence for this because I thought Mark said it very explicitly at the start of his post:

To play by gentlemans rules is to their advantage - curtailing the tools in at my disposal to makes bullshit as costly as possible.

I acknowledge there are some negative costs to this (e.g. polluting the information commons with avoidable conflict), and good people can disagree about if the tradeoff is worth it. But I believe it is.

Gentleman's rules usually include things like being kind and curious, I would guess, and Mark says explicitly that he ignores them because the tradeoff is worth it to him. I don't understand how these lines can be interpreted in any other way; this seems like the literal reading to me.

I have to admit that even after all your kind, elaborate explanations, I struggle to understand how anything in the section 'Conflict can be an effective tactic for good' could be read as tongue-in-cheek, as it reads as very openly hostile to me (it's right there in the title?).

I don't think it is that unlikely that interviewees on the 80k podcast would respond to a kind thoughtful critique on the EA Forum. That said, this is not just about Tristan, but everyone who might disagree with Mark, as the 'Conflict can be an effective tactic for good' section made me doubt they would be treated with curiosity and kindness.

I will take from this that people can have very different interpretations of the same content, even if I think the content is very explicit and straightforward.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-14T14:18:58.460Z · EA · GW

Hi Michelle,

Sorry for being a bit slow to respond. I have been thinking about your question on how the EA community can be more supportive in situations I experienced, but struggled to come up with answers I feel particularly confident in. I might circle back to this question at a later point.

For now, I am going to answer instead what I suspect would have made me feel better supported while I was struggling to figure out what I should be doing, but I don't feel particularly confident:

i) Mentorship. Having a sounding board to talk through my decisions (and possibly letting me know that I was being silly when I felt like I wasn't allowed to make my own decisions) might have helped a lot.

ii) Having people acknowledge that I maneuvered myself into a position that wasn't great from the perspective of expected impact, and that this all kind of sucked.

That said, for the latter one, the bottleneck might have been me. I had quite a few people who I thought knew me well express surprise at how miserable I felt at the time, so apparently this was not as obvious as I thought.

I would expect my first suggestion to generalise: mentorship is likely very useful for a lot of people!

I had a lot of contact with local and global EAs, and without that I probably would have done worse. I particularly appreciated people's support when I was actually applying to 'real jobs' last year: both when I was trying to decide whether to accept a low-ball offer from a tech startup (which I rejected), and the wide support I received from civil servants on how to navigate the civil service application process.

In the post I mentioned that I mentally distanced myself from the EA community a bit, but I wouldn't say that I distanced myself from EA itself. This was a purely mental shift in how I relate to the community and to doing as much good as I can. Please don't kick me out ;-)

Comment by denise_melchin on Careers Questions Open Thread · 2020-12-12T17:18:39.344Z · EA · GW

I just wanted to thank you for starting this thread Ben. I have recently been thinking about how useful it would be to have a more casual EA space to discuss how to have an impact in your career than the options we currently have, and this thread seems like a great step in that direction.

Comment by denise_melchin on 80k hrs #88 - Response to criticism · 2020-12-12T09:00:40.985Z · EA · GW

Sure. I am pretty baffled by the response to my comments. I agree the first was insufficiently careful about the fact that Mark is a new user, but even the second got downvotes.

In the past, users of the forum have said many times that posting on the EA Forum gives them anxiety as they are afraid of hostile criticism. So I think it is good to be on the lookout for posts and comments that might have this effect. Being 'kind' and 'approaching disagreements with curiosity' should protect against this risk. But I ask the question: Is Tristan going to feel comfortable engaging in the Forum, in particular as a response to this post? I don't think so.

Quotes I thought were problematic, in that I think they would upset Tristan or put him off responding (as well as others who might work with him or agree with him):

I have a mini Nassim Taleb inside me that I let out for special occasions 😠. I'm sometimes rude to Tristan, Kevin Roose and others.

I read this as Mark proudly announcing that he likes to violate good discourse norms.

Others which I think will make Tristan feel accused and unwelcome (not 'kind' and not 'approaching disagreements with curiosity'):

It is because he has been one of the most influential people in building a white hot moral panic, and frequently bends truth for the cause.

Tristan's hyperbole sets the stage for drastic action.

Generally hostile:

To play by gentlemans rules is to their advantage - curtailing the tools in at my disposal to makes bullshit as costly as possible.

If the 'Conflict can be an effective tactic for good' section had not been written, I would not have downvoted, as it seems to add little to the content while likely making Tristan feel very unwelcome.

There was a post similar in style to Mark's arguing against Will here, and the response to that was pretty negative, so I am surprised that Mark's post is being perceived so differently.

I only rarely downvote. There have been frequent requests in the past that downvoters should generally explain why they downvoted. I had not had occasion to act on that before, but I took from those requests that the next time I downvoted, it would be good if I explained why. So I did - and then got heavily downvoted myself for it. I am not sure what to make of this: are the people requesting that downvoters generally explain themselves just different people from the ones who downvoted my comment (apparently so, otherwise they would have explained themselves)? Whatever the reason, I doubt I will explain my downvotes again in the future.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-11T15:14:37.548Z · EA · GW

Thanks David, this is more or less what I was trying to express with my response to Stefan in that thread.

I want to add that "making intellectual progress" has two different benefits: One is the obvious one, figuring out more true things so they can influence our actions to do more good. As you say, we may actually be doing better on that one.

The other one is to attract people to the community by it being an intellectually stimulating place. We might be losing the kind of people who answered 'stagnation' in the poll above, as they are not able to participate in the professionalised debates, if they happen in public at all.

On the other hand, this might mean that we are no longer deterring people who may have felt like they need to be into intellectual debates to join the EA community. I don't know what the right trade-off is, but I suspect it's actually more important not to put the latter group off.

Comment by denise_melchin on 80k hrs #88 - Response to criticism · 2020-12-11T12:08:50.865Z · EA · GW

I did not realise you are a new user and probably would have framed my comment differently if I had, I am sorry about that!

To familiarise yourself with our writing guidelines, you can find them on the left bar under 'About the Forum', or just click.

In the past, other users have stated they prefer when people who downvote give explanations for their downvotes. This does seem particularly helpful if you are new and don't know the ins and outs of our forum guidelines and norms yet.

It is great to see you engage with your expertise, and I think it would be a shame if users are put off from engaging with your writing because your content is framed antagonistically.

Comment by denise_melchin on 80k hrs #88 - Response to criticism · 2020-12-11T10:30:41.103Z · EA · GW

I downvoted this post. Some of our writing guidelines here are to approach disagreements with curiosity as well as trying to be kind. You are clearly deciding against both of these.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-10T21:09:56.664Z · EA · GW

This comment made me very happy! If you think you would benefit from talking through your career thoughts with someone and/or be accountable to someone, feel free to get in touch.

Comment by denise_melchin on Careers Questions Open Thread · 2020-12-10T10:37:09.177Z · EA · GW

First of all, you have shown an impressive amount of stamina! Well done.

My guess is that if you want to pursue this path, you should focus on getting more political contacts, for example get involved in party politics. I know a lot of people who worked for MPs (albeit in a different country) who got these roles via party political work.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-08T17:21:57.390Z · EA · GW

Something I want to add here:

I am not sure whether my error was how much I was deferring in itself. But the decision to defer or not should be made on well-defined questions and with clearly defined 'experts' you might be deferring to. This is not what I was doing. I was deferring on a nebulous question ('what should I be doing?') to an even more nebulous expert audience (a vague sense of what 'the community' wanted).

What I should have done first instead is define the question better: which roles should I be pursuing right now?

This can then be broken down further into subquestions - on cause prioritisation, which roles are promising avenues within causes I might be interested in, which roles I might be well suited for, etc. - whose answers I need to aggregate in a sensible fashion to answer the question of which roles I should be pursuing right now.

For each of these subquestions I need to make a separate judgement. For some it makes more sense to defer, for others, less so. Disappointingly, there is no independent expert panel investigating what kind of jobs I might excel at.

But then who to defer to, if I think this is a sensible choice for a particular subquestion, also needs to be clearly defined: for example, I might decide that it makes sense to take 80k at their word about which roles in a particular cause area are particularly promising right now, after reading what they actually say on their website on the subject, perhaps double-checking by asking them via email and polling another couple of people in the field.

'The community' is not a well-defined expert panel, while a careful aggregation of individual opinions can be - and those individuals, again, need to be asked well-defined questions. Note that this can be true even if I gave equal weight to every EA's opinion: sometimes it can seem like 'the community' holds an opinion that, if actually asked, few individual EAs hold, if any. This is especially true if messaging is distorted and I am not actually asking a well-defined question.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-07T11:38:30.189Z · EA · GW

Thank you for pointing that out, I agree candidates should not consider such a low number of applications not resulting in full-time offers as strong evidence against them having a chance.

I am not sure whether the question of whether one has a chance at an 'EA job' is even a good one, however. 'EA jobs' are actually lots of different roles which are not very comparable to one another. They do not make for a good category to think in - it is better to think about which field you might be interested in working in, and what kind of role might be a good fit. Some of those may happen to be at EA orgs, but most will not.

Also, I appreciate I did not clarify this in the post, but I did not get rejected from all 7 roles I applied to in 2018 - I got rejected from 5, dropped out of 1, and in 1 case could not do the 3-month trial stage I was invited to for visa reasons.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-07T11:02:34.841Z · EA · GW

I think I probably agree with the general thrust of this comment, but disagree on various specifics.

'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.

I also don't feel like moral uncertainty is a good example of people deferring too much.

A different way to look at this might be that if 'good judgement' is something that lots of people need in their careers, especially if they don't follow any of the priority paths (as argued here), this is something that needs to be trained - and you don't train good judgement by always blindly deferring.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-07T10:12:14.875Z · EA · GW

Definitely in 2017, possibly earlier, although I am not sure. I went to the main national event of my political organisation in autumn 2017, after not having been for a few years.

I could not generally re-engage as I moved countries in 2016. Unfortunately, political networks don't cross borders very well.

Comment by denise_melchin on My mistakes on the path to impact · 2020-12-06T10:58:35.311Z · EA · GW

And applying for jobs in EA orgs also doesn't have to come at great personal expense

I want to push back against this point a bit. Although I completely agree that you shouldn't treat working at non-EA orgs as a failure!

In my experience, applying for jobs in EA orgs has been very expensive compared to applying to other jobs, even completely ignoring any mental costs. There was a discussion about this topic here as well, and my view on the matter has not changed much - except I now have some experience applying to jobs outside EA orgs, backing up what I previously thought.

Getting to the last stage of the application processes I went through at EA orgs routinely took a dozen hours, and often dozens. This did not happen once when I applied to jobs outside of EA orgs - those application processes were just much shorter. I don't think applying to EA jobs as I did in 2018 would have been compatible with having a full-time job, or only with great difficulty.

Something I also encountered only in EA org application processes was them taking several months or being very mismanaged - going back and forth on where someone was in the application process, or having an applicant invest dozens of hours only to inform them that the org was actually unable to provide visas.

Comment by denise_melchin on Brief book review 2020 · 2020-12-04T09:26:12.878Z · EA · GW

I don't finish most books which I don't think are worth reading or don't even get properly started on them, so there are not that many anti-recommendations. Hopefully that is true for most readers?

Weakest on this list, from a learning perspective, is 21 Lessons for the 21st Century, and I probably also hit diminishing returns reading both the Tim Harford books and How We Got to Now, both of which are on the history of objects and inventions. (I am also just realising that I forgot to include Exactly, a book on the history of precision engineering which I abandoned halfway through and which is on a similar theme.)

For a lot of other books on this list I would only recommend reading them if that sort of book sounds like your cup of tea.

Comment by denise_melchin on A new, cause-general career planning process · 2020-12-03T17:35:07.583Z · EA · GW

From Ben's post:

"Later, we hope to release a ‘just the key messages’ version that aims to quickly communicate the key concepts, without as much detail on why or how to apply them. We realise the current article is very long – it’s not aimed at new readers but rather at people who might want to spend days or more making a career plan. "

[Edit: Ben said the same thing at the same time, but much more kindly!]

Comment by denise_melchin on A new, cause-general career planning process · 2020-12-03T13:34:39.711Z · EA · GW

That sounds excellent, thank you so much for the detailed response!

Comment by denise_melchin on A new, cause-general career planning process · 2020-12-03T12:37:11.403Z · EA · GW

Exciting! I would be curious whether you could give more detail on how the 2020 career planning process differs from the general advice in the 2017 career guide?

Comment by denise_melchin on Using a Spreadsheet to Make Good Decisions: Five Examples · 2020-12-02T13:33:45.105Z · EA · GW

You can also do a Monte Carlo analysis to see how outcomes might differ when you are not confident how strongly you want to prioritise and/or weight. Maybe some of these research investigations are not actually necessary!
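A minimal sketch of what such a Monte Carlo check could look like (the option names, criteria, scores and weight range are all made up for illustration): sample the uncertain weight many times and count how often each option comes out on top.

```python
import random

# Hypothetical decision: three options scored on two criteria (0-10 scale).
scores = {
    "Option A": {"impact": 8, "fit": 4},
    "Option B": {"impact": 6, "fit": 7},
    "Option C": {"impact": 4, "fit": 9},
}

def best_option():
    # Sample an uncertain weight for 'impact'; 'fit' gets the remainder.
    w_impact = random.uniform(0.3, 0.8)
    totals = {
        name: w_impact * s["impact"] + (1 - w_impact) * s["fit"]
        for name, s in scores.items()
    }
    return max(totals, key=totals.get)

# Run many trials and tally how often each option wins.
random.seed(0)
trials = 10_000
wins = {}
for _ in range(trials):
    winner = best_option()
    wins[winner] = wins.get(winner, 0) + 1

for name, count in sorted(wins.items()):
    print(f"{name}: wins {100 * count / trials:.1f}% of trials")
```

If one option wins the vast majority of trials, the exact weights don't matter much and further research into them is probably unnecessary; if the winner flips around a lot, that is the place to dig deeper.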

Comment by denise_melchin on Should effective altruists have children? · 2020-11-17T08:58:15.667Z · EA · GW

Strong +1. I was thinking of writing a very similar comment.

A good strategy to me seems to be to divide your resources into altruistic and personal buckets, decide on their respective sizes, and optimise within those buckets. It is pretty unlikely that having children will be one of the best options in the altruistic bucket, but it could be much closer to the top in the personal bucket.

Comment by denise_melchin on What are some quick, easy, repeatable ways to do good? · 2020-11-15T20:40:49.634Z · EA · GW

My best guess for something in this class - an ordinary day-to-day action most people can easily do that is still particularly high impact - is to get in touch with an elderly, socially isolated relative. They will be very happy about your phone call.

Comment by denise_melchin on Please Take the 2020 EA Survey · 2020-11-12T09:27:54.757Z · EA · GW

I think this was the first time I completed it in years! Thank you for organising.

Some notes: I was confused why I was supposed to select only up to three responses for EA entities which might have positively influenced my impact, but had unlimited responses for negative influences. I also thought it was a bit odd that I was asked to estimate my giving for 2020 but not my income.

Comment by denise_melchin on [Link] "Where are all the successful rationalists?" · 2020-10-17T21:29:22.764Z · EA · GW

This post seems to fail to ask the fundamental question "winning at what?". If you don't want to become a leading politician or entrepreneur, then applying rationality skills obviously won't help you get there.

The EA community (which is distinct from the rationality community - a distinction the author fails to note) clearly has a goal, however: doing a lot of good. The amount of money GiveWell has been able to move to AMF has clearly grown a lot over the past ten years, but as the author says, that only proves they have convinced others of rationality. We still need to check whether deaths from malaria have actually gone down by a corresponding amount due to AMF doing more distributions. I am not aware of any investigations of this question.

Some people in the rationalist community likely only have 'understand the world really well' as their goal, which is hard to measure the success of, though better forecasts can be one example. I think the rationality community stocking up on food in February before it was sold out everywhere is a good example of a success, but probably not the sort of shining example the author might be looking for.

If your goal is to have a community where a specific rationalist-ish cluster of people shares ideas, it seems like the rationalist community has done pretty well.

[Edit: redacted for being quickly written, and in retrospective failing to engage with the author's perspective and the rationality community's stated goals]

Comment by denise_melchin on What actually is the argument for effective altruism? · 2020-10-13T20:47:51.887Z · EA · GW

Thank you so much for the podcast Ben (and Arden!), it made me excited to see more podcasts and posts of the format 'explain the basic frameworks and/or assumptions behind your thinking'. I particularly appreciated that you mentioned that regression to the mean has a different meaning in a technical statistical context than the more colloquial EA one you used.

One thing I have been thinking about since reading the podcast: if I understood correctly, you are explicitly defining increasing the amount of good done by spending more of your resources as not part of the core idea of EA - the core idea is only about increasing the amount of good done per unit of resources. It was not entirely clear to me how large a role you think increasing the amount of resources people spend on doing good should play in the community.

I think I have mostly thought of increasing, or meeting an unusually high threshold of, resources spent on doing good as an important part of EA culture, but I am not sure whether others view it the same way. I'm also not sure whether considering it as such is conducive to maximising overall impact.

Anyway, this is not an objection, my thoughts are a bit confused and I'm not sure whether I'm actually properly interacting with something you said. I just wanted to express a weak level of surprise and that this part of your definition felt notable to me.

Comment by denise_melchin on jackmalde's Shortform · 2020-10-13T18:51:05.528Z · EA · GW

I was thinking the same! I had to google Muzak, but that also seems like pretty nice music to me.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-10-01T21:17:43.635Z · EA · GW

Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.) Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.

For example, I wonder whether I should write more comments pointing out what I liked in a post, even if I don't have anything to criticise, instead of just silently upvoting. This would clutter the comment section more, but it might be worth it if people feel more connected to the community when they hear more specific positive feedback.

I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should. It is valuable to have a space dedicated to 'serious discussions'. Although having an online community space might be more important than usual while we are all stuck at home.

Comment by denise_melchin on Parenting: Things I wish I could tell my past self · 2020-10-01T19:46:29.106Z · EA · GW

Thank you so much for this post! It's one of those posts that gives the community a more community-like feel, which is nice.

To share my experience: I have two kids, they are 10 and 3.5. What I would tell my younger self before my first kid mostly revolves around "slack", everything else went very well! I think my predictions around what having a kid would be like were mostly pretty decent and mentally preparing for a lot of challenges paid off.

But one thing I did not fully account for is how much slack matters for future plans, and how much having a child would reduce the slack I had. Slack would have been most relevant if I wanted to change my future plans, which I did not expect to change much (more of a young person's error). I did not properly budget for opportunities opening up or maybe changing my mind. E.g. it had not occurred to me that going to university abroad might be a better option than staying in my home country, but that would have been very difficult with a child.

I think my predictions and mindset were actually more off before my second child. I was much less mentally prepared for challenges and did not budget for them in the same way as I had before my first. Some of that was due to underestimating how different children can be and how much your experience can differ between them. I had heard this from other parents, but did not really want it to be true; surely I knew what was up after one child already? As it turned out, my experiences with my two children were pretty different: with my first, sleep had never been that big of a deal, while my second still does not quite sleep through the night at the age of 3.5 years. However, taking care of my second during daylight hours has been a lot easier than with my first; I didn't realise babies could be so easy!

Not mentally (and practically) preparing for challenges before my second child the way I had before my first was partially the same mistake, but it deserves its own mention. I find it a bit tricky to say how 'wrong' that was, however: would I actually want to let my younger self before my second child know about the challenges I had? I was engaging in wishful thinking, but babies are hard work, and maybe parents need a bit of wishful thinking to actually be willing to have another one. Otherwise hyperbolic discounting would stop them.

This is also the way I feel now: I'm hoping to have a third child soon-ish, but I pretend to myself that everything will be easy peasy, because my tendency to hyperbolically discount might otherwise deter me. Deluding myself might just be correct.

I don't think I changed much as a person due to having children.

Comment by denise_melchin on Thomas Kwa's Shortform · 2020-09-30T17:15:13.261Z · EA · GW

Strong upvoted. Thank you so much for providing further resources, extremely helpful, downloading them all on my Kindle now!

Comment by denise_melchin on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-29T11:21:53.540Z · EA · GW

I want to use the opportunity to point out that you can pledge more than 10%! This hasn't always been in my conscious awareness as much as it possibly should have been.

I pledged 10% in 2013, but changed my pledge to 20% a few months ago. :-)

Comment by denise_melchin on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-24T11:20:51.558Z · EA · GW

Thank you for writing this! I once failed a job interview because what I learned from the EA community as a 'confidence interval' was actually a credible interval. Pretty embarrassing.

Comment by denise_melchin on Buck's Shortform · 2020-09-24T09:12:32.272Z · EA · GW

It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case.

That's what I thought as well. The top critical comment also has more karma than the top level post, which I have always considered to be functionally equivalent to a top level post being below par.

Comment by denise_melchin on Thomas Kwa's Shortform · 2020-09-23T20:25:23.538Z · EA · GW

I have recently been thinking about the exact same thing, down to getting anthropologists to look into it! My thoughts on this were that interviewing anthropologists who have done fieldwork in different places is probably the more functional version of the idea. I have tried reading fairly random ethnographies to build better intuitions in this area, but did not find it as helpful as I was hoping, since they rarely discuss moral worldviews in as much detail as needed.

My current moral views seem to be something close to "reflected" preference utilitarianism, but now that I think this is my view, I find it quite hard to figure out what this actually means in practice.

My impression is that most EAs don't have a very preference utilitarian view and prefer to advocate for their own moral views. You may want to look at my most recent post on my shortform on this topic.

If you would like to set up a call sometime to discuss further, please PM!

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-21T21:08:00.774Z · EA · GW

Yes, completely agree, I was also thinking of non-utilitarian views when I was saying non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be will not find the EA community as valuable for helping them on that path as people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious, however, what a good community for that kind of person would be, and what good tools for that path look like.

I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner, but even just within longtermism we have disagreements over whether to be suffering-focussed or not, so there is already no one simple truth.

I'd be really curious what others think about whether humanity collectively would be better off according to most if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.

Comment by denise_melchin on Stefan_Schubert's Shortform · 2020-09-19T12:35:33.056Z · EA · GW

People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

I'm not sure I agree with this, so it is not obvious to me that there is anything special about GP research. But it depends on who you mean by 'people' and what your evidence is. The reference class of research also matters - I expect people are more willing to believe physicists, but less so sociologists.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-19T11:42:50.772Z · EA · GW

[status: mostly sharing long-held feelings&intuitions, but have not exposed them to scrutiny before]

I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.

The way I see the potential of the EA community is in helping people understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.

If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.

This has some limits: there are some views I consider morally atrocious, and I prefer not to give those people the tools to pursue their goals more effectively.

But overall, I would much prefer for more people to have access to cause prioritisation tools, not just people who find longtermism appealing. What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).

I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-17T20:54:29.842Z · EA · GW

Thank you so much for the links! Possibly I was just being a bit blind. I was pretty excited about the Aligning Recommender systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post.

I'm not sure whether they quite get to the bottom of the issue though (though I am not sure whether there is a bottom of the issue, we are back to 'I feel like there is something more important here but I don't know what').

The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does so because it is trying to maximize profit. Although it is up for debate whether aligning recommender systems to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I somewhat doubt.

Your second paragraph feels like something interesting in the capitalism critiques - we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want, are there important lessons we can learn from this?

Comment by denise_melchin on Denise_Melchin's Shortform · 2020-09-17T16:35:21.710Z · EA · GW

[epistemic status: musing]

When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal' I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').

I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.

Some critiques of market economies hold that this is exactly the problem with market economies: they should maximize for what people want, but instead they maximize for profit, and these two goals are not as aligned as one might hope. You could call it the market economy alignment problem.

A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies to people which glue them to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection.

These problems seem very alike to me. I am not sure where I am going with this; it does feel like there is something interesting hiding here, but I don't know what. EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.

Some 'latestage capitalism' memes seem very similar to Paul's What Failure looks like to me.

Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.