Posts

New Cause area: The Meta-Cause [Cause Exploration Prize] 2022-08-11T17:21:33.935Z
How and why to turn everything into audio 2022-08-11T08:49:32.749Z
Longtermists Should Work on AI - There is No "AI Neutral" Scenario 2022-08-07T16:43:17.065Z
Three common mistakes when naming an org or project 2022-07-23T13:22:15.737Z
Four reasons I find AI safety emotionally compelling 2022-06-28T14:01:33.340Z
Doing good easier: how to have passive impact 2022-05-02T14:48:29.794Z

Comments

Comment by Amber Dawn (Amber) on $20K in Bounties for AI Safety Public Materials · 2022-08-05T08:45:56.842Z · EA · GW

I'm interested in collaborating on this with someone who knows a lot about AI safety but doesn't have the time, ability, or inclination to write a public-facing explainer - you could explain a topic to me over a call or calls, and I could write it up. I'm very much not an expert on AI safety, but in some ways that might be good for something like this - I don't have the curse of knowledge, so I'll have a better sense of what people who are new to the subject will or will not understand.

Comment by Amber Dawn (Amber) on Another call for EA distillers · 2022-08-04T09:32:02.182Z · EA · GW

I'm interested in doing something like this. What do people think are some specific topics or ideas that urgently need distilling? This could be something that you yourself would like to understand, but where there seems to be an intimidating quantity of writing. Or it could be something that you do understand and people keep asking you about, and you wish there was a quick explainer you could link them to.

Comment by Amber Dawn (Amber) on AGI Safety Needs People With All Skillsets! · 2022-07-25T20:10:59.374Z · EA · GW

Related question: I have strong writing skills but no technical background. If I wanted to learn more technical stuff so I could help technical researchers write things, what should I learn? Where should I start? (Bearing in mind that I'd be starting at the 101 level for most things.)

Comment by Amber Dawn (Amber) on On Elitism in EA · 2022-07-24T16:21:18.917Z · EA · GW

This post makes me uncomfortable. I think elitism is straightforwardly pretty bad, and it should be discouraged in EA. To be clear, what I strongly object to is a preference for hiring/funding people from elite institutions. If EA orgs hire/fund people in a way that's blind to their backgrounds and they end up hiring/funding disproportionately many people from elite institutions, that's more complicated. 

Why is elitism bad?

-I have strong emotional, not-really-utilitarian intuitions that it's bad to not give people a fair chance because of their background

-elite institutions maybe select for competence and intelligence, but highly imperfectly - many competent/intelligent people have no association with elite institutions, and many people at elite institutions are there because of privilege or having learnt how to blag and bluff very well, rather than competence. (I've attended 2 elite universities, so I know this to be true, lol!) I think there are just many more things hirers/funders could do to work out who's the best fit. 

-your 2nd 'pro' - that financial stability is necessary for getting involved - could actually be a con in some areas. If a project has to receive funding from the community, that's obviously more (financially) costly for the community, but the advantage is that the project has passed some quality filter - someone has decided to fund it, so it's more likely to be good. Whereas if someone is self-funding, they might be doing something that is misguided. E.g., I know a couple of people who have got FTX grants to do projects. Without those grants, they would have found it hard to quit their day jobs and do the projects. I think (a) it's good that they got this funding, and (b) the fact that they got the grants is a strong vote of confidence in them/their projects.

To be clear, grantmaking of this kind also has flaws. I just think it would be bad if the only people heading up projects in EA were people who happened to be wealthy, either because of their family background or because they happened to have the skills/inclination to get a high-paying job.

Comment by Amber Dawn (Amber) on Fellowship alternative: idea synthesis scenarios · 2022-07-22T13:08:51.736Z · EA · GW

This sounds like a really interesting idea! I'm ambivalent about 'reading group'-style fellowships, so I'm glad people are coming up with other ideas. I'd love to hear more about how you envisage this working in practice. 

Comment by Amber Dawn (Amber) on Confused about "making people happy" vs. "making happy people" · 2022-07-16T23:36:29.076Z · EA · GW

I'm also in favour of making-people-happy over making-happy-people.

I said this below in a reply, but I just want to flag that some people assume that if you're in the making-people-happy/person-affecting camp, you must not care about future people. This isn't true for me - I do care about future people and hope they have good lives. Because there almost certainly will be people in the future, for me, improving the future counts as making-people-happy! But I'm indifferent about how many happy people there are.

Comment by Amber Dawn (Amber) on Confused about "making people happy" vs. "making happy people" · 2022-07-16T23:31:53.990Z · EA · GW

As someone with some sort of person-affecting view, I think there's a relevant distinction to be made between (1) not caring about potential/future people, and (2) being neutral about how many potential/future people exist. Personally, I do care about future people, so I wouldn't sign the binding oath. In 50 years, if we don't go extinct, there will be lots of people existing who don't exist now - I want those people to have good lives, even though they are only potential people now. For me, taking action so that future people are happy falls under 'making people happy'.

Comment by Amber Dawn (Amber) on EA for dumb people? · 2022-07-13T21:00:25.299Z · EA · GW

I didn't mean to imply that intelligence doesn't matter - more that there are different types of intelligence (some of which are actually underrepresented in EA); or, to put it another way, strengths other than IQ can also be very useful.

Comment by Amber Dawn (Amber) on Recommendations for non-technical books on AI? · 2022-07-13T12:29:08.828Z · EA · GW

Thanks, this is really helpful! 

Comment by Amber Dawn (Amber) on Someone should create an 'EA Blinkist' book summary service · 2022-07-13T12:26:09.344Z · EA · GW

I might be up for doing something like this! I might DM you about it.

Comment by Amber Dawn (Amber) on Recommendations for non-technical books on AI? · 2022-07-13T09:21:08.724Z · EA · GW

Annoyingly, I'm not going to answer your question, but I'm going to ask you a question: having read all of those books, which would you most recommend to a person who was only going to read one book about AI? 

If your answer is 'depends what they're looking for', imagine I'm that one person. My priorities are:
-a very clear case for why AI might be dangerous, with all the steps laid out and strongly argued for, such that I can easily pick out parts where I'm confused or disagree
-relatable everyday examples, both because they will help me understand, and because I'd like some at my fingertips so that I can more easily explain AI risk to non-EAs who aren't familiar with it (or aren't familiar with the sorts of risks that EAs worry about).

Comment by Amber Dawn (Amber) on EA for dumb people? · 2022-07-12T09:43:06.834Z · EA · GW

What are the implications you disagree with? 

Comment by Amber Dawn (Amber) on EA for dumb people? · 2022-07-12T09:42:36.521Z · EA · GW

Yeah absolutely! And it's not always worth experts' time to optimize for accessibility to all possible readers (if it's most important that other experts read it). But this does mean that sometimes things can seem more "advanced" or complex than they are.

Comment by Amber Dawn (Amber) on One Million Missing Children · 2022-07-11T20:41:47.204Z · EA · GW

Thank you for framing this in terms of wanting to support women in having the children that they desire - often when people talk about wanting to 'increase the birth rate', they don't disentangle 'helping people have kids that they want to have' from more coercive measures, which makes me nervous.

'The primary interventions I think a funder could make to support women achieving their fertility goals are through political advocacy and research. I don’t think any philanthropic funder, no matter how rich, is capable of directly moving this issue by, for example, offering financial support to families.'
-why wouldn't offering financial support be effective?

Does the research on 'missing children' ask why the respondents didn't have as many children as they wanted? Because this would be useful to know, and would help determine what interventions might be most effective. For example, if most people say that they didn't have as many children as they wanted because they couldn't afford it, then financial support would be the best intervention; if they say that they didn't find the right partner in time, maybe the best intervention is ?trying to make dating sites better?; if they say they waited too long and were then unable to conceive, then the fertility education you suggested might be very effective. Other reasons I can think of might be: lack of maternity leave, lack of social support, or their partner didn't want more kids.

Comment by Amber Dawn (Amber) on EA for dumb people? · 2022-07-11T20:27:38.134Z · EA · GW

This is such a good post + I agree so much! I'm sorry you feel like you don't fit in :( and I'm also worried about the alienating effect EA can have on people. Fwiw, I've also had worries like this in the past - not so much that I wasn't smart enough, but that there wasn't a place for me in EA because I didn't have a research background in any of the major cause areas (happy to DM about this). 

 A couple of points, some echoing what others have said:

-there's a difference between 'smart' and 'has fancy credentials'
-some stuff that's posted on the Forum is written for a niche audience of experts and is incomprehensible to pretty much everyone else
-imo a lot of EA stuff is written in an unnecessarily complicated/maths-y/technical way (and the actual ideas are less complicated than they seem)
-maybe you have strengths other than "intellectual" intelligence, e.g. emotional intelligence, people skills, being organized, conscientiousness...

I really do think this is a problem with EA, not with you - EAs should offer more resources to people who are excited to contribute but don't fit into the extremely narrow demographic of nerdy booksmart STEM graduates. 

Comment by Amber Dawn (Amber) on EA for dumb people? · 2022-07-11T20:07:44.672Z · EA · GW

Yes, agree 100%! In general, I think EA neglects humanities skills and humanistic ways of solving problems. 

Comment by Amber Dawn (Amber) on Person-affecting intuitions can often be money pumped · 2022-07-08T08:30:30.486Z · EA · GW

I don't understand this - why would someone with this view want to receive $0.01 to move from World 1 to World 2, and from World 3 to World 1, rather than being neutral either way?

Comment by Amber Dawn (Amber) on What alternatives to intro fellowships have groups actually tried? · 2022-06-30T16:14:55.811Z · EA · GW

This is cool! I'm already into EA but would love to do something like this.

Comment by Amber Dawn (Amber) on david_reinstein's Shortform · 2022-06-18T18:23:10.607Z · EA · GW

This is a really interesting idea! I'm very fond of charity shops, so I love the idea of opening ones for EA charities. I have no idea how easy or hard it would be to do, or how it compares to other fundraising tactics, but it seems like it could have a big impact, both from profits and from raising awareness. It could be a good project for people with experience starting or running shops.

Comment by Amber Dawn (Amber) on What is the right ratio between mentorship and direct work for senior EAs? · 2022-06-15T11:09:24.085Z · EA · GW

Good question! One consideration: in many cases, mentorship may not trade off directly against direct work. Many people report that there is a limited number of hours of research/writing/'deep work'/hard thinking that they can do in a day (people often say 2-5 hours), but that they can do other, less focussed work on top of that. This is certainly the case for me! (Not that I'm a senior researcher.) I suspect this is why, in academia, it's customary for professors to both research and teach - they wouldn't spend all their time researching anyway.

So, while it's certainly possible for mentorship responsibilities to be distracting and seriously trade off against research, I suspect that with the right balance, many researchers will be able to do research at their full capacity and also do a limited amount of mentorship.

Comment by Amber Dawn (Amber) on Skinny Dip for EA · 2022-06-15T10:48:04.350Z · EA · GW

fwiw I disagree with this. People often 'advertise' or argue for things on the Forum - e.g. promoting some new EA project, saying 'come work for us at X org!', or arguing strongly that certain cause areas should be considered. The main difference with this post is that the language is more 'advertising-esque' than normal - but that seems to me an aesthetic consideration. I'm not sure what would be gained by OP rewriting it with more caveats.

Re "one of the most effective charities", OP does immediately justify this in the bullet points below - it's recommended by The Life You Can Save, and Givewell says it 'may be in the range of cost-effectiveness of our top charities'. 

Comment by Amber Dawn (Amber) on Existential Hope Hoodie and T-Shirt · 2022-06-14T10:14:58.359Z · EA · GW

This is so cool! I'm definitely going to order something :)

Just FYI, the link to the site seems to be broken - it just links back to this post! 

Comment by Amber Dawn (Amber) on The 'Internal Family Systems (IFS)' Therapy Model. Sociological, and psychological potentials. · 2022-06-14T10:09:16.450Z · EA · GW

I also think IFS is a great paradigm and could be really helpful for lots of people, and I know lots of other EAs who are into it - maybe we should have an "EA IFS fans" Facebook group or Discord or something? (If you'd be interested in such a thing, reply to this comment)

I'm not sure what to suggest about how to use your abilities to promote IFS. You could train as a counsellor (if you're not one already). You could write popular books about IFS. Or you could try to get involved in mental health policy and promote it in health systems. I don't know where you're from, but in the UK, where I am, the 'go-to' psychotherapeutic treatment offered by the health service is CBT. I'm not against CBT, and I think it's very helpful for some people, but it's not useful for everyone or for all issues, so I think a person could have a big positive impact if they (for example) successfully persuaded the NHS to be more willing to fund and offer different therapy modalities, including IFS.

Comment by Amber Dawn (Amber) on New cause area: Violence against women and girls · 2022-06-09T13:41:09.883Z · EA · GW

+1 to the point that it doesn't really make sense to compare FGM and male circumcision.
I support bodily autonomy and lean towards believing that parents should not circumcise male infants. I'm also not claiming that there are no negative effects of male circumcision. And as Henry said, some forms of FGM are indeed quite minor (a symbolic 'nicking' or small cut).

That said, other forms of FGM are... horrifying, and just seem way worse than male circumcision. I'm going to drop the Wikipedia article here - consider yourself content-warned: https://en.wikipedia.org/wiki/Female_genital_mutilation#Types

Some types involve cutting out the clitoris (which is more equivalent to the whole penis than to the foreskin); other types involve sewing up the vagina. Because of its relative rarity, I'm not sure it qualifies as a sensible EA cause area, but I think the horror and outcry against it seem very merited, and it makes sense that more countries have outlawed it than have outlawed male circumcision (though, as I say, I'd tentatively support making that illegal also, and don't want to ignore the fact that that's also a harm).

On a meta level, I'm surprised by how unpopular Sjlver and DukeGartzea's comments are in this discussion relative to others'. It doesn't seem that controversial to argue that women face more violence, particularly of certain types, than men (though it's fair to argue the other side, of course). 

Comment by Amber Dawn (Amber) on EA can sound less weird, if we want it to · 2022-05-25T11:13:26.651Z · EA · GW

Strong agree.

I've seen some discourse on Twitter along the lines of "EA's critics never seem to actually understand what we actually believe!" In some ways, this is better than critics understanding EA well and strongly opposing the movement anyway! But it does suggest to me that EA has a problem with messaging, and part of this might be that some EAs are more concerned with making technically-defensible and reasonable statements - which, to be clear, is important! - than with meeting non-EAs (or not-yet-EAs) where they're at and empathizing with how weird some EA ideas seem at first glance. 

Comment by Amber Dawn (Amber) on I read Johannes Ackva's post on climate change which mentions three areas that can create global leverage. Is there a (possibly career-oriented) site that provides a higher level of detail and comparison between issues within climate, in a similar way that 80,000 Hours does for global existential problems? · 2022-05-25T11:02:13.282Z · EA · GW

I'm not sure if either of these were mentioned in Johannes' post, but:

https://drawdown.org/ (explores and compares different solutions for climate change)

or

https://www.effectiveenvironmentalism.org/careers-advice (EA-informed advice for climate change careers)

Comment by Amber Dawn (Amber) on Death to 1 on 1s · 2022-05-21T15:42:48.162Z · EA · GW

I think 1-on-1s have their uses, but at the EA conferences I went to this Spring, I did find myself wishing that there was more space for unstructured group conversations (e.g., possibly physical spaces where you could go and sit if you were open to conversations with strangers). 1-on-1s can be very intense, and since my aims were somewhat vague, I think I could have gotten value out of meeting and chatting to more people casually.

Comment by Amber Dawn (Amber) on DeepMind’s generalist AI, Gato: A non-technical explainer · 2022-05-17T10:37:02.396Z · EA · GW

As a non-technical person struggling to wrap my head around AI developments, I really appreciated this post! I thought it was a good length and level of technicality, and would love to read more things like it! 

Comment by Amber Dawn (Amber) on I burnt out at EAG. Let's talk about it. · 2022-04-23T18:24:16.159Z · EA · GW

Thank you so much for posting this! I really appreciate it when EAs talk about their mental health and emotional wellbeing struggles. What we are doing is super intense and a lot of us go through stuff like this. I missed most of my Sunday conference plans because of my mental health, and I think this was a good decision since I organized one of the afterparties and I wouldn't have made it through that if I hadn't rested. I've been pretty tired this whole week.  

I've had lots of situations where, like you, I felt bad enough that I needed to cancel my plans, but, because I felt so emotionally distressed, cancelling those plans felt like the worst thing in the world. Over the years I've become better at realising that lots of the time, missing things is either completely fine, or (at most) an inconvenience to others. 

Take care of yourself and get lots of rest! I hope you feel better soon.

Comment by Amber Dawn (Amber) on We're announcing a $100,000 blog prize · 2022-03-10T14:25:52.285Z · EA · GW

I’m really glad that you want to support EA-adjacent writers and spread EA ideas to a wider audience. I think this is crucially important work and I’m really happy that you’re taking it seriously.  This prize has given me a nudge to take my own EA-adjacent blogging more seriously! 

Like many others, I have concerns about the amount.  I think it’s overkill and, as others have said, it may be easier for the privileged to take a gamble on winning the prize, while great writers who don’t have the option of cutting down their working hours will still be neglected.  
Another concern that others haven’t mentioned is PR. I don’t think EAs always need to be super ‘image focussed’ and paranoid about PR, and indeed sometimes we skew too far in that direction. But it seems some concern is appropriate here because part of the aim of the project is to spread EA ideas to people who are not already in the movement. I think if one of the first things I heard about EA was ‘this is a movement whose stated aim is to spend money super efficiently to do the most good, and they just spent $500,000 paying people in/adjacent to their community to write blogs that are vaguely supportive of their community’, that would seem suss to me. It seems cronyish. Of course, *I* can easily believe that good blogs could create way more than $500,000 of value by bringing people into the movement, improving decision-making, etc. But that involves *already* thinking in very EA ways and trusting the community to be acting in good faith and not just trying to enrich their friends.

As an alternative way of incentivizing good writing: a thought I’ve often had is making a Google Doc of all the blog posts that “live rent free” in my head - blogs whose main ideas have seeped into my consciousness, blogs that I constantly recommend when certain topics come up. I bet many EAs, if they introspect, have an internal list of blog posts like this. You could ask a large-ish number of trusted people about which specific blog posts have been most influential for them, and grant awards for blogs that are cited by many people (or offer to pay those bloggers to do it full-time for a while, if they want). If you are interested in funding more popularizing writing, you could choose people who are newer to the movement or more ‘adjacent’, rather than hardcore EAs who will choose something niche.

Comment by Amber Dawn (Amber) on Bounty for your best 2 minute answer to an EA 'frequently asked question' · 2022-02-09T13:00:42.012Z · EA · GW

[This is a comment about the post/project, not an answer to the question about moral discounting.]

I'm curious - when talking to people new to EA, have you heard that question a lot, in those words and terms?

I'm asking because - and I might be typical-minding here - I'd be surprised if most people who are new to longtermism have the explicit belief 'people in the future have less moral value than people in the present'. In particular, the language of moral discounting sounds very EA-ish to me. I imagine that if you ask most people who are sceptical of longtermism 'so do future people have less moral value than present people?', they'd be like 'of course not, but [insert other argument for why it nonetheless makes more sense to focus on the present]'.

(Analogously, imagine an EA having a debate with someone who thinks that we should focus on helping people in our local communities. At one point the EA says 'so, do you think that people in other countries have less moral value than people in your community?' 

I find it hard to imagine that the local-communitarian would say 'yeah! Screw people in other countries!' [even if from an EA perspective, their actions and beliefs would seem to entail this attitude] 

I find it more likely that they would say something like 'of course people everywhere have moral value, but it's my job to help people in my community, and people in other countries should be helped by people in their own communities'. And they might give further reasons for why they think this.)

Comment by Amber Dawn (Amber) on Native languages in the EA community (and issues with assessing promisingness) · 2021-12-30T16:22:55.831Z · EA · GW

I really enjoyed this post, thank you! As a non-STEM-y person in EA, I relate to lots of this. I've picked up a lot of the 'language of EA' - and indeed one of the things I like about EA is that I've learnt lots of STEM-y concepts from it! - but I did in fact initially 'bounce off' EA, and may never have got involved if I hadn't continued to hear about it. I've also worried about people unfairly dismissing me because the 'point' of my field (ancient philosophy) is not obvious to STEM-y EAs.

A note on 'assessing promisingness': a recent Forum post on Introductory Fellowships mentioned that at some universities, organizers sort fellows into cohorts according to perceived 'promisingness'. This bothered me. I think part of what bothered me was egalitarian intuitions, but part of it was a consciousness that I might be unfairly assessed as 'unpromising' because my capacities and background are less legibly useful to EAs than others'.

Comment by Amber Dawn (Amber) on Is pain just a signal to enlist altruists? · 2021-12-14T12:41:46.430Z · EA · GW

This is a fascinating idea! I have a question, though. I'm not exactly sure why (2) (more women have chronic pain, lower pain tolerance, etc.) is evidence for this. Is the idea that women in the ancestral environment were more in need of assistance (e.g. because they were physically weaker, or made more vulnerable by bearing/raising children), and therefore evolved more capacity to feel (and thus express) pain?

Comment by Amber Dawn (Amber) on Any good organizations fighting racism? · 2020-06-07T19:13:53.941Z · EA · GW

Thank you for asking this! I'm afraid I don't have any answers, but I also think that it would be great if EAs researched this question (and I'm happy Open Phil seems to be doing some of this). I also think that how 'fighting racism' or 'US criminal justice reform' compares against other cause areas on neglectedness, tractability, and impact is somewhat beside the point. There is a huge amount of enthusiasm for tackling these problems at the moment, and people are eager to donate to organizations that combat them, but I've not seen much discussion or reflection on which are most effective. Most of these people would never be persuaded to donate to (e.g.) AI risk prevention or animal rights orgs, but they might be persuaded to donate to more-effective anti-racism/criminal-justice-reform organizations. If EAs can find out which orgs are most effective in this area, and promote them, that could create a lot of impact compared to the counterfactual.