Posts

Erin Braid's Shortform 2022-03-05T22:09:27.880Z

Comments

Comment by Erin Braid on Critiques of EA that I want to read · 2022-06-20T01:05:59.422Z · EA · GW

Something I personally would like to see from this contest is rigorous and thoughtful versions of leftist critiques of EA, ideally translated as much as possible into EA-speak. For example, I find "bednets are colonialism" infuriating and hard to engage with, but things like "the reference class for rich people in western countries trying to help poor people in Africa is quite bad, so we should start with a skeptical prior here" or "isolationism may not be the good-maximizing approach, but it could be the harm-minimizing approach that we should retreat to when facing cluelessness" make more sense to me and are easier to engage with.

That's an imaginary example -- I myself am not a rigorous and thoughtful leftist critic and I've exaggerated the EA-speak for fun. But I hope it points at what I'd like to see!

Comment by Erin Braid on ‘EA Architect’: Updates on Civilizational Shelters & Career Options · 2022-06-08T20:25:02.999Z · EA · GW

I for one would listen to a podcast about shelters and their precedents! That's not to say you should definitely make it, since I'm not sure an audience of mes would be super impactful (I don't see myself personally working on shelters), but if you're just trying to judge audience enthusiasm, count me in!

Podcasts I've enjoyed on this topic (though much less impact-focused and more highly produced than I imagine you'd aim for): "The Habitat" from Gimlet Media; the Biosphere 2 episode of "Nice Try!"

Comment by Erin Braid on [deleted post] 2022-06-05T04:42:09.076Z

Interesting. Thanks for sharing your findings and experiences!

Comment by Erin Braid on Michael Nielsen's "Notes on effective altruism" · 2022-06-05T04:39:52.170Z · EA · GW

> I see [EA] as a key question of "how can we do the most good with any given unit of resource we devote to doing good" and then taking action upon what we find when we ask that.

I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it's too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:

  1. It's probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they're often the same people, and in the sense that even when they're different people, they'll share a lot of interests and it might make sense to share a movement.
  2. Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don't just donate to more effective charities than the version of me in a world with no EA analysis, I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively - I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more - but negative framings are available too.

So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.

Comment by Erin Braid on [deleted post] 2022-06-03T15:36:29.431Z

I understand that this is no longer relevant to your plans, but I'm curious about this:

> Unfortunately, the result of the vooroverleg [preliminary consultation] was that the charity as described above cannot be registered in the Netherlands. The main reason for this is that those who would benefit directly from the charity (the donors) are relatively well-off.

I'm used to the US landscape, where lots of organizations serving the well-off, from private schools to symphony orchestras, are nonprofits that take tax-deductible donations and have tax-exempt status. Is that not the case in the Netherlands?

Comment by Erin Braid on What are the coolest topics in AI safety, to a hopelessly pure mathematician? · 2022-05-07T21:53:52.900Z · EA · GW

Love this question! I too would identify as a hopelessly pure mathematician (I'm currently working on a master's thesis in category theory), and I too spent some time trying to relate my academic interests to AI safety. I didn't have much success; in particular, nothing ML-related ever appealed. I hope it works out better for you!

Comment by Erin Braid on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-05T19:34:01.374Z · EA · GW

Thanks for this post, Julia! I really related to some parts of it, while other parts were very different from my experience. I'll take this opportunity to share a draft I wrote sometime last year, since I think it's in a similar spirit:

I used to be pretty uncomfortable with, and even mad about, the prominence of AI safety in EA. I always saw the logic – upon reading the sequences circa 2012, I quickly agreed that creating superintelligent entities not perfectly aligned with human values could go really, really badly, so of course AI safety was important in that sense – but did it really have to be such a central part of the EA movement, which (I felt) could otherwise have much wider acceptance and thus save more children from malaria? Of course, it would be worth allowing some deaths now to prevent a misaligned AI from killing everyone, so even then I didn’t object exactly, but I was internally upset about the perception of my movement and about the dead kids. 

I don’t feel this way anymore. What changed?

  1. [people aren’t gonna like EA anyways – I’ve gotten more cynical and no longer think that AI was necessarily their true objection]
  2. [AI safety more concrete now – the sequences were extremely insistent but without much in the way of actual asks, which is an unsettling combo all by itself. Move to Berkeley? Devote your life to blogging about ethics? Spend $100k on cryo? On some level those all seemed like the best available ways to prove yourself a True Believer! I was willing to lowercase-b believe, but wary of being a capital-B Believer, which in the absence of actual work to do is the only way to signal that you understand the Most Important Thing In The World]
  3. [practice thinking about the general case, longtermism]

Unfortunately I no longer remember exactly what I was thinking with #3, though I could guess. #1 and #2 still make sense to me and I could try to expand on them if they're not clear to others. 

Thinking about it now, I might add something like:

  4. [better internalization of the fact that EA isn't the only way to do good lol – people who care about global health and wouldn't care about AI are doing good work in global health as we speak]

Comment by Erin Braid on Don’t think, just apply! (usually) · 2022-04-12T17:39:26.122Z · EA · GW

> To support people in following this post's advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials

From my perspective as an applicant, fwiw, I would urge employers to reduce the scope of questions in the initial application materials, more so than the time commitment. EA orgs have a tendency to ask insanely big questions of their early-stage job applicants, like "How would you reason about the moral value of humans vs. animals?" or "What are the three most important ways our research could be improved?" Obviously these are important questions, but to my mind they have the perverse effect that the more an applicant has previously thought about EA ideas, the more daunting it seems to answer a question like that in 45 minutes. Case in point, I'm probably not going to get around to applying for some positions at this post's main author's organization, because I'm not sure how best to spend $10M to improve the long-term future and I have other stuff to do this week. 

Open Phil scores great on this metric by the way - in my recent experience, the initial screening was mostly an elaborate word problem and a prompt to explain your reasoning. I'd happily do as many of those as anyone wants me to.

Comment by Erin Braid on EA group community service projects, good or bad idea? · 2022-04-11T21:47:20.404Z · EA · GW

> Maybe the process of choosing a community service project could be a good exercise in EA principles (as long as you don't spend too long on it)?

I like this idea and would even go further -- spend as much time on it as people are interested in spending; the decision-making process might prove educational!

I can't honestly say I'm excited about the idea of EA groups worldwide marching out to pick up litter. But it seems like a worthwhile experiment for some groups, to get buy-in on the idea of volunteering together, brainstorm volunteering possibilities, decide between them based on impact, and actually go and do it. 

Comment by Erin Braid on I feel anxious that there is all this money around. Let's talk about it · 2022-03-31T19:55:32.843Z · EA · GW

The subquestion of high salaries at EA orgs is interesting to me. I think it pushes on an existing tension between a conception of the EA community as a support network for people who feel the weight of the world's problems and are trying to solve them, and a conception of the EA community as the increasingly professional project of recruiting the rest of the world to work on those problems too.

If you're thinking of the first thing, offering high salaries to people "in the network" seems weird and counterproductive. After all, the truly committed people will just donate the excess, minus a bunch of transaction costs, and meanwhile you run the risk of high salaries attracting people who don't care about the mission at all, who will unhelpfully dilute the group.

Whereas if you're thinking of the second thing, it seems great to offer high salaries. Working on the world's biggest problems should pay as much as working at a hedge fund! I would love to be able to whole-heartedly recommend high-impact jobs to, say, college acquaintances who feel some pressure to go into high-earning careers, not just to the people who are already in the top tenth of a percentile for commitment to altruism. 

I really love the EA-community-as-a-support-network-for-people-who-feel-the-weight-of-the-world's-problems-and-are-trying-to-solve-them. I found Strangers Drowning a very moving read in part for its depiction of pre-EA-movement EAs, who felt very alone, struggled to balance their demanding beliefs with their personal lives, and probably didn't have as much impact as they would have had with more support. I want to hug them and tell them that it's going to be okay, people like them will gather and share their experiences and best practices and coping skills and they'll know that they aren't alone. (Even though this impulse doesn't make a lot of logical sense in the case of, say, young Julia Wise, who grew up to be a big part of the reason why things are better now!) I hope we can maintain this function of the EA community alongside the EA-community-as-the-increasingly-professional-project-of-recruiting-the-rest-of-the-world-to-work-on-those-problems-too. But to the extent that these two functions compete, I lean towards picking the second one, and paying the salaries to match. 

Comment by Erin Braid on Erin Braid's Shortform · 2022-03-25T19:10:24.554Z · EA · GW

Why I Apply to EA Orgs

There's been a lot of handwringing about people's obsession with getting the relatively few jobs at the relatively few explicitly EA-branded organizations. The discussions have been interesting, but they tend to miss the essential reason for this phenomenon in my experience: when you're an EA applicant, EA orgs may like you more than non-EA orgs do. A lot more.

Personally, I never felt much pressure, or even necessarily desire, to work only at explicitly EA organizations. I want to work as an analyst or researcher in an EA or EA-adjacent cause area, but that hardly restricts me to EA organizations! Outside the EA-sphere, there are think tanks, philanthropic foundations, consultancies that work in the public interest, and departments of government, among others, that I would be delighted to work for. Over the past ~year, while looking for my first job out of grad school, I have submitted 32 applications to non-EA organizations, alongside 6 applications to EA orgs. 

And how is that working out for me? 

Of the 6 applications to EA orgs, 5 got back to me asking for at least one interview or test task, and 4 asked me to do multiple rounds of interviews and/or test tasks. Typically, they expressed enthusiasm about me as an applicant, seemed genuinely sorry to be unable to hire me right then, and encouraged me to try again in the future. This is sweet of them, but it's a disheartening experience overall.

Meanwhile, of 32 applications to non-EA orgs, 3 got back to me asking for at least one interview or test task, and exactly 1 asked me to do multiple rounds of interviews and/or test tasks.* Typically, I got no response of any kind. Sometimes form rejections drift in, pursuant to applications I submitted months ago. This is disheartening in a totally different way.

(*From my perspective, even this exception proves the rule: the one non-EA application that got me multiple rounds of consideration was when I applied specifically to the Charity Navigator subteam that used to be ImpactMatters; a previous application to Charity Navigator as a whole got no response. But from a more neutral, preregistration-demanding perspective, you should probably ignore this line of argument.)

Obviously, I have limited access to the reasoning behind the hiring decisions here. But for what it's worth, here's my personal speculation as to what's going on:

At this time in my life, my CV consists of (a) academic accomplishment in formal, abstract fields, and (b) some student jobs, a couple of which reflect my longstanding interest in effective altruism. From the perspective of an EA org, this shows a reasonable amount of competence and commitment, enough that they're happy to toss me a test task and see how I do. But to a mainstream org in a similar cause area, I just seem like kind of a weird fit. Why is someone who studies "category theory" and "formal semantics" applying for this job in policy/development/climate/etc? They don't have a test task culture, and they do have a stack of candidates whose degrees are in policy/development/climate/etc, so they simply go with one of them.

I'm not saying it's easy to get a job at an EA org; it's definitely not, and I haven't. But for some of us, getting a job anywhere else can feel even harder.

Comment by Erin Braid on Erin Braid's Shortform · 2022-03-05T22:09:28.038Z · EA · GW

Cash by Default       

Once upon a time, I found myself with a bunch of unconditional $25 charity gift cards from an every.org promotion. This seemed like a great opportunity to encourage the people in my life to pick charities to donate to, without the awkwardness of talking directly about how they should spend their own money. So I sent four gift card links and an explanation to a group chat with my four closest friends from college.

The first thing that happened was that one friend expressed enthusiasm, claimed a gift card, and donated it to the Florence Project, an organization that gives legal aid to detained migrants in Arizona. The second thing that happened was that the other three friends said nothing, and never claimed or used the gift cards. 

I felt disappointed with this response rate. I mean, why wouldn't someone want a free $25 for charity?? I know on an intellectual level that I'm much more excited about picking charities to donate to than most people are, but do I really have to internalize that knowledge in order to make accurate predictions? Darn. 

I didn't want to pressure anyone, so I didn't say any more about it. As time passed and the gift cards drew close to their expiration date, I figured I should just use them myself. But here I ran into an issue. If I just directed this unclaimed money to the organizations that I would have chosen in the first place, wasn't I kind of retroactively giving myself reason to prefer that my friends not participate?  I didn't want that! At the time I had genuinely wanted my friends to participate, and I didn't want to ruin that after the fact!

Of course, I also didn't want to waste $75 of donations for no reason. Or for dubious decision-theoretic reasons that definitely had not been relevant to the actual behavior of any of the actual people involved. 

And so, this has been the convoluted story of how I came to donate a particular $75 to GiveDirectly. Usually, I don't donate to GiveDirectly, because I'm on board with GiveWell's view that donations to AMF and other top charities are something like ten times as cost-effective as unconditional cash transfers. (See e.g. this funding report.) But I'm a big fan of cash benchmarking - the idea that all global development interventions should be compared to the baseline option of just giving the beneficiaries unconditional cash transfers instead. Similarly, it seemed to me that if money was earmarked for charity, and no one stepped in to make a case for anything else, cash was a sensible default.

I think this approach is potentially useful in a much more common situation: the matching campaign. Matching campaigns are popular, but misleading (see e.g. discussion here, here, and here). The misleadingness could be fixed if the matching donor agreed to set fire to any money that wasn't used to match other donations, but they typically won't agree to that (and it wouldn't be a great look). The idea here is, instead of fire, the matching donor picks a "floor" donation option that they're okay with their money going to, but not maximally excited about. For best results, the audience of potential donors should largely agree with this assessment, and also the floor option should have some kind of ontological basicness to it, such that it feels reasonable and not insulting to designate it as the "floor". Obviously I'm describing cash transfers here, but I think other interventions with these properties would work for the same reasons; direct air capture comes to mind.

I'd be interested to hear if something like this has been tried. 

Comment by Erin Braid on Trading time in an EA relationship · 2022-02-24T18:44:14.220Z · EA · GW

I really appreciate this post, Rose! My partner and I have noticed some of the same cross-pressures, though they stack up a little differently in our case. I'll say a bit more about my experience, in case anyone's interested, but mostly I wanted to say that I appreciated reading about yours.

My partner and I are still young and only just starting our careers, so there's a lot of uncertainty, but we basically expect that my partner will have a lucrative and stable career in big tech, while I will have an erratic but potentially impactful career in EA. Currently, he has a full-time job and I don't.

From the perspective of most of society, including our families and friends, we fall into an obvious pattern: my partner, male, has the real career, and I, rounded to female, have a supporting role - for example, I do more of the household management. As you alluded to, people have a lot of feelings about those gender roles!

Meanwhile from our perspective, which is more EA-flavored, there's a sense in which I'm the one who's trying to have a 'real', i.e. impactful, career, and my partner is the one in the supporting role of earning the money (and health insurance!) that enables me to try that. We're expecting to move, possibly move continents, for my career, not his. From the inside it feels like we're in danger of completely prioritizing my career over his, while from the outside people judge us for the exact opposite. 

Weird stuff!

Comment by Erin Braid on Two Podcast Opportunities · 2022-01-02T20:52:58.464Z · EA · GW

I'd be happy to contribute by reading aloud! However, I don't have any specialty recording equipment, so you might not want to include me if you're going for high sound quality.