Posts

[Linkpost] Criticism of Criticism of Criticism, ACX 2022-08-04T19:25:01.155Z
[AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships 2022-07-29T18:38:53.322Z
Update from Open Philanthropy’s Longtermist EA Movement-Building team 2022-03-10T19:37:27.283Z
Open Phil’s longtermist EA movement-building team is hiring 2022-02-25T21:43:02.788Z
Funds are available to fund non-EA-branded groups 2021-07-21T01:08:10.308Z
Open Philanthropy is seeking proposals for outreach projects 2021-07-16T20:34:52.023Z
Information security careers for GCR reduction 2019-06-20T23:56:58.275Z
Talk about donations earlier and more 2016-02-10T18:14:55.224Z
Ethical offsetting is antithetical to EA 2016-01-05T17:49:01.191Z
Impossible EA emotions 2015-12-21T20:06:02.912Z
How we can make it easier to change your mind about cause areas 2015-08-11T06:21:09.211Z

Comments

Comment by ClaireZabel on [Linkpost] Criticism of Criticism of Criticism, ACX · 2022-08-04T21:16:15.955Z · EA · GW

No worries, appreciate ppl checking  :) 

Comment by ClaireZabel on [Linkpost] Criticism of Criticism of Criticism, ACX · 2022-08-04T21:12:45.451Z · EA · GW

As noted in the post, I got Scott's permission before posting this. 

Comment by ClaireZabel on Applications are open for CFAR workshops in Prague this fall · 2022-07-20T20:30:34.465Z · EA · GW

I strongly disagree with Greg. I think CFAR messed up very badly, but I think the way they messed up is totally consistent with also being able to add value in some situations. 

We have data I find convincing suggesting a substantial fraction of top EAs got value from CFAR. ~ 5 years have passed since I went to a CFAR workshop, and I still value what I learned and think it's been useful for my work. I would encourage other people who are curious to go (again, with the caveat that I don't know much about the new program), if they feel like they're in a place of relative strength and can take a discerning eye to what they're taught.    

> If I, with (mostly) admirable candour, describe a series of grossly incompetent mistakes during my work as a doctor, the appropriate response may still be to disqualify me from future medical practice (there are sidelines re. incentives, but they don't help)

I think a doctor is a really disanalogous example to use; doctors are in one of the relatively few professions where screwups regularly lead to death. We want to be somewhat risk-averse with respect to doctors (and e.g. pilots or school bus drivers), at least if the screwups are the very dangerous kind (as opposed to, like, being terrible at filing one's paperwork) and aren't based on a reasonable CBA (e.g. enrolling patients in a clinical trial with a drug that looked promising but turned out to be dangerous). For lots of other professions, this example looks way less compelling; e.g. I doubt people would think that a startup founder or movie director or author who had a bunch of failures but also some big wins should be banned from their profession or ostracized in their community. I think in-person overnight events about psychology are in a pretty in-between risk category.

Comment by ClaireZabel on Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments · 2022-07-11T21:13:33.895Z · EA · GW

> You said you wouldn’t tell anyone about your friend’s secret, but this seems like a situation where they wouldn’t mind, and it would be pretty awkward to say nothing…etc.

 

This isn't your main point, and I agree there's a lot of motivated cognition people can fall prey to. But I think this gets a bit tricky, because people often ask for vague commitments that are different from what they actually want and intend. For example, I think sometimes when people say "don't share this" they actually mean something more like "don't share this with people who know me personally" or "keep it in our small circle of trusted friends and advisors" or "you can tell your spouse and therapist, but no one else" (and often, this is borne out when I try to clarify). Sometimes, I think they are just trying to convey "this info is sensitive, tread with care". Or, they might mean something more intense, like "don't share this, and aim not to reveal any information that updates others substantially towards thinking it's true".

Clarification can often be useful here (and I wish there were more verbal shorthands for different levels of intensity of commitment) but sometimes it doesn't happen and I don't think, in its absence, all agreements should be taken to be maximally strict (though I think it's extremely important to have tools for conveying when a requested agreement is very strict, and being the kind of person that can honor that). And I think some EAs get intense and overly scrupulous about obeying unimportant agreements, which can be pretty unwieldy and divorced from what anyone intended.  

I think "do you keep actually-thoughtful promises you think people expected you to interpret as real commitments" and "do you take all superficially-promise-like-things as serious promises" are fairly different qualities (though somewhat correlated), and kinda often conflated in a way that I think is unhealthy and even performative. 

Comment by ClaireZabel on Announcing the Harvard AI Safety Team · 2022-06-30T22:05:33.497Z · EA · GW

This seems really exciting, and I agree that it's an underexplored area. I hope you share resources you develop and things you learn to make it easier for others to start groups like this.

PSA for people reading this thread in the future: Open Phil is also very open to and excited about supporting AI safety student groups (as well as other groups that seem helpful for longtermist priority projects); see here for a link to the application form.

Comment by ClaireZabel on Let's not have a separate "community building" camp · 2022-06-30T08:34:29.699Z · EA · GW

I used to agree more with the thrust of this post than I do, and now I think this is somewhat overstated. 

[Below written super fast, and while a bit sleep deprived]

An overly crude summary of my current picture is: if you do community-building via spoken interactions, it's somewhere between "helpful" and "necessary" to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they think is impressive, worth listening to, and admirable. Additionally, being interested in direct work is correlated with a bunch of positive qualities that help with community-building (like being intellectually curious and having interesting and informed things to say on many topics). But not a ton of it is actually needed for many kinds of extremely valuable community building, in my experience (which seems to differ from e.g. Oliver's). And I think people who emphasize the value of keeping up with direct work sometimes conflate the value of e.g. knowing about new directions in AI safety research with the broader value of becoming a more informed person and gaining various intellectual benefits from practicing engagement with object-level rather than social problems.

Earlier on in my role at Open Phil, I found it very useful to spend a lot of time thinking through cause prioritization, getting a basic lay of the land on specific causes, thinking through what problems and potential interventions seemed most important and becoming emotionally bought-in on spending my time and effort on them. Additionally, I think the process of thinking through who you trust, and why, and doing early audits that can form the foundation for trust, is challenging but very helpful for doing EA CB work well. And I'm wholly in favor of that, and would guess that most people that don't do this kind of upfront investment are making an important mistake. 

But on the current margin, the time I spend keeping up with e.g. new directions in AI safety research feels substantially less important than time spent implementing my core projects, and it's almost never directly decision-relevant (though there are some exceptions; e.g. I could imagine information that would (and, historically, has) update(d) me a lot about AI timelines, and this would flow through to making different decisions in concrete ways). And examining what's going on with that, it seems like most decisions I make as a community-building grantmaker are too crude to be affected much by additional intra-cause information at that level of granularity, and when I think about lots of other community-building-related decisions, the same seems true.

For example, if I ask a bunch of AI safety researchers what kinds of people they would like to join their teams, they often say pretty similar versions of "very smart, hardworking people who grok our goals, who are extremely gifted in a field like math or CS". And I'm like "wow, that's very intuitive, and has been true for years, without changing". Subtle differences between alignment agendas do not, in my experience, show up enough in people's ideas about what kinds of recruitment are good for it to have been a good use of my time to dig in on them. This is especially true given that places where informed, intelligent people who have various important-to-me markers of trustworthiness differ are places where I find that it's particularly difficult for an outsider to gain much justified confidence.

Another testbed is that I spent a few years putting a lot of time into Open Phil's biosecurity strategy, and I formed a lot of my own, pretty nuanced and intricate views about it. I've never dived as deep on AI. But I notice that I didn't find my own set of views about biosecurity that helpful for many broader community-building tradeoffs and questions, compared to the counterfactual of trusting the people who seemed best to me to trust in the space (which I think I could have guessed using a bunch of proxies that didn't involve forming my own models of biosecurity) and catching up with them or interviewing them every 6 months about what it seems helpful to know (which is more similar to what I do with AI). Idk, this feels more like 5-10% of my time, though maybe I absorb additional context via osmosis from social proximity to people doing direct work, and maybe this is helpful in ways that aren't apparent to me.

Comment by ClaireZabel on Let's not have a separate "community building" camp · 2022-06-29T18:01:38.845Z · EA · GW

>It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen

I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25 and 70%) of people who do community-building work aren't trying to make calls about what kinds of community-building work needs to happen.

Comment by ClaireZabel on What We Owe the Past · 2022-05-07T01:37:50.618Z · EA · GW

I put a bunch of weight on decision theories which support 2.

A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued; it makes me happy to know they will get pursued even if I die. I couldn't have that happiness now if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and whom I've dated for many years) to make me believe he would pursue them if it weren't true (as well as that seeming sketchy from a deontological perspective).

And, +1 to Austin's example of funders; funders occasionally have people ask for retroactive funding, and say that they only did the thing because their model of the funders suggested the funder would pay. 

Comment by ClaireZabel on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-04T22:51:14.984Z · EA · GW

Thanks for this! Most of what you wrote here matches my experience and what I've seen grantees experience. It often feels weird and frustrating (and counter to econ 101 intuitions) to be like "idk, you just can't exchange money for goods and services the obvious way; sorry, no, you can't just pay more money to get out of having to manage that person and have them still do their work well", and I appreciate this explanation of why.
 

Comment by ClaireZabel on Good practices for changing minds · 2022-04-08T06:57:10.299Z · EA · GW

Riffing off of the alliance mindset point, one shift I've personally found really helpful (though I could imagine it backfiring for other people) in decision-making settings is switching from thinking "my job is to come up with the right proposal or decision" to "my job is to integrate the evidence I've observed (firsthand, secondhand, etc.) and reason about it as clearly and well as I'm able". 

The first framing made me feel like I was failing if other people contributed; I was "supposed" to get to the best decision, but instead I came to the wrong one that needed to be, humiliatingly, "fixed". That frame is more individualistic, and has more of a sense of some final responsibility that increases emotional heat and isn't explained just by Bayesian reasoning.

The latter frame evokes thoughts like "of course, what I'm able to observe and think of is only a small piece of the puzzle; of course others have lots of value to add", and it shifts my experience of changing decisions from embarrassing or a sign of failure to natural and inevitable, and my orientation towards others from defensiveness to curiosity and eagerness to elicit their knowledge. And it shifts my orientation towards myself from a stakesy attempt to squeeze out an excellent product via the sheer force of emotional energy, to something more reflective, internally quiet, and focused on the outer world, not on what my proposals will say about me.

I could imagine this causing people to go easy on themselves or try less hard, but for me it's been really helpful. 

Comment by ClaireZabel on [deleted post] 2022-04-07T18:40:08.426Z

This is a cool idea! It feels so much easier to me to get myself started reading a challenging text if there's a specified time and place with other people doing the same, especially if I know we can discuss right after. 

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-25T07:07:05.185Z · EA · GW

I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road. 

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-17T05:32:07.356Z · EA · GW

So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than the next-most-specific reference class org).

Among the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to huge differences in their ability to produce high-quality versions of various relevant outputs. Often I find that new founders underestimate those differences and so e.g. underestimate how much a given product might decline in quality when handed from one staff member to a new one.

They'll say things like "oh, to learn [the answer to complicated question X] we'll have [random-seeming new person] research [question X]" in a way that feels totally insensitive to the fact that the question is difficult to answer, that it'd take even a skilled researcher in the relevant domain a lot of time and trouble, that they have no real plan to train the new person or evidence the new person is unusually gifted at the relevant kind of research, etc., and I think that dynamic is upstream of a lot of project failures I see. I.e. I think a lot of people have a kind of magical/non-gears-level view of hiring, where they sort of equate an activity being someone's job with that activity being carried out adequately and in a timely fashion, which seems like a real bad assumption with a lot of the projects in EA-land. 

But yeah, I think we were too cautious nonetheless. 

Cases where hiring more aggressively seems relatively better: 

  • The upside is large (an important thing is bottlenecked on person-power, and that bottleneck is otherwise excessively challenging to overcome) 
  • The work you need done is:
    • Well scoped
    • Easy to evaluate
    • Something people train in effectively outside your org
    • Trainable
    • Subject to short feedback loops
  • You are 
    • An experienced manager 
    • Proficient with the work in question
    • Emotionally ready to fire an employee if that seems best 
  • This is taking place in a country where it's legally and culturally easier to fire people
  • Your team culture and morale are such that a difficult few months with someone who isn't working out is unlikely to do permanent damage.

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-17T00:47:19.317Z · EA · GW

Thanks Miranda, I agree these are things to watch really closely for. 

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-15T03:14:41.983Z · EA · GW

Thanks Akash. I think you're right that we can learn as much from successes and well-chosen actions as from mistakes, and also that it's just good to celebrate victories. A few things I feel really pleased about (I'm on vacation, so I'm mostly saying what comes to mind, not doing a deep dive):

  • My sense is that our (published and unpublished) research has been useful for clarifying my picture of the meta space, and helpful to other organizations (and led to some changes I think are pretty promising, like increased focus on engaging high schoolers who are interested in longtermist-related ideas, and some orgs raising salaries), though I think some of that is still TBD and I wish I had a more comprehensive picture.
  • We've funded a whole bunch of new initiatives I'm quite excited about, and I'm happy we were there to find worthy projects with funding needs, to encourage the founding of new projects in the space, and to support their growth. My best guess is that the projects we fund will lead to substantial growth in the EA/longtermist community.
  • When I look back at both my portfolio of grants made, and anti-portfolio (grants explicitly considered but not made), I mostly feel very satisfied. As far as I can tell, there were far more false positives (grants we made that had meh results) than false negatives (grants I think we should have made but didn't), but roughly similar numbers of false-negatives-that-seem-like-big-misses and false-positives-that-were-actively-meaningfully-harmful (the sample size in both of those categories is pretty small).
  • I like and respect everyone on my team, they are all sincerely aimed at the real goals we share, and I think they all bring different important focuses and strengths to the table. 

> If you look back in a year, and you feel really excited/proud of the work that your team has done, what are some things that come to mind? What would a 95th+ percentile outcome look like? (Maybe the answer is just "we did everything in the 'Looking Forward' section", but I'm curious if some other things come to mind.)

A mixture of "not totally sure" and "don't want to do a full reveal", but the "Looking Forward" section above lists a bunch of components. In addition:

  • We or other funders seize most of the remaining obvious-and-important-seeming opportunities for impactful giving (that I currently know of in our space) that are lying fallow.
  • We complete a few pieces of research/analysis I think could give us a better sense of how overall-effective EA/LT "recruiting" work has been over the last few years and how it compares to more object-level work (and we do indeed get a better sense and disseminate it to people who will find it useful). 
  • We gather and vet more resources for giving more non-financial support to grantees that want it (e.g. referrals for various kinds of legal advice, or executive and management coaching).

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-12T01:42:27.543Z · EA · GW

Thanks for the kind words, James!

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-12T01:41:57.788Z · EA · GW

Thoughtful and well-informed criticism is really useful, and I'd be delighted for us to support it;  criticism that successfully changes minds and points to important errors is IMO among the most impactful kinds of writing. 

In general, I think we'd evaluate it similarly to other kinds of grant proposals, trying to gauge how relevant the proposal is to the cause area and how good a fit the team is to doing useful work. In this case, I think part of being a good fit for the work is having a deep understanding of EA/longtermism, having really strong epistemics, and buying into the high-level goal of doing as much good as possible.

Comment by ClaireZabel on Responsible Transparency Consumption · 2022-03-11T23:02:31.550Z · EA · GW

I think a problem here is when people don't know if someone is being fully honest/transparent/calibrated or using more conventional positive-slanted discourse norms. E.g. a situation where this comes up sometimes is taking and giving references for a job applicant. I think the norm with references is that they should be very positive, and you're supposed to do downward adjustments on the positivity to figure out what's going on (e.g. noticing if someone said someone was "reliable" versus "extremely reliable"). If an EA gives a reference for a job applicant using really transparent and calibrated language, and then the reference-taker doesn't realize different discourse norms are in use and does their normal downward adjustment, they will end up with a falsely negative picture of the applicant.

Similarly, I think in a community where some people or orgs are fully transparent and honest, and others are using more conventional pitch-like language, there's a risk of disadvantaging the honest and generally sowing a lot of confusion. 

Also, the more everyone expects everyone else to be super honest and transparent, in some ways, the more benefit to the first defector (since people might be more trusting and not suspect they're being self-promotional). 

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-11T22:45:39.481Z · EA · GW

No, that's not what I'd say (and again, sorry that I'm finding it hard to communicate about this clearly). This isn't necessarily making a clear material difference in what we're willing to fund in many cases (though it could in some); it's more about what metrics we hold ourselves to and how that leads us to prioritize.

I think we'd fund at least many of the scholarships from a pure cost-effectiveness perspective. We think they meet the bar of beating the last dollar, despite being on average less cost-effective than 80k advising, because 80k advising doesn't have enough room for funding. If 80k advising could absorb a bunch more orders of magnitude of funding with no diminishing returns, then I could imagine us not wanting to fund these scholarships from a cost-effectiveness perspective but wanting to fund them from a time-effectiveness perspective.

A place where it could make a material difference is if I imagine a hypothetical generalist EA asking what they should work on. I can imagine them noting that a given intervention (e.g. mentoring a few promising people while taking a low salary) is more cost-effective (and I think cost-effectiveness is often the default frame EAs think in), and me encouraging them to investigate whether a different intervention allows them to accomplish more with their time while being less cost-effective (e.g. setting up a ton of digital advertising of a given piece of written work), and saying that right now, the second intervention might be better.

Comment by ClaireZabel on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-10T23:30:43.099Z · EA · GW

Hm yeah, I can see how this was confusing, sorry!

I actually wasn't trying to stake out a position about the relative value of 80k vs. our time. I was saying that with 80k advising, the basic inputs per career shift are a moderate amount of funding from us and a little bit of our time and a lot of 80k advisor time, while with scholarships, the inputs per career shift are a lot of funding and a moderate amount of our time, and no 80k time. So the scholarship model is, according to me, more expensive in dollars per career shift, but less time-consuming of dedicated longtermist time per career shift. 

I think the scholarships are more time-consuming for us per dollar disbursed than giving grants to 80k, but less time-consuming in aggregate because there's effectively no grantee "middle man" also spending time. 

Of course, some of the scholarships directly fund people to do object-level valuable things, this argument just concerns their role in making certain career paths more attractive and accessible. 

Does that make more sense? 

Comment by ClaireZabel on Help CEA plan future events and conferences · 2021-12-09T23:55:59.336Z · EA · GW

Agree. If possible, also, lots of private rooms people can grab for sensitive conversations, and/or places outside where they can easily and pleasantly walk together, side by side, for the same purpose.

Comment by ClaireZabel on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T02:33:34.788Z · EA · GW

I haven't looked closely, but from a fairly-but-not-completely uninformed perspective, Tim's allocation of part of his donor lottery winnings to the Czech Association for Effective Altruism looks prescient and potentially unusually counterfactually impactful.

Comment by ClaireZabel on [deleted post] 2021-11-18T22:32:33.114Z

You should adjust your estimate, this only took me 1 minute :) 

Comment by ClaireZabel on Concerns with ACE's Recent Behavior · 2021-04-22T19:00:36.881Z · EA · GW

[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]

I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite, even-handed, and not hostile, given the context; nor do I agree with characterizing what seems to me to be sincere concern in the OP as mere hostility) and by some of the other instances of ACE behavior documented in the OP. I used to be a board member at ACE, but one of the reasons I didn't seek a second term was that I was concerned about ACE drifting away from focusing on just helping animals as effectively as possible, and towards integrating/compromising between that and human-centered social justice concerns, in a way that I wasn't convinced was based on open-minded analysis or strong and rigorous cause-agnostic reasoning. I worry about this dynamic leading to an unpleasant atmosphere for those with different perspectives, and decreasing the extent to which ACE has a truth-seeking culture that would reliably reach good decisions about how to help as many animals as possible.

I think one can (hopefully obviously) take a very truth-seeking and clear-minded approach that leads to and involves doing more human-centered social justice activism, but I worry that that isn't what's happening at ACE; instead, I worry that other perspectives (which happen to particularly favor social justice issues and adopt some norms from certain SJ communities) are becoming more influential via processes that aren't particularly truth-tracking. 

Charity evaluators have a lot of power over the norms in the spaces they operate in, and so I think that for the health of the ecosystem it's particularly important for them to model openness in response to feedback, rigorous, non-partisan, analytical approaches to charity evaluation/research in general, and general encouragement of truth-seeking, open-minded discourse norms. But I tentatively don't think that's what's going on here, and even if it is, I more confidently worry that charities looking on may not interpret things that way; I think the natural reaction of a charity (that values a current or future possible ACE Top or Standout charity designation) to the situation with Anima is to feel a lot of pressure to adopt norms, focuses, and diversity goals it may not agree it ought to prioritize, and that don't seem intrinsically connected to the task of helping animals as effectively as possible, and to worry that pushback might be met with aggression and reprisal (even if that's not what would in fact happen).

This makes me really sad. I think ACE has one of the best missions in the world, and what they do is incredibly important. I really hope I'm wrong about the above and they are making the best possible choices, and are on the path to saving as many animals as possible, and helping the rest of the EAA ecosystem do the same.

Comment by ClaireZabel on What does failure look like? · 2021-04-09T23:39:47.259Z · EA · GW

I like this question :) 

One thing I've found pretty helpful in the context of my failures is to try to separate out (a) my intuitive emotional disappointment, regret, feelings of mourning, etc. (b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place (c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected. 

I think all these things are valid and good to process, but I used to conflate them a lot more, which was especially confusing in the context of risky bets I knew before I started had a substantial chance of failure. 

I also noticed that I sometimes used to flinch away from the question of whether someone else predicted the failure (or seems like they would have), especially when I was feeling sad and vulnerable because of a recent failure. Now I try to do a careful manual scan for anyone that was especially foresightful/outpredicted me in a way that seemed like the product of skill rather than chance, and reflect on that until my emotions shift more towards admiration for their skill and understanding, and curiosity/a desire to understand what they saw that I missed. I try to get in a mood where I feel almost greedy for their models, and feel a deep visceral desire to hear where they're coming from (which reminds me a bit of this talk). I envision how I will be more competent and able to achieve more for the world if I take the best parts of their model and integrate them into my own.

Comment by ClaireZabel on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-23T21:21:25.657Z · EA · GW

> I’ll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset next time I check.

It's not out yet, so I expect you will get your wish if you check a bit after it's released :) 

Comment by ClaireZabel on Early Alpha Version of the Probably Good Website · 2021-03-02T05:28:31.993Z · EA · GW

Seems to be working now!

Comment by ClaireZabel on Early Alpha Version of the Probably Good Website · 2021-03-01T22:10:36.741Z · EA · GW

The website isn't working for me, screenshot below:

Comment by ClaireZabel on Resources On Mental Health And Finding A Therapist · 2021-02-22T23:39:49.393Z · EA · GW

Just a personal note, in case it's helpful for others: in the past, I thought that medications for mental health issues were likely to be pretty bad, in terms of side effects, and generally associated them with people in situations of pretty extreme suffering.  And so I thought it would only be worth it or appropriate to seek psychiatric help if I were really struggling, e.g. on the brink of a breakdown or full burn-out. So I avoided seeking help, even though I did have some issues that were bothering me.  In my experience, a lot of other people seem to feel similarly to past-Claire.

Now, I also think about things from an upside-focused perspective: even if I'm handling my problems reasonably well, I'm functioning and stable and overall pretty happy, etc., would medication further improve things overall, or help make certain stressful situations go better/give me more affordance to do things I find stressful? Would it cause me to be happier, more productive, more stable? Of course, some medications do have severe side effects and aren't worth it in less severe situations, but I (and some other EAs I know) have been able to improve my life a lot by addressing things that weren't so bad to start with, but still seemed like they could be improved on. So yeah, I tentatively suggest people think about this kind of thing not just for crisis-management, but also in case things are fine but there's still a lot of value on the table.  

Comment by ClaireZabel on Resources On Mental Health And Finding A Therapist · 2021-02-22T23:28:48.980Z · EA · GW

Scott's new practice, Lorien Psychiatry, also has some resources that I (at least) have found helpful. 

Comment by ClaireZabel on Some thoughts on EA outreach to high schoolers · 2021-01-20T22:32:37.942Z · EA · GW

Also, I believe it's much easier to become a teacher for high schoolers at top high schools than a teacher for students at top universities, because most teachers at top unis are professors, or at least lecturers with PhDs, while even at fancy high schools, most teachers don't have PhDs, and I think it's generally just much less selective. So EAs might have an easier time finding positions teaching high schoolers than uni students of a given eliteness level. (Of course, there are other ways to engage people, like student groups, for which different dynamics are at play.) 

Comment by ClaireZabel on EA Uni Group Forecasting Tournament! · 2020-09-20T18:56:49.501Z · EA · GW

Me too!

Comment by ClaireZabel on Asking for advice · 2020-09-09T19:00:38.562Z · EA · GW

Huh, this is great to know. Personally, I'm the opposite: I find it annoying when people ask to meet and don't include a Calendly link or similar; I'm slightly annoyed by the time it takes to write a reply email and generate a calendar invite, and by the often greater overall back-and-forth and attention drain from having the issue linger.

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

Comment by ClaireZabel on avacyn's Shortform · 2020-07-13T06:09:09.847Z · EA · GW

Some people are making predictions about this topic here.

On that link, someone comments:

> Berkeley's incumbent mayor got the endorsement of Bernie Sanders in 2016, and Gavin Newsom for 2020. Berkeley also has a strong record of reelecting mayors. So I think his base rate for reelection should be above 80%, barring a JerryBrownesque run from a much larger state politician.
> https://www.dailycal.org/2019/08/30/berkeley-mayor-jesse-arreguin-announces-campaign-for-reelection/

Comment by ClaireZabel on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-09T00:29:13.905Z · EA · GW

I just wanted to say I thought this was overall an impressively thorough and thoughtful comment. Thank you for making it!

Comment by ClaireZabel on Information security careers for GCR reduction · 2020-02-18T01:25:55.417Z · EA · GW

I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people. 

The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy and 80,000 Hours staff members will be able to see the results.  I expect it to take around 5-25 minutes to take the survey, depending on how many answers are skipped. 

I’ll leave the survey open until EOD March 2nd. 

Comment by ClaireZabel on Some personal thoughts on EA and systemic change · 2019-09-27T19:31:01.425Z · EA · GW

[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.

Comment by ClaireZabel on What book(s) would you want a gifted teenager to come across? · 2019-08-05T21:18:52.887Z · EA · GW

Cool project. I went to a maybe-similar type of school, and I think if I had encountered certain books earlier, it would have had a really good effect on me. The book categories I think I would most have benefitted from when I was that age:

  • Books about how the world very broadly works. A lot of history felt very detail-oriented and archival, but did less to give me a broad sense of how things had changed over time, what kinds of changes are possible, and what drives them. Top rec in that category: Global Economic History: A Very Short Introduction. Other recs: The Better Angels of Our Nature, Sapiens, Moral Mazes (I've never actually read the whole thing, just quotes),
  • Books about rationality, especially how it can cause important things to go awry, how that has happened historically and might be happening now. Reading these was especially relief-inducing because I already had concerns along those lines that I didn't see people articulate, and finally reading them was a hugely comforting experience. Top recs: Harry Potter and the Methods of Rationality, Rationality: From AI to Zombies (probably these were the most positively transformative books I've read, but Eliezer books are polarizing and some might have parts that people think are inappropriate for minors, and I can't remember which), Thinking Fast and Slow. Other recs: Inadequate Equilibria,
  • Some other misc recs I'm not going to explain: Permutation City, Animal Liberation, Command and Control, Seeing like a State, Deep Work, Nonviolent Communication

Comment by ClaireZabel on EA is vetting-constrained · 2019-05-15T03:13:59.050Z · EA · GW

I would guess the bottleneck is elsewhere too; I think the bottleneck is something like managerial capacity/trust/mentorship/vetting of grantmakers. I recently started thinking about this a bit, but am still in the very early stages.

Comment by ClaireZabel on EA is vetting-constrained · 2019-05-11T02:03:34.391Z · EA · GW

(Just saw this via Rob's post on Facebook) :)

Thanks for writing this up, I think you make some useful points here.

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the case is confusing, we don't know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren't focusing on interventions that would meet our goals, and so we need to work on sharing models until one of us is moved), or to find. 

Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about". 

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Comment by ClaireZabel on In defence of epistemic modesty · 2017-10-30T00:52:40.490Z · EA · GW

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from What epistemic hygiene norms should there be?:

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

Comment by ClaireZabel on In defence of epistemic modesty · 2017-10-29T22:43:21.579Z · EA · GW

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you think they are, and I think it's both rare and not inordinately difficult to adjust for common biases, such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

> Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revise their credences to be 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general the nigh-intractable challenge from trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.
>
> One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst ones all things considered modest view usually for private consumption.
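To make the quoted arithmetic concrete, here is a minimal sketch in Python (the numbers are just the ones from the quoted example; the variable names are mine, added for illustration) of how averaging modest "all things considered" reports double-counts Adam's impression relative to averaging the underlying impressions:

```python
# Credences "by their own lights" from the quoted example:
# Adam 0.8, Beatrice 0.4, Charlie 0.4.
impressions = [0.8, 0.4, 0.4]

# The distorted set from the quote, where reported credences already
# fold in Adam's 0.8 ("one-and-a-half helpings of Adam's credence").
modest_reports = [0.8, 0.6, 0.4]

average_of_impressions = sum(impressions) / len(impressions)    # (0.8 + 0.4 + 0.4) / 3 ≈ 0.533
average_of_reports = sum(modest_reports) / len(modest_reports)  # (0.8 + 0.6 + 0.4) / 3 = 0.6

print(f"aggregating impressions:    {average_of_impressions:.3f}")
print(f"aggregating modest reports: {average_of_reports:.3f}")
```

As the quote suggests, publicly reporting impressions (and keeping the modest all-things-considered credence for private use) lets others aggregate without this double-counting.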

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view, or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T04:17:22.963Z · EA · GW

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:32:31.698Z · EA · GW

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:19:14.078Z · EA · GW

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

> the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

> For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 identifying as Right, 31 as Centre Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs. Because it is terrible, and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T00:47:00.463Z · EA · GW

> To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T21:46:00.355Z · EA · GW

Kelly, I don't think the study you cite is good or compelling evidence of the conclusion you're stating. See Scott's comments on it for the reasons why.

(edited because the original link didn't work)

Comment by ClaireZabel on Effective Altruism Grants project update · 2017-10-03T20:18:04.279Z · EA · GW

Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).

Comment by ClaireZabel on [deleted post] 2017-10-03T20:14:28.827Z

Personally, I downvoted because I guessed that the post was likely to be of interest to sufficiently few people that it felt somewhat spammy. If I imagine everyone posting with that level of selectivity I would guess the Forum would become a worse place, so it's the type of behavior I think should probably be discouraged.

I'm not very confident about that, though.

Comment by ClaireZabel on Effective Altruism Grants project update · 2017-10-03T05:49:37.614Z · EA · GW

> An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants.

My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that time spent scales completely linearly with grant size, though it generally scales up somewhat. So while it seems about right that most grants take 13-75 hours, I don't think it's true that grants that are only a small fraction of the size of most OP grants would take an equally small fraction of that amount of time.