Posts

Information security careers for GCR reduction 2019-06-20T23:56:58.275Z
Talk about donations earlier and more 2016-02-10T18:14:55.224Z
Ethical offsetting is antithetical to EA 2016-01-05T17:49:01.191Z
Impossible EA emotions 2015-12-21T20:06:02.912Z
How we can make it easier to change your mind about cause areas 2015-08-11T06:21:09.211Z

Comments

Comment by ClaireZabel on Concerns with ACE's Recent Behavior · 2021-04-22T19:00:36.881Z · EA · GW

[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]

I'm disappointed by ACE's comment. Jakub's comment seemed very polite and even-handed given the context, not hostile, and I don't agree with characterizing what seems to me to be sincere concern in the OP as mere hostility. I'm also disappointed by some of the other instances of ACE behavior documented in the OP. I used to be a board member at ACE, but one of the reasons I didn't seek a second term was that I was concerned about ACE drifting away from focusing on just helping animals as effectively as possible, and towards integrating/compromising between that and human-centered social justice concerns, in a way that I wasn't convinced was based on open-minded analysis or strong and rigorous cause-agnostic reasoning. I worry about this dynamic leading to an unpleasant atmosphere for those with different perspectives, and decreasing the extent to which ACE has a truth-seeking culture that would reliably reach good decisions about how to help as many animals as possible.

I think one can (hopefully obviously) take a very truth-seeking and clear-minded approach that leads to and involves doing more human-centered social justice activism. But I worry that isn't what's happening at ACE; instead, other perspectives (which happen to particularly favor social justice issues and adopt some norms from certain SJ communities) seem to be becoming more influential via processes that aren't particularly truth-tracking.

Charity evaluators have a lot of power over the norms in the spaces they operate in, so I think that, for the health of the ecosystem, it's particularly important for them to model openness in response to feedback, rigorous and non-partisan analytical approaches to charity evaluation/research, and truth-seeking, open-minded discourse norms in general. I tentatively don't think that's what's going on here, and even if it is, I more confidently worry that charities looking on may not interpret things that way. I think the natural reaction of a charity (that values a current or future possible ACE Top or Standout charity designation) to the situation with Anima is to feel a lot of pressure to adopt norms, focuses, and diversity goals it may not agree it ought to prioritize, and that don't seem intrinsically connected to the task of helping animals as effectively as possible, and to worry that pushback might be met with aggression and reprisal (even if that's not what would in fact happen).

This makes me really sad. I think ACE has one of the best missions in the world, and what they do is incredibly important. I really hope I'm wrong about the above and they are making the best possible choices, and are on the path to saving as many animals as possible, and helping the rest of the EAA ecosystem do the same.

Comment by ClaireZabel on What does failure look like? · 2021-04-09T23:39:47.259Z · EA · GW

I like this question :) 

One thing I've found pretty helpful in the context of my failures is to try to separate out (a) my intuitive emotional disappointment, regret, feelings of mourning, etc. (b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place (c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected. 

I think all these things are valid and good to process, but I used to conflate them a lot more, which was especially confusing in the context of risky bets that I knew, before I started, had a substantial chance of failure.

I also noticed that I sometimes used to flinch away from the question of whether someone else predicted the failure (or seems like they would have), especially when I was feeling sad and vulnerable because of a recent failure. Now I try to do a careful manual scan for anyone who was especially foresightful/outpredicted me in a way that seemed like the product of skill rather than chance, and reflect on that until my emotions shift more towards admiration for their skill and understanding, and curiosity/a desire to understand what they saw that I missed. I try to get in a mood where I feel almost greedy for their models, and feel a deep visceral desire to hear where they're coming from (which reminds me a bit of this talk). I envision how I will be more competent and able to achieve more for the world if I take the best parts of their model and integrate it into my own.

Comment by ClaireZabel on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-23T21:21:25.657Z · EA · GW

I’ll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset next time I check.

It's not out yet, so I expect you will get your wish if you check a bit after it's released :) 

Comment by ClaireZabel on Early Alpha Version of the Probably Good Website · 2021-03-02T05:28:31.993Z · EA · GW

Seems to be working now!

Comment by ClaireZabel on Early Alpha Version of the Probably Good Website · 2021-03-01T22:10:36.741Z · EA · GW

The website isn't working for me.

Comment by ClaireZabel on Resources On Mental Health And Finding A Therapist · 2021-02-22T23:39:49.393Z · EA · GW

Just a personal note, in case it's helpful for others: in the past, I thought that medications for mental health issues were likely to be pretty bad, in terms of side effects, and generally associated them with people in situations of pretty extreme suffering.  And so I thought it would only be worth it or appropriate to seek psychiatric help if I were really struggling, e.g. on the brink of a breakdown or full burn-out. So I avoided seeking help, even though I did have some issues that were bothering me.  In my experience, a lot of other people seem to feel similarly to past-Claire.

Now, I also think about things from an upside-focused perspective: even if I'm handling my problems reasonably well, I'm functioning and stable and overall pretty happy, etc., would medication further improve things overall, or help make certain stressful situations go better/give me more affordance to do things I find stressful? Would it cause me to be happier, more productive, more stable? Of course, some medications do have severe side effects and aren't worth it in less severe situations, but I (and some other EAs I know) have been able to improve my life a lot by addressing things that weren't so bad to start with, but still seemed like they could be improved on. So yeah, I tentatively suggest people think about this kind of thing not just for crisis-management, but also in case things are fine but there's still a lot of value on the table.  

Comment by ClaireZabel on Resources On Mental Health And Finding A Therapist · 2021-02-22T23:28:48.980Z · EA · GW

Scott's new practice, Lorien Psychiatry, also has some resources that I (at least) have found helpful. 

Comment by ClaireZabel on Some thoughts on EA outreach to high schoolers · 2021-01-20T22:32:37.942Z · EA · GW

Also, I believe it's much easier to become a teacher at a top high school than at a top university: most teachers at top unis are professors, or at least lecturers with PhDs, while even at fancy high schools most teachers don't have PhDs, and I think it's generally just much less selective. So EAs might have an easier time finding positions teaching high schoolers than teaching university students at a comparable level of eliteness. (Of course, there are other ways to engage people, like student groups, for which different dynamics are at play.)

Comment by ClaireZabel on EA Uni Group Forecasting Tournament! · 2020-09-20T18:56:49.501Z · EA · GW

Me too!

Comment by ClaireZabel on Asking for advice · 2020-09-09T19:00:38.562Z · EA · GW

Huh, this is great to know. Personally, I'm the opposite: I find it annoying when people ask to meet and don't include a calendly link or similar. I'm slightly annoyed by the time it takes to write a reply email and generate a calendar invite, and by the often greater overall back-and-forth and attention drain from having the issue linger.

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

Comment by ClaireZabel on avacyn's Shortform · 2020-07-13T06:09:09.847Z · EA · GW

Some people are making predictions about this topic here.

On that link, someone comments:

Berkeley's incumbent mayor got the endorsement of Bernie Sanders in 2016, and Gavin Newsom for 2020. Berkeley also has a strong record of reelecting mayors. So I think his base rate for reelection should be above 80%, barring a JerryBrownesque run from a much larger state politician.
https://www.dailycal.org/2019/08/30/berkeley-mayor-jesse-arreguin-announces-campaign-for-reelection/

Comment by ClaireZabel on Is the Buy-One-Give-One model an effective form of altruism? · 2020-06-09T00:29:13.905Z · EA · GW

I just wanted to say I thought this was overall an impressively thorough and thoughtful comment. Thank you for making it!

Comment by ClaireZabel on Information security careers for GCR reduction · 2020-02-18T01:25:55.417Z · EA · GW

I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people. 

The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy and 80,000 Hours staff members will be able to see the results. I expect the survey to take around 5-25 minutes, depending on how many answers are skipped.

I’ll leave the survey open until EOD March 2nd. 

Comment by ClaireZabel on Some personal thoughts on EA and systemic change · 2019-09-27T19:31:01.425Z · EA · GW

[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.

Comment by ClaireZabel on What book(s) would you want a gifted teenager to come across? · 2019-08-05T21:18:52.887Z · EA · GW

Cool project. I went to a maybe-similar type of school, and I think if I had encountered certain books earlier, it would have had a really good effect on me. The book categories I think I would have benefitted from most when I was that age:

  • Books about how the world very broadly works. A lot of history felt very detail-oriented and archival, but did less to give me a broad sense of how things had changed over time, what kinds of changes are possible, and what drives them. Top rec in that category: Global Economic History: A Very Short Introduction. Other recs: The Better Angels of Our Nature, Sapiens, Moral Mazes (I've never actually read the whole thing, just quotes).
  • Books about rationality, especially how it can cause important things to go awry, how that has happened historically and might be happening now. Reading these was especially relief-inducing because I already had concerns along those lines that I didn't see people articulate, and finally reading them was a hugely comforting experience. Top recs: Harry Potter and the Methods of Rationality, Rationality: From AI to Zombies (probably these were the most positively transformative books I've read, but Eliezer's books are polarizing and some might have parts that people think are inappropriate for minors, and I can't remember which), Thinking, Fast and Slow. Other recs: Inadequate Equilibria.
  • Some other misc recs I'm not going to explain: Permutation City, Animal Liberation, Command and Control, Seeing Like a State, Deep Work, Nonviolent Communication.

Comment by ClaireZabel on EA is vetting-constrained · 2019-05-15T03:13:59.050Z · EA · GW

I would guess the bottleneck is elsewhere too; I think it's something like managerial capacity/trust/mentorship/vetting of grantmakers. I recently started thinking about this a bit, but am still in the very early stages.

Comment by ClaireZabel on EA is vetting-constrained · 2019-05-11T02:03:34.391Z · EA · GW

(Just saw this via Rob's post on Facebook) :)

Thanks for writing this up, I think you make some useful points here.

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the case is confusing, we don't know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren't focusing on interventions that would meet our goals, and so we need to work on sharing models until one of us is moved), or to find. 

Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about". 

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Comment by ClaireZabel on In defence of epistemic modesty · 2017-10-30T00:52:40.490Z · EA · GW

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from What epistemic hygiene norms should there be?:

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

Comment by ClaireZabel on In defence of epistemic modesty · 2017-10-29T22:43:21.579Z · EA · GW

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you think they are, and I think it's both rare and not inordinately difficult to adjust for common biases such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revise their credences to be 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general the nigh-intractable challenge from trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst ones all things considered modest view usually for private consumption.
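To make the double-counting arithmetic in the quoted example concrete, here's a minimal sketch (the numbers come from the quote; the variable names are my own illustrative labels):

```python
# Double-counting when aggregating credences (numbers from the quoted example).

impressions = {"adam": 0.8, "beatrice": 0.4, "charlie": 0.4}

# Correct aggregation: average everyone's independent impressions.
correct = sum(impressions.values()) / len(impressions)  # (0.8 + 0.4 + 0.4) / 3 = 0.533...

# Distorted aggregation: Beatrice reports her already-updated belief
# (the average of her 0.4 and Adam's 0.8), so Adam's impression ends up
# counted one-and-a-half times.
beatrice_belief = (impressions["beatrice"] + impressions["adam"]) / 2  # 0.6
distorted = (impressions["adam"] + beatrice_belief + impressions["charlie"]) / 3  # 0.6

print(f"correct: {correct:.3f}, distorted: {distorted:.3f}")
# correct: 0.533, distorted: 0.600
```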

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T04:17:22.963Z · EA · GW

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:32:31.698Z · EA · GW

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:19:14.078Z · EA · GW

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs. Because it is terrible, and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T00:47:00.463Z · EA · GW

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

Comment by ClaireZabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T21:46:00.355Z · EA · GW

Kelly, I don't think the study you cite is good or compelling evidence of the conclusion you're stating. See Scott's comments on it for the reasons why.

(edited because the original link didn't work)

Comment by ClaireZabel on Effective Altruism Grants project update · 2017-10-03T20:18:04.279Z · EA · GW

Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).

Comment by ClaireZabel on [deleted post] 2017-10-03T20:14:28.827Z

Personally, I downvoted because I guessed that the post was likely to be of interest to sufficiently few people that it felt somewhat spammy. If I imagine everyone posting with that level of selectivity I would guess the Forum would become a worse place, so it's the type of behavior I think should probably be discouraged.

I'm not very confident about that, though.

Comment by ClaireZabel on Effective Altruism Grants project update · 2017-10-03T05:49:37.614Z · EA · GW

An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants.

My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that time spent scales completely linearly with grant size, though it generally scales up somewhat. So while it seems about right that most grants take 13-75 hours, I don't think it's true that grants that are only a small fraction of the size of most OP grants would take an equally small fraction of that amount of time.

Comment by ClaireZabel on EA Survey 2017 Series: Community Demographics & Beliefs · 2017-08-30T06:44:38.648Z · EA · GW

I think it would be useful to frontload info like 1) the number of people who took this vs. previous surveys, and 2) links to previous surveys.

I would also mildly prefer that all of the survey results be in one blog post (to make them easier to find), and strongly prefer that all the results for the demographic info be in the demographics post. But it seems like this post doesn't include information that was requested on the survey and that seems interesting, like race/ethnicity and political views.

The proportion of atheist, agnostic or non-religious people is less than the 2015 survey. Last year that number was 87% compared to 80.6% this year. That metric hadn’t changed over the last two surveys, so this could be an indicator that inclusion of people of faith in the EA community is improving. (bolding mine)

I would recommend changing "improving" to "increasing", since I don't think it's a universal opinion that increasing the proportion of people in EA who are religious is good.

Comment by ClaireZabel on Students for High-Impact Charity Interim Report · 2017-04-05T02:14:02.407Z · EA · GW

[minor] In the sentence, "While more pilot testing is necessary in order to make definitive judgements on SHIC as a whole, we feel that we have gathered enough data to guide strategic changes to this exceedingly novel project." "exceedingly novel" seems like a substantial exaggeration to me. There have been EA student groups, and LEAN, before (as you know), as well as inter-school groups for many different causes.

Comment by ClaireZabel on Advisory panel at CEA · 2017-03-09T17:37:21.309Z · EA · GW

Note though that ACE was originally a part of 80,000 Hours, which was a part of CEA. The organizations now feel quite separate, at least to me.

Additionally, I am not paid by ACE or CEA. Being on the ACE Board is a volunteer position, as is this.

Generally, I don't feel constrained in my ability to criticize CEA, outside a desire to generally maintain collegial relations, though it seems plausible to me that I'm in an echo chamber too similar to CEA's to help as much as I could if I were more on the outside. Generally, trying to do as much good as possible is the motivation for how I spend most of the hours in my day. I desperately want EA to succeed, and increasing the chances that CEA makes sound decisions seems like a moderately important piece of that. That's what's been driving my thinking on this so far, and I expect it'll continue to do so.

That all said (or rambled about) here's a preview of a criticism I intend to make that's not related to my role on the advisory board panel: I don't think it's appropriate to encourage students and other very young people to take the GWWC pledge, or to encourage student groups to proselytize about it. I think the analogy to marriage is helpful here; it wouldn't be right to encourage young people who don't know much about themselves or their future life situations to get married (especially if you didn't know them or their situation well yourself) and I likewise think GWWC should not encourage them to take the pledge.

Views totally my own and not my employer's (the Open Philanthropy Project).

Comment by ClaireZabel on EA essay contest for <18s · 2017-01-22T23:53:00.197Z · EA · GW

I found the formatting of this post difficult to read. I would recommend making it neater and clearer.

Comment by ClaireZabel on My 5 favorite posts of 2016 · 2017-01-06T00:45:42.323Z · EA · GW

I would prefer if the title of this post was something like "My 5 favorite EA posts of 2016". When I see "best" I expect a more objective and comprehensive ranking system (and think "best" is an irritatingly nonspecific and subjective word), so I think the current wording is misleading.

Comment by ClaireZabel on Futures of altruism special issue? · 2016-12-19T06:10:41.161Z · EA · GW

For EAs that don't know, it might be helpful to provide some information about the journal, such as the size and general characteristics of the readership, as well as information about writing for it, such as what sort of background is likely helpful and how long the papers would probably be. Also hopes and expectations for the special issue, if you have any.

Comment by ClaireZabel on What is the expected value of creating a GiveWell top charity? · 2016-12-18T03:06:39.725Z · EA · GW

This gets very tricky very fast. In general, the difference in EV between people's first and second choice plan is likely to be small in situations with many options, if only because their first and second choice plans are likely to have many of the same qualities (depending on how different a plan has to be to be considered a different plan). Subtracting the most plausible (or something) counterfactual from almost anyone's impact makes it seem very small.

Comment by ClaireZabel on EAs write about where they give · 2016-12-09T23:04:37.592Z · EA · GW

Nice idea, Julia. Thanks for doing this!

Comment by ClaireZabel on Concerns with Intentional Insights · 2016-10-30T22:58:26.084Z · EA · GW

Thanks Kathy!

Comment by ClaireZabel on Concerns with Intentional Insights · 2016-10-29T05:16:50.813Z · EA · GW

No shame if you lose, so much glory if you win

Comment by ClaireZabel on Concerns with Intentional Insights · 2016-10-28T07:21:37.374Z · EA · GW

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

Comment by ClaireZabel on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T00:29:35.419Z · EA · GW

I would recommend linking to Jeff's post at the beginning of this one.

Comment by ClaireZabel on Should you switch away from earning to give? Some considerations. · 2016-08-26T06:29:47.832Z · EA · GW

But many of those people aren't earning to give. If they were, they would probably give more. So the survey doesn't indicate you are in the top 15% in comparative advantage just because you could clear $8k.

Comment by ClaireZabel on Why Animals Matter for Effective Altruism · 2016-08-24T01:24:38.523Z · EA · GW

Have you experienced downvoting brigades? How do you distinguish them from sincere negative feedback?

Comment by ClaireZabel on June 2016 GiveWell board meeting · 2016-08-19T23:41:08.614Z · EA · GW

To be clear, I'm saying that I think sometimes an organization's practices usefully reflect a community's values and that Linch was being overly dismissive of this possibility, not making a claim about this specific case.

Comment by ClaireZabel on June 2016 GiveWell board meeting · 2016-08-18T06:36:40.011Z · EA · GW

If the "you" here is the Effective Altruism community, then the hiring practices of a single organization shouldn't be a significant sign that the community as a whole is elitist.

I don't think that's entirely right. I think that, given that the community includes relatively few organizations (of which GiveWell is one of the larger and older ones), GiveWell's practices can be, though aren't always, a significant (and relatively concrete) reflection of and on the community's views.

(views are my own, not my employer's)

Comment by ClaireZabel on Effective Altruists really love EA: Evidence from EA Global · 2016-08-15T03:16:59.914Z · EA · GW

In fact, the team most likely to be growing EA, the Effective Altruism Outreach team was cautioning against growth. It seems reasonably clear that EA is growing virally and organically -- exactly what you want in the early days of a project.

Why do you want a project to grow virally and organically in its early days? That seems like the opposite of what I'd guess: when a project is young, you want to steer it thoughtfully and deliberately and encourage it to grow slowly, so that it doesn't get off track or hijacked, and so you have time to onboard and build capacity in the new members. Has the EAO team come to think that fast growth is good?

Comment by ClaireZabel on EA database/reading list: Why it might be useful · 2016-07-27T18:07:53.805Z · EA · GW

And: http://effective-altruism.com/ea/r5/threads_on_facebook_worth_being_able_to_refer/

Comment by ClaireZabel on EA database/reading list: Why it might be useful · 2016-07-27T18:04:43.390Z · EA · GW

Also: http://www.benkuhn.net/ea-reading

Comment by ClaireZabel on EA database/reading list: Why it might be useful · 2016-07-27T18:00:52.204Z · EA · GW

There is this: http://effective-altruism.com/ea/5f/effective_altruism_reading_list/

Comment by ClaireZabel on EA != minimize suffering · 2016-07-14T03:43:07.824Z · EA · GW

That's deeply kind of you to say, and the most uplifting thing I've heard in a while. Thank you very much.

Comment by ClaireZabel on EA != minimize suffering · 2016-07-14T02:42:37.510Z · EA · GW

You see the same pattern in A Clockwork Orange. Why does making Alex not a sadistic murderer necessitate destroying his love of music? (Music is another of our highest values, and so destroying it is a lazy way to signal that something is very bad.) There was no actual reason that makes sense in the story or in the real world; that was just an arbitrary choice by an author to avoid the hard work of actually trying to demonstrate a connection between the two things.

Now people can say "but look at A Clockwork Orange!" as if that provided evidence of anything, except that people will tolerate a hell of a lot of silliness when it's in line with their preexisting beliefs and ethics.

Comment by ClaireZabel on EA != minimize suffering · 2016-07-14T02:35:42.134Z · EA · GW

Consider The Giver. Consider a world where everyone was high on opiates all the time. There is no suffering or beauty. Would you disturb it?

I think generalizing from these examples (and from fictional examples in general) is dangerous for a few reasons.

Fiction is not designed to be maximally truth-revealing. Its function is as art and entertainment, to move the audience, persuade them, woo them, etc. Doing this can and often does involve revealing important truths, but doesn't necessarily. Sometimes, fiction is effective because it affirms cultural beliefs/mores especially well (which makes it seem very true and noble). But that means it's often (though certainly not always) a reflection of its time (it's often easy, for example, to see how fiction from the past affirmed now-outdated beliefs about gender and race). So messages in fiction are not always true.

Fiction has a lot of qualities that bias the audience in specific useful ways that don't relate to truth. For example, it's often beautiful, high-status, and designed to play on emotions. That means that relative to a similar non-fictional but true thing, it may seem more convincing, even when the reasoning is equally or less sound. So messages in fiction are especially powerful.

For example, I think The Giver reflects the predominant (but implicit) belief of our time and culture: that intense happiness is necessarily linked to suffering, and that attempts to build utopias generally fail in obvious ways by arbitrarily excluding our most important values. Iirc, the folks in The Giver can't love. Love is one of our society's highest values; not loving is a clear sign they've gone wrong. But the story doesn't explain why love had to be eliminated to create peace; it just establishes a connection in the readers' minds without providing any real evidence.

Consider further that if it was true that extreme bad wasn't a necessary cost of extreme good, we would probably still not have a lot of fiction reflecting that truth. This is simply because fiction about everything going exceedingly well for extended periods of time would likely be very boring for the reader (wonderful for the characters, if they experienced it). People would not read that fiction. Perhaps if you made them do so they would project their own boredom onto the story, and say the story is bad because it bored them. This is a fine policy for picking your entertainment, but a dangerous habit to establish if you're going to be deciding real-world policy on others' behalf.