Posts

Information security careers for GCR reduction 2019-06-20T23:56:58.275Z · score: 154 (63 votes)
Talk about donations earlier and more 2016-02-10T18:14:55.224Z · score: 30 (24 votes)
Ethical offsetting is antithetical to EA 2016-01-05T17:49:01.191Z · score: 24 (32 votes)
Impossible EA emotions 2015-12-21T20:06:02.912Z · score: 17 (21 votes)
How we can make it easier to change your mind about cause areas 2015-08-11T06:21:09.211Z · score: 34 (31 votes)

Comments

Comment by clairezabel on Information security careers for GCR reduction · 2020-02-18T01:25:55.417Z · score: 16 (7 votes) · EA · GW

I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people. 

The survey is here: https://docs.google.com/forms/d/e/1FAIpQLScEwPFNCB5aFsv8ghIFFTbZS0X_JMnuquE3DItp8XjbkeE6HQ/viewform?usp=sf_link. Open Philanthropy and 80,000 Hours staff members will be able to see the results. I expect the survey to take around 5-25 minutes, depending on how many answers are skipped.

I’ll leave the survey open until EOD March 2nd. 

Comment by clairezabel on Some personal thoughts on EA and systemic change · 2019-09-27T19:31:01.425Z · score: 66 (24 votes) · EA · GW

[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.

Comment by clairezabel on What book(s) would you want a gifted teenager to come across? · 2019-08-05T21:18:52.887Z · score: 22 (12 votes) · EA · GW

Cool project. I went to a maybe-similar type of school and I think if I had encountered certain books earlier, it would have had a really good effect on me. The book categories I think I would most have benefitted from when I was that age:

  • Books about how the world very broadly works. A lot of history felt very detail-oriented and archival, but did less to give me a broad sense of how things had changed over time, what kinds of changes are possible, and what drives them. Top rec in that category: Global Economic History: A Very Short Introduction. Other recs: The Better Angels of Our Nature, Sapiens, Moral Mazes (I've never actually read the whole thing, just quotes).
  • Books about rationality, especially how it can cause important things to go awry, how that has happened historically and might be happening now. Reading these was especially relief-inducing because I already had concerns along those lines that I didn't see people articulate, and finally reading them was a hugely comforting experience. Top recs: Harry Potter and the Methods of Rationality, Rationality: From AI to Zombies (probably these were the most positively transformative books I've read, but Eliezer's books are polarizing and some might have parts that people think are inappropriate for minors, and I can't remember which), Thinking, Fast and Slow. Other recs: Inadequate Equilibria.
  • Some other misc recs I'm not going to explain: Permutation City, Animal Liberation, Command and Control, Seeing Like a State, Deep Work, Nonviolent Communication

Comment by clairezabel on EA is vetting-constrained · 2019-05-15T03:13:59.050Z · score: 5 (3 votes) · EA · GW

I would guess the bottleneck is elsewhere too; I think the bottleneck is something like managerial capacity/trust/mentorship/vetting of grantmakers. I recently started thinking about this a bit, but am still in the very early stages.

Comment by clairezabel on EA is vetting-constrained · 2019-05-11T02:03:34.391Z · score: 28 (10 votes) · EA · GW

(Just saw this via Rob's post on Facebook) :)

Thanks for writing this up, I think you make some useful points here.

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the case is confusing, we don't know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren't focusing on interventions that would meet our goals, and so we need to work on sharing models until one of us is moved), or to find. 

Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about". 

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to, or easily can, articulate why I think that now.

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Comment by clairezabel on In defence of epistemic modesty · 2017-10-30T00:52:40.490Z · score: 1 (1 votes) · EA · GW

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from What epistemic hygiene norms should there be?:

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

Comment by clairezabel on In defence of epistemic modesty · 2017-10-29T22:43:21.579Z · score: 28 (22 votes) · EA · GW

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you seem to think, and I think it's both rare and not inordinately difficult to adjust for common biases such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises their credence to be 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general the nigh-intractable challenge from trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view, or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment by clairezabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T04:17:22.963Z · score: 2 (2 votes) · EA · GW

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Comment by clairezabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:32:31.698Z · score: 2 (2 votes) · EA · GW

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Comment by clairezabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T02:19:14.078Z · score: 13 (10 votes) · EA · GW

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outcomes. This is the intellectual equivalent of catching the flu and then purposefully vomiting into the town water supply. People that do this are acting in a harmful manner, and they should be asked to cease and desist.

the best scrutinizer is someone who feels motivated to disprove a paper's conclusion

The best scrutinizer is someone that feels motivated to actually find the truth. This should be obvious.

For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion.

Yet EAs are mostly liberal. The 2017 Survey had 309 EAs identifying as Left, 373 as Centre-Left, 4 as Right, and 31 as Centre-Right. My contention is that this is not about the conclusions being liberal. It's about specific studies and analyses of studies being terrible. E.g. (and I hate that I have to say this) I lean very socially liberal on most issues. Yet I claim that the article Kelly cited is not good support for anyone's beliefs. Because it is terrible, and does not track the truth. And we don't need writings like that, regardless of whose conclusions they happen to support.

Comment by clairezabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T00:47:00.463Z · score: 22 (18 votes) · EA · GW

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

Comment by clairezabel on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T21:46:00.355Z · score: 10 (10 votes) · EA · GW

Kelly, I don't think the study you cite is good or compelling evidence of the conclusion you're stating. See Scott's comments on it for the reasons why.

(edited because the original link didn't work)

Comment by clairezabel on Effective Altruism Grants project update · 2017-10-03T20:18:04.279Z · score: 0 (0 votes) · EA · GW

Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).

Comment by ClaireZabel on [deleted post] 2017-10-03T20:14:28.827Z

Personally, I downvoted because I guessed that the post was likely to be of interest to sufficiently few people that it felt somewhat spammy. If I imagine everyone posting with that level of selectivity I would guess the Forum would become a worse place, so it's the type of behavior I think should probably be discouraged.

I'm not very confident about that, though.

Comment by clairezabel on Effective Altruism Grants project update · 2017-10-03T05:49:37.614Z · score: 1 (1 votes) · EA · GW

An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants.

My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that time spent scales completely linearly with grant size, though it generally scales up somewhat. So while it seems about right that most grants take 13-75 hours, I don't think it's true that grants that are only a small fraction of the size of most OP grants would take an equally small fraction of that amount of time.

Comment by clairezabel on EA Survey 2017 Series: Community Demographics & Beliefs · 2017-08-30T06:44:38.648Z · score: 8 (10 votes) · EA · GW

I think it would be useful to frontload info like 1) the number of people who took this vs. previous surveys, 2) links to previous surveys.

I think I would also mildly prefer that all of the survey results be in one blog post (to make them easier to find), and strongly prefer to have all the results for the demographic info in the demographics post. But it seems like this post doesn't include information that was requested on the survey and that seems interesting, like race/ethnicity and political views.

The proportion of atheist, agnostic or non-religious people is less than the 2015 survey. Last year that number was 87% compared to 80.6% this year. That metric hadn’t changed over the last two surveys, so this could be an indicator that inclusion of people of faith in the EA community is improving. (bolding mine)

I would recommend changing "improving" to "increasing", since the opinion that it's good to increase the proportion of religious people in EA isn't universal.

Comment by clairezabel on Students for High-Impact Charity Interim Report · 2017-04-05T02:14:02.407Z · score: 1 (3 votes) · EA · GW

[minor] In the sentence, "While more pilot testing is necessary in order to make definitive judgements on SHIC as a whole, we feel that we have gathered enough data to guide strategic changes to this exceedingly novel project." "exceedingly novel" seems like a substantial exaggeration to me. There have been EA student groups, and LEAN, before (as you know), as well as inter-school groups for many different causes.

Comment by clairezabel on Advisory panel at CEA · 2017-03-09T17:37:21.309Z · score: 7 (7 votes) · EA · GW

Note though that ACE was originally a part of 80,000 Hours, which was a part of CEA. The organizations now feel quite separate, at least to me.

Additionally, I am not paid by ACE or CEA. Being on the ACE Board is a volunteer position, as is this.

Generally, I don't feel constrained in my ability to criticize CEA, outside a desire to generally maintain collegial relations, though it seems plausible to me that I'm in an echo chamber too similar to CEA's to help as much as I could if I was more on the outside. Generally, trying to do as much good as possible is the motivation for how I spend most of the hours in my day. I desperately want EA to succeed and increasing the chances that CEA makes sound decisions seems like a moderately important piece of that. That's what's been driving my thinking on this so far and I expect it'll continue to do so.

That all said (or rambled about) here's a preview of a criticism I intend to make that's not related to my role on the advisory board panel: I don't think it's appropriate to encourage students and other very young people to take the GWWC pledge, or to encourage student groups to proselytize about it. I think the analogy to marriage is helpful here; it wouldn't be right to encourage young people who don't know much about themselves or their future life situations to get married (especially if you didn't know them or their situation well yourself) and I likewise think GWWC should not encourage them to take the pledge.

Views totally my own and not my employer's (the Open Philanthropy Project).

Comment by clairezabel on EA essay contest for <18s · 2017-01-22T23:53:00.197Z · score: 2 (4 votes) · EA · GW

I found the formatting of this post difficult to read. I would recommend making it neater and clearer.

Comment by clairezabel on My 5 favorite posts of 2016 · 2017-01-06T00:45:42.323Z · score: 13 (13 votes) · EA · GW

I would prefer if the title of this post was something like "My 5 favorite EA posts of 2016". When I see "best" I expect a more objective and comprehensive ranking system (and think "best" is an irritatingly nonspecific and subjective word), so I think the current wording is misleading.

Comment by clairezabel on Futures of altruism special issue? · 2016-12-19T06:10:41.161Z · score: 6 (6 votes) · EA · GW

For EAs that don't know, it might be helpful to provide some information about the journal, such as the size and general characteristics of the readership, as well as information about writing for it, such as what sort of background is likely helpful and how long the papers would probably be. Also hopes and expectations for the special issue, if you have any.

Comment by clairezabel on What is the expected value of creating a GiveWell top charity? · 2016-12-18T03:06:39.725Z · score: 2 (4 votes) · EA · GW

This gets very tricky very fast. In general, the difference in EV between people's first and second choice plan is likely to be small in situations with many options, if only because their first and second choice plans are likely to have many of the same qualities (depending on how different a plan has to be to be considered a different plan). Subtracting the most plausible (or something) counterfactual from almost anyone's impact makes it seem very small.

Comment by clairezabel on EAs write about where they give · 2016-12-09T23:04:37.592Z · score: 6 (6 votes) · EA · GW

Nice idea, Julia. Thanks for doing this!

Comment by clairezabel on Concerns with Intentional Insights · 2016-10-30T22:58:26.084Z · score: 2 (2 votes) · EA · GW

Thanks Kathy!

Comment by clairezabel on Concerns with Intentional Insights · 2016-10-29T05:16:50.813Z · score: 11 (11 votes) · EA · GW

No shame if you lose, so much glory if you win

Comment by clairezabel on Concerns with Intentional Insights · 2016-10-28T07:21:37.374Z · score: 23 (22 votes) · EA · GW

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

Comment by clairezabel on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T00:29:35.419Z · score: 3 (3 votes) · EA · GW

I would recommend linking to Jeff's post at the beginning of this one.

Comment by clairezabel on Should you switch away from earning to give? Some considerations. · 2016-08-26T06:29:47.832Z · score: 4 (4 votes) · EA · GW

But many of those people aren't earning to give. If they were, they would probably give more. So the survey doesn't indicate you are in the top 15% in comparative advantage just because you could clear $8k.

Comment by clairezabel on Why Animals Matter for Effective Altruism · 2016-08-24T01:24:38.523Z · score: 5 (5 votes) · EA · GW

Have you experienced downvoting brigades? How do you distinguish them from sincere negative feedback?

Comment by clairezabel on June 2016 GiveWell board meeting · 2016-08-19T23:41:08.614Z · score: 2 (2 votes) · EA · GW

To be clear, I'm saying that I think sometimes an organization's practices usefully reflect a community's values and that Linch was being overly dismissive of this possibility, not making a claim about this specific case.

Comment by clairezabel on June 2016 GiveWell board meeting · 2016-08-18T06:36:40.011Z · score: 4 (4 votes) · EA · GW

If the "you" here is the Effective Altruism community, then the hiring practices of a single organization shouldn't be a significant sign that the community as a whole is elitist.

I don't think that's entirely right. I think that, given that the community includes relatively few organizations (of which GiveWell is one of the larger and older ones), GiveWell's practices may be, but aren't always, a significant (and relatively concrete) reflection of and on the community's views.

(views are my own, not my employer's)

Comment by clairezabel on Effective Altruists really love EA: Evidence from EA Global · 2016-08-15T03:16:59.914Z · score: 14 (13 votes) · EA · GW

In fact, the team most likely to be growing EA, the Effective Altruism Outreach team, was cautioning against growth. It seems reasonably clear that EA is growing virally and organically -- exactly what you want in the early days of a project.

Why do you want a project to grow virally and organically in its early days? That seems like the opposite of what I'd guess; when a project is young, you want to steer it thoughtfully and deliberately and encourage it to grow slowly, so that it doesn't get off track or hijacked, and so you have time to onboard and build capacity in the new members. Has the EAO team come to think that fast growth is good?

Comment by clairezabel on EA database/reading list: Why it might be useful · 2016-07-27T18:07:53.805Z · score: 1 (1 votes) · EA · GW

And: http://effective-altruism.com/ea/r5/threads_on_facebook_worth_being_able_to_refer/

Comment by clairezabel on EA database/reading list: Why it might be useful · 2016-07-27T18:04:43.390Z · score: 2 (2 votes) · EA · GW

Also: http://www.benkuhn.net/ea-reading

Comment by clairezabel on EA database/reading list: Why it might be useful · 2016-07-27T18:00:52.204Z · score: 2 (2 votes) · EA · GW

There is this: http://effective-altruism.com/ea/5f/effective_altruism_reading_list/

Comment by clairezabel on EA != minimize suffering · 2016-07-14T03:43:07.824Z · score: 1 (1 votes) · EA · GW

That's deeply kind of you to say, and the most uplifting thing I've heard in a while. Thank you very much.

Comment by clairezabel on EA != minimize suffering · 2016-07-14T02:42:37.510Z · score: 1 (1 votes) · EA · GW

You see the same pattern in A Clockwork Orange. Why does making Alex not a sadistic murderer necessitate destroying his love of music? (Music is another of our highest values, and so destroying it is a lazy way to signal that something is very bad.) There is no actual reason that makes sense in the story or in the real world; it was just an arbitrary choice by the author to avoid the hard work of actually trying to demonstrate a connection between the two things.

Now people can say "but look at Clockwork Orange!" as if that provided evidence of anything, except that people will tolerate a hell of a lot of silliness when it's in line with their preexisting beliefs and ethics.

Comment by clairezabel on EA != minimize suffering · 2016-07-14T02:35:42.134Z · score: 6 (6 votes) · EA · GW

Consider The Giver. Consider a world where everyone was high on opiates all the time. There is no suffering or beauty. Would you disturb it?

I think generalizing from these examples (and especially from fictional examples in general) is dangerous for a few reasons.

Fiction is not designed to be maximally truth-revealing. Its function is as art and entertainment, to move the audience, persuade them, woo them, etc. Doing this can and often does involve revealing important truths, but doesn't necessarily. Sometimes, fiction is effective because it affirms cultural beliefs/mores especially well (which makes it seem very true and noble). But that means it's often (though certainly not always) a reflection of its time (it's often easy, for example, to see how fiction from the past affirmed now-outdated beliefs about gender and race). So messages in fiction are not always true.

Fiction has a lot of qualities that bias the audience in specific useful ways that don't relate to truth. For example, it's often beautiful, high-status, and designed to play on emotions. That means that relative to a similar non-fictional but true thing, it may seem more convincing, even when the reasoning is equally or less sound. So messages in fiction are especially powerful.

For example, I think The Giver reflects the predominant (but implicit) belief of our time and culture: that intense happiness is necessarily linked to suffering, and that attempts to build utopias generally fail in obvious ways by arbitrarily excluding our most important values. Iirc, the folks in The Giver can't love. Love is one of our society's highest values; not loving is a clear sign they've gone wrong. But the story doesn't explain why love had to be eliminated to create peace; it just establishes a connection in the readers' minds without providing any real evidence.

Consider further that if it was true that extreme bad wasn't a necessary cost of extreme good, we would probably still not have a lot of fiction reflecting that truth. This is simply because fiction about everything going exceedingly well for extended periods of time would likely be very boring for the reader (wonderful for the characters, if they experienced it). People would not read that fiction. Perhaps if you made them do so they would project their own boredom onto the story, and say the story is bad because it bored them. This is a fine policy for picking your entertainment, but a dangerous habit to establish if you're going to be deciding real-world policy on others' behalf.

Comment by clairezabel on EA != minimize suffering · 2016-07-14T02:13:27.358Z · score: 1 (1 votes) · EA · GW

I suspect that happiness and well-being are uncorrelated.

How are you defining wellbeing such that it's uncorrelated with happiness?

I am biased as I believe I have grown as a result of changes which were the result of suffering.

Perhaps you misunderstand me. I believe you. I think that probably every human and most animals have, at some point, learned something useful from an experience that involved suffering. I have, you have, all EAs have, everyone has. Negative subjective wellbeing arising from maladaptive behavior is evolutionarily useful. Natural selection favored those that responded to negative experiences, and did so by learning.

I just think it's sad and shitty that the world is that way. I would very much prefer a world where we could all have equally or more intense and diverse positive experiences without suffering for them. I know that is not possible (or close to it) right now, but I refuse to let the limitations of my capabilities drive me to self-deception.

(my views are my own, not my employer's)

Comment by clairezabel on EA != minimize suffering · 2016-07-14T02:02:00.181Z · score: 1 (1 votes) · EA · GW

Many altruists are activists (and vice versa) and many altruists are philanthropists (and vice versa) and some activists are philanthropists. These are not mutually exclusive categories. I also disagree with several claims.

The former also rejects social norms and seeks to change the world, while the latter is generally accepted within their social circles because they have so much excessive wealth.

I think most philanthropists want to change the world (for the better). I think activists vary a lot in how much they accept and reject social norms, and which ones they accept and reject.

Comment by clairezabel on EA != minimize suffering · 2016-07-14T01:56:53.255Z · score: 4 (4 votes) · EA · GW

I think your argument is actually two: 1) It is not obvious how to maximize happiness, and some obvious-seeming strategies to maximize happiness will not in fact do so. 2) You shouldn't maximize happiness.

(1) is true, I think most EAs agree with it, most people in general agree with it, I agree with it, and it's pretty unrelated to (2). It means maximizing happiness might be difficult, but says nothing about whether it's theoretically the best thing to do.

Relatedly, I think a lot of EAs agree that sometimes, to maximize happiness, we must indeed incur some suffering. To obtain good things, we must endure some bad. Not realizing that and always avoiding suffering would indeed have bad consequences. But the fact that that is true, and important, says nothing about whether it is good. It is the case now that eating the food I like most would make me sick, but that doesn't tell me whether I should modify myself to enjoy healthier foods more, if I were able to do so.

Put differently, is the fact that we must endure suffering to get happiness sometimes good in itself, or is it an inconvenient truth we should (remember, but) change, if possible? That's a hard question, and I think it's easy to slip into the trap of telling people they are ignoring a fact about the world to avoid hard ethical questions about whether the world can and should be changed.

Comment by clairezabel on The morality of having a meat-eating pet · 2016-06-04T19:41:45.854Z · score: 1 (1 votes) · EA · GW

The meat may be non-human grade, basically waste products from factory farming that are sold extremely cheaply. So I doubt it increases the number of animals killed as much as you say.

Comment by clairezabel on More Thoughts (and Analysis) on the Mercy For Animals Online Ads Study · 2016-05-27T19:57:16.284Z · score: 7 (7 votes) · EA · GW

The experimental group reported higher agreement with the claim that “cows, pigs, and chickens are intelligent, emotional individuals with unique personalities”. Does that matter?

Likely no, for similar reasons as discussed earlier. Beliefs and attitudes are nice. They’re certainly better than nothing. Maybe they’ll even help create a societal shift or cause someone to go vegetarian many years down the road. However, they just as well might not.

I'm not sure about this. Some people that are funding online ads want to reduce animal product consumption now. Others are primarily interested in effecting long-term value shifts, and merely use animal product consumption as a weak proxy for this. I'd be pretty independently interested in answering the question “which intervention is most effective at convincing people that cows, pigs, and chickens are intelligent, emotional individuals with unique personalities?”

If I knew which intervention best did that, and which most reduced animal product consumption, and they were different, I'm not sure which I'd be more excited about funding (but I'd be interested if other people have a strong opinion about this).

Comment by clairezabel on Giving What We Can is Cause Neutral · 2016-04-22T18:25:46.469Z · score: 1 (1 votes) · EA · GW

Thanks for the writeup, Michelle. I thought it was a really clear introduction to the topic.

However, it makes me curious about why GWWC's materials are so focused on global poverty, given the organization's explicit cause neutrality. I can think about some possible reasons for it but don't have a strong intuition about which is driving your thinking.

Comment by clairezabel on What is up with carbon dioxide and cognition? An offer · 2016-04-15T06:24:56.099Z · score: 3 (3 votes) · EA · GW

Fwiw, I really enjoy the more specific posts on the forum. I find them more valuable than the broader comments-on-the-movement posts, and I think the usefulness of the forum would increase if more posts were like this one.

Comment by clairezabel on Giving What We Can's 6 monthly update · 2016-02-10T05:58:12.959Z · score: 3 (3 votes) · EA · GW

You can have some of my points as well. This was super helpful and interesting to read.

Comment by clairezabel on The Valentine’s Day Gift That Saves Lives · 2016-02-02T00:27:41.547Z · score: 20 (26 votes) · EA · GW

Downvoted this.

I worry that you're basically shoehorning everything into an opportunity for EA. Like, "Halloween? The perfect time to do EA outreach! What's scarier than malaria, factory farming, and x-risks!" "Thanksgiving? How better to give thanks for your good fortune than to help the less fortunate!" "Fourth of July? Celebrate the birth of our great nation by, uh, helping with something that's not-so-great."

I doubt donations would be the most romantic gift for most people. They may be the most altruistic ones, but don't confuse altruism for everything else that's nice in the world. The idea that the most altruistic thing would also be the most romantic thing seems like a really obvious example of suspicious convergence. Either it's somewhat deceptive, or you two were damn lucky.

For people that aren't EAs, I think this seems spammy, which makes EA look bad. For people that are already hardcore EAs (i.e. most of the people on this forum) I think the connection between EA and romance seems contrived. For me, since I spend time with mostly EAs, a donation would be the most common, obvious, impersonal type of gift I could plausibly imagine being given (which is great from acquaintances and extended family and distant friends, less amazing from a romantic partner).

I am in favor of people for whom altruism feels romantic doing altruistic things on Valentine's Day or anytime else. I'm weakly in favor of people who want altruistic gifts asking for them (although I worry that people often fail to consider how this affects the gift-giver). But overall, the link here seems especially tenuous and irritating to people trying to enjoy the not-so-altruistic but romantic spirit of Valentine's Day.

Comment by clairezabel on Beware surprising and suspicious convergence · 2016-01-26T00:23:58.151Z · score: 2 (2 votes) · EA · GW

Michelle, I'm a little unclear about what you mean here. It didn't seem like the post was arguing against thinking that, all things considered, a given cause is most important. Rather, that the most important cause will still involve tradeoffs; it won't do everything best, and may be harmful in some ways. I don't see how that contradicts the need to defend global poverty as the most effective cause area, all things considered.

Also, I'm curious about who you steelman global poverty as the best far future cause against. For new/potential GWWC-ers, can't you just demonstrate why it's plausibly better than most things (I'm assuming these people aren't already into animal suffering alleviation or x-risk reduction. And if they are, why are you arguing with them?)? Or, better yet, say the issue is complicated and present the main important arguments?

Comment by clairezabel on Against segregating EAs · 2016-01-21T19:44:40.970Z · score: 4 (4 votes) · EA · GW

It seems like we really don't know whether a more hierarchical structure is good for EA or not. Some types of organizations/institutions have hierarchies (most religions, governments, companies), and some (like more social movements, communities, friend groups) don't, or have extremely informal and loose ones.

At best, the hierarchies provide valuable information about merit and dedication level, facilitate coordination, and incentivize high-quality work. At worst, they fuck everything up completely.

I don't think we have good information about what structure would be best for EA. The idea that other social movements don't seem to have hierarchies isn't particularly convincing to me, because I doubt they're using the optimal structure, and especially doubt that they're using the optimal structure for a movement like EA. But I don't know and it seems like no one else does. I don't like the terms "hardcore" and "softcore" but haven't seen a convincing argument about whether these sorts of distinctions in general increase or decrease movement impact.

Comment by clairezabel on Ethical offsetting is antithetical to EA · 2016-01-15T00:05:14.342Z · score: 1 (1 votes) · EA · GW

Cool, this mostly seems right.

I think the harmfulness of offsetting's focus on collectively anthropogenic sources of suffering is still being underestimated in these conversations. (I'm using "collectively anthropogenic" because there are potential sources of badness, like UFAI, that are anthropogenic but only caused by a few people, so the idea of offsetting would be useless to spread to most people to address the problem of UFAI. Also, offsetting the harm done by UFAI would be, uh, tricky.) I think offsetting might even reinforce a non-interventionist mindset that could prove extremely harmful for addressing problems like wild animal suffering.

One good aspect of offsetting that I think I initially underestimated is the way it can be used as a psychological tool for beginning to alieve that a cause area matters. For example, I can imagine an individual who is beginning to suspect animals suffering is important, but finds the idea of vegetarianism or veganism daunting, and shies away from it and thus doesn't want to think more about animal suffering. For them, offsetting could be a good bridge step. I don't think this conflicts with anything I said, but I don't want people to feel like it's shameful to use this tool.

I'd want to add on to:

Pro 3: If you're just offsetting, it's worth only as much as one additional vegan (if your numbers are right). I haven't seen evidence that ethical offsetting leads to big regular donors. It may, and if you just meant to bring up the possibility, that seems reasonable.

Pro 4: People who eat animal products can donate to animal charities even if it's not offsetting. That's great! But you don't need offsetting to introduce that possibility. I think offsetting harmfully frames the discussion around them "making up" for their behavior, instead of possibly just making large donations that help lots of animals. Many vegetarians enthusiastically make large donations to animal charities, which is wonderful, without worrying about offsetting. I don't know what happened at your last meetup but I think it's awesome when nonvegans donate to animal charities.

Pro 6: I'm not sure how offsetting helps bridge this schism well. I can imagine some arguments about how it would help, and others about how it would hurt.

Con 5: I'm not sure how offsetting signals a willingness to defect. Could you explain that more?