The Sense-Making Web 2021-01-04T23:21:57.226Z
Effective Altruism and Rationalist Philosophy Discussion Group 2020-09-16T02:46:19.168Z
Mike Huemer on The Case for Tyranny 2020-07-16T09:57:13.701Z
Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? 2020-07-01T23:32:22.016Z
Making Impact Purchases Viable 2020-04-17T23:01:53.273Z
The World According to Dominic Cummings 2020-04-14T23:52:37.334Z
The Hammer and the Dance 2020-03-20T19:45:45.706Z
Inward vs. Outward Focused Altruism 2020-03-04T02:05:01.848Z
EA should wargame Coronavirus 2020-02-12T04:32:02.608Z
Why were people skeptical about RAISE? 2019-09-04T08:26:52.654Z
casebash's Shortform 2019-08-21T11:17:32.878Z
Rationality, EA and being a movement 2019-06-22T05:22:42.623Z
Most important unfulfilled role in the EA ecosystem? 2019-04-05T11:37:00.294Z
A List of Things For People To Do 2019-03-08T11:34:43.164Z
What has Effective Altruism actually done? 2019-01-14T14:07:50.062Z
If You’re Young, Don’t Give To Charity 2018-12-24T11:55:42.798Z
Rationality as an EA Cause Area 2018-11-13T14:48:25.011Z
Three levels of cause prioritisation 2018-05-28T07:26:32.333Z
Viewing Effective Altruism as a System 2017-12-28T10:09:43.004Z
EA should beware concessions 2017-06-14T01:58:47.207Z
Reasons for EA Meetups to Exist 2016-07-20T06:22:39.675Z
Population ethics: In favour of total utilitarianism over average 2015-12-22T22:34:53.087Z


Comment by casebash on Buck's Shortform · 2021-06-06T22:23:57.722Z · EA · GW

I'd be interested in this. I've been posting book reviews of the books I read to Facebook - mostly for my own benefit. These have mostly been written quickly, but if there was a decent chance of getting $500 I could pick out the most relevant books and relisten to them and then rewrite them.

Comment by casebash on Complexity and the Search for Leverage · 2021-05-30T05:07:00.240Z · EA · GW

+1 - I also see this as an area deserving of investigation.

Comment by casebash on Harrison D's Shortform · 2021-04-19T04:26:41.286Z · EA · GW


Comment by casebash on Harrison D's Shortform · 2021-04-18T05:01:40.607Z · EA · GW

How would you feel about reposting this in EAs for Political Tolerance? I'd also be happy to repost it for you if you'd prefer.

Comment by casebash on Concerns with ACE's Recent Behavior · 2021-04-18T01:39:12.862Z · EA · GW

"I'm honestly a bit flummoxed here. Why would contributing to a Facebook group explicitly aligned with one side of this dispute help avoid a split?"

The group is still new, so it's still unclear exactly how it'll turn out. But I don't think that's a completely accurate way of characterising the group. I expect that there are two main strands of thought within the group - some see themselves as fighting against woke tendencies, whilst others are more focused on peace-making and want to avoid taking a side.

Comment by casebash on Concerns with ACE's Recent Behavior · 2021-04-17T01:45:55.688Z · EA · GW

"On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly"

Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid write-up. So it does seem to be making a significantly new contribution to the discussion and not just rewarming leftovers.

It would have been better if Hypatia had emailed the organisation ahead of time. However, I believe ACE staff members might have already commented on some of these issues (correct me if I'm wrong). And it's more of a good practice than a strict requirement - I totally understand the urge to just get something out there.

"I'm sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk"

On the contrary, now that this has been written up on the forum, it gives people something to link to, so forum posts aren't just read by people who regularly read the forum. In any case, this kind of high-quality write-up is unlikely to have a significant effect on alienating people compared to some of the lower-quality discussions of these topics that occur in person or on Facebook. So, from my perspective, it doesn't really make sense to focus on this post. If you want to avoid a split in the movement, I'd encourage you to join the Effective Altruists for Political Tolerance Facebook group and contribute there.

I would also suggest worrying less about PR risks. People who want to attack EA can already go around shouting about 'techno-capitalists', 'overwhelmingly white straight males', 'AI alarmists', etc. If someone wants to find something negative, they'll find something negative.

Comment by casebash on Rationality as an EA Cause Area · 2021-04-08T00:22:58.933Z · EA · GW

Part of my model is that there is decreasing marginal utility as you invest more effort in one form of outreach, so there can be significant benefit in investing small amounts of resources in alternate forms of outreach.

Comment by casebash on EA Debate Championship & Lecture Series · 2021-04-05T23:45:50.454Z · EA · GW

I hope you find funding to pay someone to organise this, as I suspect this program could be extremely impactful.

I would also love to see some amount of prize money funded for this. I wouldn't be surprised if a relatively small amount of money by philanthropic standards could tempt more of the top debaters to enter.

Comment by casebash on EA Debate Championship & Lecture Series · 2021-04-05T23:13:21.380Z · EA · GW

I actually found the Facebook group very difficult to search for - link is here.

Comment by casebash on Our plans for hosting an EA wiki on the Forum · 2021-03-02T21:59:11.819Z · EA · GW

Making a Wiki successful is always about seeding content. There's a lot of past content that could be copied over and updated, but it's not pleasant work, so it's good that Pablo has a grant.

Comment by casebash on In diversity lies epistemic strength · 2021-02-10T00:56:12.162Z · EA · GW

As an addendum: First, suppose you compare a group of random people from the same demographic to a random group of people from different demographics. Next, suppose you compare a group of random lawyers to a group of random lawyers of different demographics. I would suggest that in the second case the increase in diversity from adding demographic diversity would be significantly reduced, as the bar to becoming a lawyer would filter out a lot of the diversity of experiences present in the first case. For example, a greater proportion of African Americans experience poverty than the general population, but the difference among those who become lawyers would be much smaller.

Comment by casebash on In diversity lies epistemic strength · 2021-02-10T00:45:29.598Z · EA · GW

"They were founded under the premise that conservative viewpoints are underrepresented in scientific discourse" - that's definitely a possibility, although I suspect that for research into underrepresented groups in general, almost all research will have been conducted by people with strong pre-existing beliefs about whether or not such a group is underrepresented.

I think there's value in considering people's possible psychological motivations, but I find it more helpful to consider these for all parties. In such a conversation, the rich could very well be afraid of losing their privilege and the poor could very well be jealous or resentful.

Comment by casebash on In diversity lies epistemic strength · 2021-02-10T00:38:43.603Z · EA · GW

It was a general comment about how this lens is often applied in practice, even though this isn't the only possible way for it to be applied.

Comment by casebash on In diversity lies epistemic strength · 2021-02-07T06:53:19.934Z · EA · GW

"As we cannot measure the diversity of perspectives of a person directly, our best proxy for it is demographic diversity"

Demographic diversity is a useful proxy and may add something additional even if we did have diversity of general philosophy. However, we can measure diversity of perspectives directly, e.g. by running surveys like Heterodox Academy has.

"The answer here is that objectivity is not something that a single person has, but that objectivity is a social achievement of a diverse community"

Feminism offers some valuable lenses, but I feel it often leads to a hyperfocus on the underprivileged. Suppose we're discussing raising taxes on the rich; it might be useful to have a rich guy in the room. They might share some useful perspectives like, "It won't change the behaviour of my friends one bit. Most of us won't even notice. Our accountants handle our taxes, so we have no idea how much we're paying" or "If the California tax law passes, I'm headed to Texas". They might lie, but that's true of everyone. They might be biased, but the poor are likely to be biased as well.

I'm not claiming this is equally important as representing the perspectives of the poor, just that we shouldn't be hyperfocused.

Comment by casebash on Possible gaps in the EA community · 2021-01-24T11:14:04.405Z · EA · GW

I also think Charity Science might have tried getting people to pledge in their wills.

Comment by casebash on The Sense-Making Web · 2021-01-24T02:37:28.490Z · EA · GW

Yeah, hopefully at some point I find time to make another post, linking to various aspects of what I'd define as the community. I guess who is in or not is not well-defined, as it's not really a single community. Rather, it's a bunch of groups with similar kinds of people who seem to be talking to each other and talking about similar kinds of things, most of whom I think would agree that they're doing something like sensemaking.

Regarding your second question, if you head over to the Stoa or listen to Both/And, you'll see people from across the spectrum, although not really many strong social justice proponents. I suppose my suspicion is mainly driven by the intuition that ending the culture wars requires a movement with positive content of its own and not merely a negative critique as Quillette and (to a lesser degree) Persuasion seem to do. People need a reason to join apart from simply being sick of the culture wars.

Comment by casebash on Possible gaps in the EA community · 2021-01-24T02:28:31.319Z · EA · GW

Yeah, I agree that there would be significant benefits to trying to set up another academic research institute at a university more focused on economics.

Comment by casebash on Charles_Dillon 's Shortform · 2020-12-04T11:36:02.812Z · EA · GW

This is full, but it's worth getting people to subscribe for the future.

Comment by casebash on When you shouldn't use EA jargon and how to avoid it · 2020-10-26T23:33:28.240Z · EA · GW

Hmm... often I think it is nice to have a standard term for a phenomenon so that people don't have to figure out how to express a certain concept each time and then hope that everyone else can follow. Language also has the advantage that insofar as we convince people to adopt our language, we draw them into our worldview.

Comment by casebash on List of EA-related organisations · 2020-10-22T04:53:19.946Z · EA · GW

This should really be a Wiki page instead since these lists (I even made one myself in the past) always become outdated.

Comment by casebash on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T05:26:48.574Z · EA · GW

This is a really challenging situation - I could honestly see myself leaning either way on this kind of scenario. I used to lean a lot more towards saying whatever I thought was true and ignoring the consequences, but lately I've been thinking that it's important to pick your battles.

I think the key sentence is this one - "On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject."

What seems more important to me is not necessarily these kinds of edge cases, but that we talk openly about the threat potentially posed. Replacing the talk with a discussion about cancel culture instead seems like it could have been a brilliant Jiu Jitsu move. I'm actually much more worried about what's been going on with ACE than anything else.

Comment by casebash on The emerging school of patient longtermism · 2020-08-09T22:09:55.760Z · EA · GW

Thanks, that was useful. I didn't realise that his argument involved 1+2 and not just 1 by itself. That said, if the hinge of history was some point in the past, then that doesn't affect our decisions as we can't invest in the past. And perhaps it's a less extraordinary coincidence that the forward-looking hinge of history (where we restrict the time period from now until the end of humanity) could be now, especially if in the average case we don't expect history to go on much longer.

Comment by casebash on The emerging school of patient longtermism · 2020-08-07T23:09:03.139Z · EA · GW

I've never found Will's objections to the hinge of history argument persuasive. Convincing me that there was a greater potential impact in past times than I thought, i.e. that it would have been very influential to prevent the rise of Christianity, shouldn't make me disbelieve the arguments that AI or bio risks are likely to lead to catastrophe in the next few decades if we don't do anything about them. But maybe I just need to reread the argument again.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-06T23:06:19.920Z · EA · GW

DNA engineering has some positive points, but imagine the power that having significant control over its citizens' personalities would give the government. That shouldn't be underestimated.

Comment by casebash on Long-term investment fund at Founders Pledge · 2020-07-06T03:32:47.074Z · EA · GW

The real hinge here is how much we should expect the future to be a continuation of the past and how much we update based on our best predictions. Given what we know about existential risk and the likelihood that AI will dramatically change our economy, I don't think this idea makes sense in the current context.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-05T00:25:44.982Z · EA · GW

I agree that such a system would be terrifying. But I worry that its absence would be even more terrifying. Limited surveillance systems work decently for gun control, but when we get to the stage where someone can kill tens of thousands or even millions instead of a hundred I suspect it'll break down.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-02T03:00:15.861Z · EA · GW

Thanks for posting such a detailed answer!

Comment by casebash on New EA International Innovation Fellowship · 2020-06-28T13:34:23.580Z · EA · GW

It's great to hear that you are setting this up. However, the current post seems light on details. Why are these areas of particular interest? What kind of commitment are you hoping for from participants?

Comment by casebash on Slate Star Codex, EA, and self-reflection · 2020-06-27T00:19:54.769Z · EA · GW

I think people are quite reasonably deciding that this post isn't worth taking the time to engage with. I'll just make three points even though I could make more:

"A good rule of thumb might be that when InfoWars takes your side, you probably ought to do some self-reflection on whether the path your community is on is the path to a better world." - Reversed Stupidity is Not Intelligence

"In response, the Slate Star Codex community basically proceeded to harass and threaten to dox both the editor and journalist writing the article. Multiple individuals threatened to release their addresses, or explicitly threatened them with violence." - The author is completely ignoring the fact that Scott Alexander specifically told people to be nice, not to take it out on them and didn't name the journalist. This seems to suggest that the author isn't even trying to be fair.

"I have nothing to say to you — other people have demonstrated this point more clearly elsewhere" - I'm not going to claim that such differences exist, but if the author isn't open to dialog on one claim, it's reasonable to infer that they mightn't be open to dialog on other claims even if they are completely unrelated.

Quite simply, this is a low-quality post, and "I'm going to write a low-quality post on topic X and you have to engage with me because topic X is important regardless of the quality" just gives a free pass to low-quality content. But doesn't it spur discussion? I've actually found that low-quality posts most often don't even provide the claimed benefit: they don't change people's minds and tend to lead to low-quality discussion.

Comment by casebash on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T01:28:57.337Z · EA · GW

It's worth remembering, though, that people who paid for the book are much more likely to have read it.

Comment by casebash on EA and tackling racism · 2020-06-16T23:20:08.915Z · EA · GW

What do you think about the fact that many in the field are pretty open that they are pursuing enquiry in service of an ideology rather than neutral enquiry (using lines like "all fields are ideological whether they know it or not")?

Comment by casebash on EA and tackling racism · 2020-06-10T22:52:43.751Z · EA · GW

"It's a bit concerning that the community level of knowledge of the bodies of work that deal with these issues is just average" - I do think there are valuable lessons to be drawn from the literature; unfortunately, a) lots of the work is low quality or under-evidenced, and b) discussion of these issues often ends up being highly divisive whilst not changing many people's minds.

Comment by casebash on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T22:44:46.458Z · EA · GW

"If the self-oriented reasons for action leave it largely underdetermined how personal flourishing would look like" - If we accept pleasure and pain, then we can evaluate other actions by how likely they are to lead to pleasure/pain in the long term, so I don't see how actions are underdetermined.

Comment by casebash on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-08T03:07:34.847Z · EA · GW

I'm surprised that you put moral realism on the same tier as self-oriented reasons for action. It would seem much more astounding to claim that pain and pleasure are neither good nor bad *for me* than to claim that there's no objective stance from which others should consider my pain good or bad. The Pascal's wager argument is also much stronger.

Comment by casebash on Racial Demographics at Longtermist Organizations · 2020-05-04T22:51:07.344Z · EA · GW

I think percentages are misleading. In terms of influencing demographic X, what matters isn't so much how many people of demographic X there are in these organisations, but how well-respected they are.

Comment by casebash on Racial Demographics at Longtermist Organizations · 2020-05-02T21:51:53.072Z · EA · GW

I'm generally against evaluating diversity programs by how much diversity they create. It's definitely a relevant metric, but we don't evaluate AMF by how many bednets they hand out; we evaluate the impact of those bednets.

Comment by casebash on Making Impact Purchases Viable · 2020-04-21T23:15:38.669Z · EA · GW

I guess that seems so far off that I wasn't focusing on it. I'm more interested in how to establish a working impact purchase in the meantime.

Comment by casebash on Making Impact Purchases Viable · 2020-04-20T01:17:48.707Z · EA · GW

"These people where encouraging me to use my last savings to retrain to a risky career, but putting in their money was out of the question" - Yeah, I'm sorry you had that experience, that seems unpleasant. Maybe they didn't understand the financial precariousness of your situation? Like many EAs wouldn't find it hard to land a cushy job in normal times and likely have parents who'd bail them out worst comes to worst and might have assumed that you were in the same position without realising they were making an assumption?

Comment by casebash on Making Impact Purchases Viable · 2020-04-20T01:09:51.545Z · EA · GW

"Or if you think an outcome was mostly bad luck, fund them more than just impact purchase"

Yeah, luck is another argument I considered covering, but didn't get into. Sometimes the impact of a project is just a matter of being in the right place at the right time. Of course, it's hard to tell; to a certain extent people make their own luck.

"But in most cases I would suggest straight up impact purchase, because anything else is really hard and you'll probably get the adjustments wrong."

I guess this would be a key point where we differ. I haven't thought deeply about this, but my intuition would be that adjustments would greatly improve impact. For example, a small project extremely competently implemented and a big project poorly implemented might have the exact same impact, but the former would be a stronger signal.

Comment by casebash on The Case for Impact Purchase | Part 1 · 2020-04-19T06:38:03.772Z · EA · GW

I wrote up my thoughts in this post: Making Impact Purchases Viable. Briefly, I argue that:

  • Restrictions on sellers are necessary to fix an imbalance between the size of the buyer's market and the seller's market
  • Restrictions are also important for providing sufficient incentive for a reasonable proportion of the impact to be counterfactual
  • Another issue I don't have a solution to is the potential of impact purchases to lead to bitterness or demotivate people
Comment by casebash on Making Impact Purchases Viable · 2020-04-19T06:27:43.023Z · EA · GW

"That line of reasoning also suggests that EA orgs should ask each of their employees whether they will work for free if they don't get a salary; and refuse to pay a salary to employees who answer "yes"."

Maybe, but I imagine that the number of people who'd work just as hard long-term would be about zero, so more of the impact would be counterfactual.

Comment by casebash on Effective Altruism and Free Riding · 2020-04-15T10:31:31.594Z · EA · GW

Even though fair trade is ineffective on an individual level, it may be effective on a collective level because enough people find it appealing for broad adoption. Deciding to ignore it weakens any attempt to establish buying fair trade as a societal norm.

EAs don't arise out of a vacuum, but out of society. If society is doing well, then EAs are more likely to do well too and hence to have more impact. So by not donating to a local charity, you are refusing to invest in the society that provided you the chance to have an impact in the first place.

Not saying you should donate locally or buy fair trade, just pointing out one worry with ignoring them.

Comment by casebash on Effective Altruism and Free Riding · 2020-03-28T21:56:20.887Z · EA · GW

Thanks so much for writing this. I've had similar worries regarding local charity and things like fair trade for a while.

Comment by casebash on The Hammer and the Dance · 2020-03-21T10:24:39.769Z · EA · GW

It's hard to do a summary without encouraging people to read the summary instead of the article.

Comment by casebash on [Link] Updated Drawdown now available, incl. 2020 Review · 2020-03-08T01:11:08.402Z · EA · GW

My understanding from looking briefly was that Drawdown focused on total reduction potential, not cost-effectiveness

Comment by casebash on Causal diagrams of the paths to existential catastrophe · 2020-03-02T00:23:54.631Z · EA · GW

These diagrams look really useful for encouraging people to map out potential paths to existential risk and potential interventions more carefully!

Comment by casebash on Why aren't we talking about personal development? · 2020-03-02T00:17:00.091Z · EA · GW

I suspect that part of the reason why this has happened is that the EA community is closely associated with the rationality community, so it's often easiest just to have the personal development discussion online over there. Plus, another reason people mightn't feel a need for it online is that lots of discussion of personal development occurs informally at meetups.

Comment by casebash on Final update on EA Norway's Operations Project · 2020-01-12T12:32:31.026Z · EA · GW

What's Good Growth?

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T20:33:24.639Z · EA · GW

Yeah, EA is likely less compelling when this is defined as feeling motivating/interesting to the average person at the moment, although it is hard to judge since EA hasn't been around for anywhere near as long. Nonetheless, many of the issues EAs care about seem way too weird for the average person. Then again, if you look at feminism, a lot of the ideas were only ever present in an overly academic form, and part of the reason they are so influential now is that they have filtered down into the general population in a simpler form (such as "girl power", "feeling good, rationality bad"). Plus, social justice is more likely to benefit the people supporting it in the here and now than EA, which focuses more on other countries, other species and other times - always a tough sell.

"SJ is an extremely inclusive movement (basically by definition)"

I'm generally wary of argument by definition. Indeed, SJ is very inclusive to members of a racial minority or those who are LGBTI, but is very much not when it comes to ideological diversity. And some strands can be very unwelcoming to members of majorities. So it's much more complex than that.

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T17:14:04.571Z · EA · GW

"There are definitely many who see these more in the movement/tribe sense" - For modern social justice, this tends to focus on who is a good or bad person, while for EA it tends to focus more on who to trust. (There's a less dominant strand of thought within social justice that says we shouldn't blame individuals for systemic issues, but it's relatively rare.) EA makes some efforts towards being anti-tribal, while social justice is less worried about the downsides of being tribal.