Posts

Effective Altruism and Rationalist Philosophy Discussion Group 2020-09-16T02:46:19.168Z · score: 15 (4 votes)
Mike Huemer on The Case for Tyranny 2020-07-16T09:57:13.701Z · score: 24 (9 votes)
Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? 2020-07-01T23:32:22.016Z · score: 23 (11 votes)
Making Impact Purchases Viable 2020-04-17T23:01:53.273Z · score: 15 (8 votes)
The World According to Dominic Cummings 2020-04-14T23:52:37.334Z · score: 4 (7 votes)
The Hammer and the Dance 2020-03-20T19:45:45.706Z · score: 7 (2 votes)
Inward vs. Outward Focused Altruism 2020-03-04T02:05:01.848Z · score: 8 (3 votes)
EA should wargame Coronavirus 2020-02-12T04:32:02.608Z · score: 35 (16 votes)
Why were people skeptical about RAISE? 2019-09-04T08:26:52.654Z · score: 14 (6 votes)
casebash's Shortform 2019-08-21T11:17:32.878Z · score: 6 (1 votes)
Rationality, EA and being a movement 2019-06-22T05:22:42.623Z · score: 31 (22 votes)
Most important unfulfilled role in the EA ecosystem? 2019-04-05T11:37:00.294Z · score: 14 (4 votes)
A List of Things For People To Do 2019-03-08T11:34:43.164Z · score: 43 (30 votes)
What has Effective Altruism actually done? 2019-01-14T14:07:50.062Z · score: 29 (14 votes)
If You’re Young, Don’t Give To Charity 2018-12-24T11:55:42.798Z · score: 17 (10 votes)
Rationality as an EA Cause Area 2018-11-13T14:48:25.011Z · score: 22 (26 votes)
Three levels of cause prioritisation 2018-05-28T07:26:32.333Z · score: 9 (16 votes)
Viewing Effective Altruism as a System 2017-12-28T10:09:43.004Z · score: 21 (21 votes)
EA should beware concessions 2017-06-14T01:58:47.207Z · score: 1 (11 votes)
Reasons for EA Meetups to Exist 2016-07-20T06:22:39.675Z · score: 11 (11 votes)
Population ethics: In favour of total utilitarianism over average 2015-12-22T22:34:53.087Z · score: 0 (0 votes)

Comments

Comment by casebash on The emerging school of patient longtermism · 2020-08-09T22:09:55.760Z · score: 2 (1 votes) · EA · GW

Thanks, that was useful. I didn't realise that his argument involved 1+2 and not just 1 by itself. That said, if the hinge of history was at some point in the past, then that doesn't affect our decisions, as we can't invest in the past. And perhaps it's a less extraordinary coincidence that the forward-looking hinge of history (restricting the time period to from now until the end of humanity) could be now, especially if in the average case we don't expect history to go on much longer.

Comment by casebash on The emerging school of patient longtermism · 2020-08-07T23:09:03.139Z · score: 7 (5 votes) · EA · GW

I've never found Will's objections to the hinge of history argument persuasive. Convincing me that there was greater potential impact at past times than I thought, e.g. that it would have been very influential to prevent the rise of Christianity, shouldn't make me disbelieve the arguments that AI or bio risks are likely to lead to catastrophe in the next few decades if we don't do anything about them. But maybe I just need to reread the argument again.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-06T23:06:19.920Z · score: 2 (1 votes) · EA · GW

DNA engineering has some positive points, but imagine the power that significant control over its citizens' personalities would give a government. That shouldn't be underestimated.

Comment by casebash on Long-term investment fund at Founders Pledge · 2020-07-06T03:32:47.074Z · score: 2 (1 votes) · EA · GW

The real hinge here is how much we should expect the future to be a continuation of the past and how much we should update based on our best predictions. Given what we know about existential risk and the likelihood that AI will dramatically change our economy, I don't think this idea makes sense in the current context.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-05T00:25:44.982Z · score: 2 (1 votes) · EA · GW

I agree that such a system would be terrifying. But I worry that its absence would be even more terrifying. Limited surveillance systems work decently for gun control, but when we get to the stage where someone can kill tens of thousands or even millions instead of a hundred I suspect it'll break down.

Comment by casebash on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-02T03:00:15.861Z · score: 4 (2 votes) · EA · GW

Thanks for posting such a detailed answer!

Comment by casebash on New EA International Innovation Fellowship · 2020-06-28T13:34:23.580Z · score: 6 (4 votes) · EA · GW

It's great to hear that you are setting this up. However, the current post seems light on details. Why are these areas of particular interest? What kind of commitment are you hoping for from participants?

Comment by casebash on Slate Star Codex, EA, and self-reflection · 2020-06-27T00:19:54.769Z · score: 54 (20 votes) · EA · GW

I think people are quite reasonably deciding that this post isn't worth taking the time to engage with. I'll just make three points even though I could make more:

"A good rule of thumb might be that when InfoWars takes your side, you probably ought to do some self-reflection on whether the path your community is on is the path to a better world." - Reversed Stupidity is Not Intelligence

"In response, the Slate Star Codex community basically proceeded to harass and threaten to dox both the editor and journalist writing the article. Multiple individuals threatened to release their addresses, or explicitly threatened them with violence." - The author is completely ignoring the fact that Scott Alexander specifically told people to be nice, not to take it out on them and didn't name the journalist. This seems to suggest that the author isn't even trying to be fair.

"I have nothing to say to you — other people have demonstrated this point more clearly elsewhere" - I'm not going to claim that such differences exist, but if the author isn't open to dialog on one claim, it's reasonable to infer that they mightn't be open to dialog on other claims even if they are completely unrelated.

Quite simply, this is a low-quality post, and "I'm going to write a low-quality post on topic X and you have to engage with me because topic X is important regardless of the quality" just gives a free pass to low-quality content. But doesn't it spur discussion? I've actually found that low-quality posts most often don't even provide that claimed benefit: they don't change people's minds, and they tend to lead to low-quality discussion.

Comment by casebash on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-18T01:28:57.337Z · score: 9 (3 votes) · EA · GW

It's worth remembering, though, that people who paid for the book are much more likely to have read it.

Comment by casebash on EA and tackling racism · 2020-06-16T23:20:08.915Z · score: 6 (4 votes) · EA · GW

What do you think about the fact that many in the field are pretty open that they are pursuing enquiry into how to advance an ideology rather than neutral enquiry (using lines like "all fields are ideological whether they know it or not")?

Comment by casebash on EA and tackling racism · 2020-06-10T22:52:43.751Z · score: 5 (3 votes) · EA · GW

"It's a bit concerning that the community level of knowledge of the bodies of work that deal with these issues is just average" - I do think there are valuable lessons to be drawn from the literature, unfortunately a) lots of the work is low quality or under-evidenced b) discussion of these issues often ends up being highly divisive, whilst not changing many people's minds

Comment by casebash on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T22:44:46.458Z · score: 2 (1 votes) · EA · GW

"If the self-oriented reasons for action leave it largely underdetermined how personal flourishing would look like" - If we accept pleasure and pain, then we can evaluate other actions in high likely they are to lead to pleasure/pain in the long term, so I don't see how actions are underdetermined.

Comment by casebash on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-08T03:07:34.847Z · score: 3 (2 votes) · EA · GW

I'm surprised that you put moral realism on the same tier as self-oriented reasons for action. It would seem much more astounding to claim that pain and pleasure are neither good nor bad *for me* than to claim that there's no objective stance by which others should consider my pain good or bad. The Pascal's wager argument is also much stronger.

Comment by casebash on Racial Demographics at Longtermist Organizations · 2020-05-04T22:51:07.344Z · score: 2 (1 votes) · EA · GW

I think percentages are misleading. In terms of influencing demographic X, what matters isn't so much how many people of demographic X there are in these organisations, but how well-respected they are.

Comment by casebash on Racial Demographics at Longtermist Organizations · 2020-05-02T21:51:53.072Z · score: 31 (16 votes) · EA · GW

I'm generally against evaluating diversity programs by how much diversity they create. It's definitely a relevant metric, but we don't evaluate AMF by how many bednets they hand out; we evaluate the impact of those bednets.

Comment by casebash on Making Impact Purchases Viable · 2020-04-21T23:15:38.669Z · score: 2 (1 votes) · EA · GW

I guess that seems so far off that I wasn't focusing on it. I'm more interested in how to establish a working impact purchase in the meantime.

Comment by casebash on Making Impact Purchases Viable · 2020-04-20T01:17:48.707Z · score: 2 (1 votes) · EA · GW

"These people where encouraging me to use my last savings to retrain to a risky career, but putting in their money was out of the question" - Yeah, I'm sorry you had that experience, that seems unpleasant. Maybe they didn't understand the financial precariousness of your situation? Like many EAs wouldn't find it hard to land a cushy job in normal times and likely have parents who'd bail them out worst comes to worst and might have assumed that you were in the same position without realising they were making an assumption?

Comment by casebash on Making Impact Purchases Viable · 2020-04-20T01:09:51.545Z · score: 2 (1 votes) · EA · GW

"Or if you think an outcome was mostly bad luck, fund them more than just impact purchase"

Yeah, luck is another argument I considered covering, but didn't get into. Sometimes the impact of a project is just a matter of being in the right place at the right time. Of course, it's hard to tell; to a certain extent people make their own luck.

"But in most cases I would suggest straight up impact purchase, because anything else is really hard and you'll probably get the adjustments wrong."

I guess this would be a key point where we differ. I haven't thought deeply about this, but my intuition would be that adjustments would greatly improve impact. For example, a small project extremely competently implemented and a big project poorly implemented might have the exact same impact, but the former would be a stronger signal.

Comment by casebash on The Case for Impact Purchase | Part 1 · 2020-04-19T06:38:03.772Z · score: 8 (5 votes) · EA · GW

I wrote up my thoughts in this post: Making Impact Purchases Viable. Briefly, I argue that:

  • Restrictions on sellers are necessary to fix an imbalance between the size of the buyer's market and the seller's market
  • Restrictions are also important for providing sufficient incentive for a reasonable proportion of the impact to be counterfactual
  • Another issue I don't have a solution to is the potential of impact purchases to lead to bitterness or demotivate people

Comment by casebash on Making Impact Purchases Viable · 2020-04-19T06:27:43.023Z · score: 2 (1 votes) · EA · GW

"That line of reasoning also suggests that EA orgs should ask each of their employees whether they will work for free if they don't get a salary; and refuse to pay a salary to employees who answer 'yes'."

Maybe, but I imagine that the number of people who'd work just as hard long-term would be about zero, so more of the impact would be counterfactual.

Comment by casebash on Effective Altruism and Free Riding · 2020-04-15T10:31:31.594Z · score: 3 (2 votes) · EA · GW

Even though fair trade is ineffective on an individual level, it may be effective on a collective level because enough people find it appealing for broad adoption. Deciding to ignore it weakens any attempt to establish buying fair trade as a societal norm.

EAs don't arise out of a vacuum, but out of society. If society is doing well, then EAs are more likely to do well too and hence to have more impact. So by not donating to a local charity, you are refusing to invest in the society that provided you the chance to have an impact in the first place.

I'm not saying you should donate locally or buy fair trade; I'm just pointing out one worry with ignoring them.

Comment by casebash on Effective Altruism and Free Riding · 2020-03-28T21:56:20.887Z · score: 4 (3 votes) · EA · GW

Thanks so much for writing this. I've had similar worries regarding local charity and things like fair trade for a while.

Comment by casebash on The Hammer and the Dance · 2020-03-21T10:24:39.769Z · score: 2 (1 votes) · EA · GW

It's hard to do a summary without encouraging people to read the summary instead of the article.

Comment by casebash on [Link] Updated Drawdown now available, incl. 2020 Review · 2020-03-08T01:11:08.402Z · score: 20 (7 votes) · EA · GW

My understanding from looking briefly was that Drawdown focused on total reduction potential, not cost-effectiveness.

Comment by casebash on Causal diagrams of the paths to existential catastrophe · 2020-03-02T00:23:54.631Z · score: 4 (3 votes) · EA · GW

These diagrams look really useful for encouraging people to map out potential paths to existential risk and potential interventions more carefully!

Comment by casebash on Why aren't we talking about personal development? · 2020-03-02T00:17:00.091Z · score: 4 (3 votes) · EA · GW

I suspect that part of the reason this has happened is that the EA community is closely associated with the rationality community, so it's often easiest just to have the personal development discussion online over there. Plus, another reason people mightn't feel a need for it online is that lots of discussion of personal development occurs informally at meetups.

Comment by casebash on Final update on EA Norway's Operations Project · 2020-01-12T12:32:31.026Z · score: 2 (1 votes) · EA · GW

What's Good Growth?

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T20:33:24.639Z · score: 14 (9 votes) · EA · GW

Yeah, EA is likely less compelling when this is defined as feeling motivating/interesting to the average person at the moment, although it's hard to judge since EA hasn't been around for anywhere near as long. Many of the issues EAs care about seem way too weird for the average person. Then again, if you look at feminism, a lot of its ideas were only ever present in an overly academic form; part of the reason they are so influential now is that they have filtered down into the general population in a simpler form (such as "girl power", "feeling good, rationality bad"). Plus, social justice is more likely to benefit the people supporting it in the here and now than EA, which focuses more on other countries, other species and other times, and that is always a tough sell.

"SJ is an extremely inclusive movement (basically by definition)"

I'm generally wary of argument by definition. Indeed, SJ is very inclusive to members of a racial minority or those who are LGBTI, but is very much not when it comes to ideological diversity. And some strands can be very unwelcoming to members of majorities. So it's much more complex than that.

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T17:14:04.571Z · score: 6 (4 votes) · EA · GW

"There are definitely many who see these more in the movement/tribe sense" - For modern social justice this tends to focus on who is a good or bad person, while for EA this tends to focus more on who to trust. (There's a less dominant strand of thought within social justice that says we shouldn't blame individuals for systematic issues, but it's relatively rare). EA makes some efforts towards being anti-tribal, while social justice is less worried about the downsides of being tribal.

Comment by casebash on Updates from Leverage Research: history, mistakes and new focus · 2019-11-26T10:46:39.051Z · score: 8 (5 votes) · EA · GW

Greater knowledge of psychology would be powerful, but why should we expect the sign to be positive, instead of say making the world worse by improving propaganda and marketing?

Comment by casebash on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T22:53:40.322Z · score: 24 (12 votes) · EA · GW

Why is Leverage working on psychology? What is it hoping to accomplish?

Comment by casebash on "EA residencies" as an outreach activity · 2019-11-18T00:50:34.540Z · score: 6 (4 votes) · EA · GW

This seems like a good idea and definitely something I'd consider once I learn enough about AI that this would be valuable for others.

Comment by casebash on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T08:34:07.988Z · score: 4 (2 votes) · EA · GW

"It’s not clear that advanced artificial intelligence is going to arrive any time within the next several decades" - On the other hand, it's seems, at least to me, most likely that it will. Even if several more breakthroughs would be required to reach general intelligence, those may still come relatively fast as deep learning has now finally become useful enough in a wide enough array of applications that there is far more money and talent in the field than there ever was before by orders of magnitude. Now this by itself wouldn't necessarily guarantee fast advancement in a field, but AI research is still the kind of area where a single individual can push the research forward significantly just by themselves. And governments are beginning to realise the strategic importance of AI, so even more resources are flooding the field.

"One of the top AI safety organizations, MIRI, has now gone private so now we can’t even inspect whether they are doing useful work." - this is not an unreasonable choice and we have their past record to go on. Nonetheless, there are more open options if this is important to you.

"Productive AI safety research work is inaccessible to over 99.9% of the population, making this advice almost useless to nearly everyone reading the article." - Not necessarily. Even if becoming good enough to be a researcher is very hard, it probably isn't nearly as hard to become good enough at a particular area to help mentor other people.

Comment by casebash on How can I hire an EA research assistant? · 2019-10-19T20:41:05.765Z · score: 5 (4 votes) · EA · GW

I'm definitely in favour of this kind of project since I feel more EAs should be experimenting with small projects.

Comment by casebash on Shapley values: Better than counterfactuals · 2019-10-11T21:29:16.794Z · score: 2 (1 votes) · EA · GW

"The situation seems pretty symmetric, though: if a politician builds roads just to get votes, and an NGO steps in and does something valuable with that, the politician's counterfactual impact is still the same as the NGO's" - true, but the NGO's counterfactual impact is reduced when I feel it's fairer for the NGO to be able to claim the full amount (though of course you'd never know the government's true motivations in real life)

Comment by casebash on Shapley values: Better than counterfactuals · 2019-10-11T08:45:35.124Z · score: 16 (8 votes) · EA · GW

The order indifference of Shapley values only makes sense from a perspective of perfect knowledge about what the other players will do. Without that, a party that spent a huge amount of money on a project that was almost certainly going to be wasteful, and that was only saved when another party appeared by sheer happenstance, was not making good spending decisions. Similarly, many agents won't be optimising for Shapley value; say a government spends money on infrastructure just to win political points, not caring whether it'll be used. It doesn't properly deserve a share of the gains when someone else intervenes with modifications that make the project actually effective (see the toy example below).

I feel that this article presents Shapley value as just plain superior, when instead a combination of Shapley value and counterfactual value would likely be a better metric. Beyond this, what you really want is something more like FDT, where you take into account that the decisions of some agents are subjunctively linked to yours and the decisions of others aren't. Even though my current theory is that very, very few agents are actually subjunctively linked to you, I suspect that thinking about problems in this fashion is likely to work reasonably well in practice. (I would need to dedicate a solid couple of hours to write out my reasons for believing this more concretely.)
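As a rough illustration of that asymmetry, here is a minimal sketch (a toy model of my own, not anything from the original post) that computes Shapley values by averaging each player's marginal contribution over all join orders. The characteristic function v, the payoff of 100 and the player names are purely hypothetical:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: t / len(orders) for p, t in totals.items()}

# Toy case: a government builds infrastructure that is worthless on its
# own; an NGO's intervention is what makes the project effective.
def v(coalition):
    return 100.0 if coalition == {"gov", "ngo"} else 0.0

print(shapley_values(["gov", "ngo"], v))
# -> {'gov': 50.0, 'ngo': 50.0}: Shapley splits the gain equally,
# regardless of order or motive, whereas each party's naive counterfactual
# impact is the full 100. That is the asymmetry discussed above.
```

Enumerating all permutations is exponential in the number of players, which is fine for a toy case like this but is part of why Shapley values are rarely computed exactly for large groups.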

Comment by casebash on casebash's Shortform · 2019-09-15T00:22:13.127Z · score: 12 (8 votes) · EA · GW

If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I'd still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.

Comment by casebash on Movement Collapse Scenarios · 2019-08-27T14:12:28.149Z · score: 23 (10 votes) · EA · GW

I'm most concerned about attempts to politicise the movement as unlike most of the other risks, this risk is adversarial. EA has to thread the needle of operating and maintaining our reputation in a politicised environment without letting this distort our way of thinking.

Comment by casebash on casebash's Shortform · 2019-08-27T08:05:30.518Z · score: 2 (1 votes) · EA · GW

I suspect that it could be impactful to study, say, a master's in AI or computer science even if you don't really need it. University provides one of the best opportunities to meet and deeply connect with people in a particular field, and I'd be surprised if you couldn't persuade at least a couple of people of the importance of AI safety without really trying. On the other hand, if you went in with the intention of networking as much as possible, I think you could have much more success.

Comment by casebash on Effective Altruism London Strategy 2019 · 2019-08-22T22:42:18.108Z · score: 3 (2 votes) · EA · GW

Interesting reading your strategy, particularly what you aren't focusing on. The one part I'd be somewhat skeptical of is decreasing upskilling. People, particularly the people that we want to join our community, want to grow and improve. It's important to be realistic about how much someone can upskill in a limited amount of time, but these kinds of events seem like a key draw.

Comment by casebash on casebash's Shortform · 2019-08-21T11:17:33.038Z · score: 5 (3 votes) · EA · GW

One of the vague ideas spinning around in my head is that, in addition to EA, which is a fairly open, loosely co-ordinated, big-tent movement with several different cause areas, there would also be value in a more selective, tightly co-ordinated, narrow movement focusing just on the long-term future. Interestingly, this would be an accurate description of some EA orgs, with the key difference being that these orgs tend to rely on paid staff rather than volunteers. I don't have a solid idea of how this would work, but just thought I'd put it out there...

Comment by casebash on Why has poverty worldwide fallen so little in recent decades outside China? · 2019-08-08T01:59:48.604Z · score: 3 (2 votes) · EA · GW

That is pretty concerning. I would love an explanation of this as well!

Comment by casebash on What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) · 2019-08-04T22:23:34.205Z · score: 18 (9 votes) · EA · GW

I'm strongly in favour of creating a fellowship with a fancy name and website in order to allow people to build career capital, or at least to make accepting these fellowships not a step backwards. EA Grant doesn't exactly sound prestigious.

Comment by casebash on Four practices where EAs ought to course-correct · 2019-07-31T03:28:58.899Z · score: 8 (7 votes) · EA · GW

"I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose) or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes, this is not a problem)" - Really not that easy. A tennis racket? Not like banning drones stops someone flying a drone from somewhere else. And political leaders sure you can speak behind the glass, but are you going to spend your whole life behind a screen?

Maybe EA should grow more, but I don't think that the issue is that we are "not ruthless enough". Instead I'd argue that meta is currently undervalued, at least in terms of donations.

Comment by casebash on The EA Forum is a News Feed · 2019-07-29T07:03:51.548Z · score: 8 (4 votes) · EA · GW

People often assume that tagging is strictly better than sub-forums because it is more flexible, but categories have advantages too. For one, it is easier to filter them out, since there are fewer categories than tags. Additionally, if you visit one category and then another, you are less likely to see duplicate posts.

Comment by casebash on What posts you are planning on writing? · 2019-07-24T08:59:07.339Z · score: 4 (3 votes) · EA · GW

Wow, they all sound so fascinating!

Comment by casebash on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-17T01:24:51.786Z · score: 4 (4 votes) · EA · GW

"What if you share short-termists’ skepticism of weird claims and hypothetical risks, but you’re willing to focus on first-principles reasoning and work on a long time scale?" - then you'd probably focus on nuclear which isn't at all hypothetical

Comment by casebash on Rationality, EA and being a movement · 2019-07-12T02:24:27.320Z · score: 2 (1 votes) · EA · GW

Here's the link: https://meaningness.com/geeks-mops-sociopaths

Comment by casebash on Rationality, EA and being a movement · 2019-07-11T14:53:08.991Z · score: 4 (2 votes) · EA · GW

Sorry, I can't respond to this in detail because the conversation was a while back. Further, I don't have independent confirmation of any of the factual claims.

I could PM you one name they mentioned for point three, but out of respect for their privacy I don't want to post it publicly. Regarding point four, they mentioned an article as a description of the dynamic they were worried about.

In terms of resources being directed to something that is not the mission, I can't remember what was said by these particular people, but I can list the complaints I've heard in general: circling, felon voting rights, the dispute over meat at EAG, copies of HPMoR. Since this is quite a wide spread of topics, this probably doesn't help at all.

Comment by casebash on Rationality, EA and being a movement · 2019-07-11T14:22:25.506Z · score: 2 (1 votes) · EA · GW

"EAs seem to mostly interact with research groups and non-profits" - They were talking more about the kinds of people who are joining effective altruism than the groups we interact with