EA should wargame Coronavirus 2020-02-12T04:32:02.608Z · score: 21 (10 votes)
Why were people skeptical about RAISE? 2019-09-04T08:26:52.654Z · score: 14 (6 votes)
casebash's Shortform 2019-08-21T11:17:32.878Z · score: 6 (1 votes)
Rationality, EA and being a movement 2019-06-22T05:22:42.623Z · score: 31 (22 votes)
Most important unfulfilled role in the EA ecosystem? 2019-04-05T11:37:00.294Z · score: 14 (4 votes)
A List of Things For People To Do 2019-03-08T11:34:43.164Z · score: 41 (28 votes)
What has Effective Altruism actually done? 2019-01-14T14:07:50.062Z · score: 29 (14 votes)
If You’re Young, Don’t Give To Charity 2018-12-24T11:55:42.798Z · score: 17 (10 votes)
Rationality as an EA Cause Area 2018-11-13T14:48:25.011Z · score: 22 (26 votes)
Three levels of cause prioritisation 2018-05-28T07:26:32.333Z · score: 8 (15 votes)
Viewing Effective Altruism as a System 2017-12-28T10:09:43.004Z · score: 21 (21 votes)
EA should beware concessions 2017-06-14T01:58:47.207Z · score: 1 (11 votes)
Reasons for EA Meetups to Exist 2016-07-20T06:22:39.675Z · score: 11 (11 votes)
Population ethics: In favour of total utilitarianism over average 2015-12-22T22:34:53.087Z · score: 0 (0 votes)


Comment by casebash on Final update on EA Norway's Operations Project · 2020-01-12T12:32:31.026Z · score: 2 (1 votes) · EA · GW

What's Good Growth?

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T20:33:24.639Z · score: 12 (8 votes) · EA · GW

Yeah, EA is likely less compelling when "compelling" is defined as feeling motivating or interesting to the average person at the moment, although it is hard to judge since EA hasn't been around for anywhere near as long. Many of the issues EAs care about seem far too weird for the average person. Then again, if you look at feminism, a lot of its ideas were only ever present in an overly academic form; part of the reason they are so influential now is that they have filtered down into the general population in simpler forms (such as "girl power" or "feeling good, rationality bad"). Plus, social justice is more likely to benefit its supporters in the here and now, whereas EA focuses more on other countries, other species and other times, which is always a tough sell.

SJ is an extremely inclusive movement (basically by definition)

I'm generally wary of arguments by definition. Indeed, SJ is very inclusive towards members of racial minorities or people who are LGBTI, but very much not when it comes to ideological diversity. And some strands can be very unwelcoming to members of majority groups. So it's much more complex than that.

Comment by casebash on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T17:14:04.571Z · score: 6 (4 votes) · EA · GW

"There are definitely many who see these more in the movement/tribe sense" - For modern social justice, this tends to focus on who is a good or bad person, while for EA it tends to focus more on who to trust. (There's a strand of thought within social justice that says we shouldn't blame individuals for systemic issues, but it's relatively rare.) EA makes some efforts towards being anti-tribal, while social justice is less worried about the downsides of tribalism.

Comment by casebash on Updates from Leverage Research: history, mistakes and new focus · 2019-11-26T10:46:39.051Z · score: 8 (5 votes) · EA · GW

Greater knowledge of psychology would be powerful, but why should we expect the sign to be positive, instead of, say, making the world worse by improving propaganda and marketing?

Comment by casebash on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T22:53:40.322Z · score: 24 (12 votes) · EA · GW

Why is Leverage working on psychology? What is it hoping to accomplish?

Comment by casebash on "EA residencies" as an outreach activity · 2019-11-18T00:50:34.540Z · score: 6 (4 votes) · EA · GW

This seems like a good idea, and it's definitely something I'd consider once I learn enough about AI for this to be valuable to others.

Comment by casebash on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T08:34:07.988Z · score: 4 (2 votes) · EA · GW

"It’s not clear that advanced artificial intelligence is going to arrive any time within the next several decades" - On the other hand, it seems, at least to me, most likely that it will. Even if several more breakthroughs are required to reach general intelligence, those may still come relatively fast: deep learning has finally become useful in a wide enough array of applications that there is orders of magnitude more money and talent in the field than ever before. That by itself wouldn't necessarily guarantee fast advancement, but AI research is still the kind of area where a single individual can push the field forward significantly on their own. And governments are beginning to realise the strategic importance of AI, so even more resources are flooding in.

"One of the top AI safety organizations, MIRI, has now gone private so now we can’t even inspect whether they are doing useful work." - This is not an unreasonable choice, and we still have their past record to go on. Nonetheless, there are more open organisations if transparency is important to you.

"Productive AI safety research work is inaccessible to over 99.9% of the population, making this advice almost useless to nearly everyone reading the article." - Not necessarily. Even if becoming good enough to be a researcher is very hard, it probably isn't nearly as hard to become good enough in a particular area to help mentor other people.

Comment by casebash on How can I hire an EA research assistant? · 2019-10-19T20:41:05.765Z · score: 5 (4 votes) · EA · GW

I'm definitely in favour of this kind of project since I feel more EAs should be experimenting with small projects.

Comment by casebash on Shapley values: Better than counterfactuals · 2019-10-11T21:29:16.794Z · score: 2 (1 votes) · EA · GW

"The situation seems pretty symmetric, though: if a politician builds roads just to get votes, and an NGO steps in and does something valuable with that, the politician's counterfactual impact is still the same as the NGO's" - True, but under Shapley the NGO's share of the credit is reduced in a case where I feel it's fairer for the NGO to be able to claim the full amount (though of course you'd never know the government's true motivations in real life).

Comment by casebash on Shapley values: Better than counterfactuals · 2019-10-11T08:45:35.124Z · score: 16 (8 votes) · EA · GW

The order indifference of Shapley values only makes sense from a perspective of perfect knowledge of what other players will do. Without that knowledge, a party that spent a huge amount of money on a project that was almost certainly going to be wasteful, and that was only saved when another party appeared by sheer happenstance, was not making good spending decisions. Similarly, many agents won't be optimising for Shapley value - say, a government which spends money on infrastructure just to win political points, not caring whether it will be used - so they don't properly deserve a share of the gains when someone else intervenes to make the project actually effective.

I feel that this article presents Shapley value as just plain superior, when a combination of both Shapley value and counterfactual value would likely be a better metric. Beyond this, what you really want is something more like FDT, where you take into account the fact that the decisions of some agents are subjunctively linked to yours and the decisions of others aren't. Even though my current view is that very, very few agents are actually subjunctively linked to you, I suspect that thinking about problems in this fashion is likely to work reasonably well in practice (I would need to dedicate a solid couple of hours to write out my reasons for believing this more concretely).
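To make the order-indifference point concrete, here is a minimal sketch (in Python; the player names and payoff numbers are purely hypothetical) of computing Shapley values by averaging each player's marginal contribution over every ordering of the players:

```python
from itertools import permutations

def shapley_values(players, v):
    """Compute Shapley values by averaging each player's marginal
    contribution to the coalition over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)  # marginal contribution
            coalition = with_p
    return {p: total / len(orderings) for p, total in totals.items()}

# Toy version of the roads example: neither the government's road nor the
# NGO's programme creates value alone; together they produce 100 units.
def v(coalition):
    return 100.0 if coalition == {"government", "ngo"} else 0.0

print(shapley_values(["government", "ngo"], v))
# → {'government': 50.0, 'ngo': 50.0}
```

Each party's naive counterfactual impact here is the full 100 (remove either one and the project produces nothing), while Shapley splits the credit 50/50 regardless of who moved first - which is exactly the order indifference at issue.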

Comment by casebash on casebash's Shortform · 2019-09-15T00:22:13.127Z · score: 12 (8 votes) · EA · GW

If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I'd still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.

Comment by casebash on Movement Collapse Scenarios · 2019-08-27T14:12:28.149Z · score: 23 (10 votes) · EA · GW

I'm most concerned about attempts to politicise the movement since, unlike most of the other risks, this one is adversarial. EA has to thread the needle of operating in a politicised environment and maintaining our reputation there without letting this distort our way of thinking.

Comment by casebash on casebash's Shortform · 2019-08-27T08:05:30.518Z · score: 2 (1 votes) · EA · GW

I suspect that it could be impactful to study, say, a master's in AI or computer science even if you don't really need it. University provides one of the best opportunities to meet and deeply connect with people in a particular field, and I'd be surprised if you couldn't persuade at least a couple of people of the importance of AI safety without really trying. If you went in with the intention of networking as much as possible, I think you could have much more success.

Comment by casebash on Effective Altruism London Strategy 2019 · 2019-08-22T22:42:18.108Z · score: 3 (2 votes) · EA · GW

Interesting reading your strategy, particularly what you aren't focusing on. The one part I'd be somewhat skeptical of is decreasing upskilling. People, particularly the people that we want to join our community, want to grow and improve. It's important to be realistic about how much someone can upskill in a limited amount of time, but these kinds of events seem like a key draw.

Comment by casebash on casebash's Shortform · 2019-08-21T11:17:33.038Z · score: 5 (3 votes) · EA · GW

One of the vague ideas spinning around in my head is that, in addition to EA - a fairly open, loosely co-ordinated, big-tent movement with several different cause areas - there would also be value in a more selective, tightly co-ordinated, narrow movement focusing just on the long-term future. Interestingly, this would be an accurate description of some EA orgs, the key difference being that those orgs tend to rely on paid staff rather than volunteers. I don't have a solid idea of how this would work, but I just thought I'd put this out there...

Comment by casebash on Why has poverty worldwide fallen so little in recent decades outside China? · 2019-08-08T01:59:48.604Z · score: 3 (2 votes) · EA · GW

That is pretty concerning. I would love an explanation of this as well!

Comment by casebash on What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) · 2019-08-04T22:23:34.205Z · score: 18 (9 votes) · EA · GW

I'm strongly in favour of creating a fellowship with a fancy name and website in order to allow people to build career capital; or at least make accepting these fellowships not a step backwards. EA Grant doesn't exactly sound prestigious.

Comment by casebash on Four practices where EAs ought to course-correct · 2019-07-31T03:28:58.899Z · score: 8 (7 votes) · EA · GW

"I have seen EAs circulate half-baked fears of suicide drones making it easy to murder people (just carry a tennis racket, or allow municipalities to track or ban drone flights if they so choose) or assassinate political leaders (they already speak behind bullet-resistant plexiglass sometimes, this is not a problem)" - It's really not that easy. A tennis racket? And banning drones doesn't stop someone flying one in from somewhere else. As for political leaders: sure, you can speak behind glass, but are you going to spend your whole life behind a screen?

Maybe EA should grow more, but I don't think that the issue is that we are "not ruthless enough". Instead I'd argue that meta is currently undervalued, at least in terms of donations.

Comment by casebash on The EA Forum is a News Feed · 2019-07-29T07:03:51.548Z · score: 8 (4 votes) · EA · GW

People often assume that tagging is strictly better than sub-forums because it is more flexible, but categories have advantages too. For one, they are easier to filter out, since there are fewer categories than tags. Additionally, if you visit one category and then another, you are less likely to see duplicate posts.

Comment by casebash on What posts you are planning on writing? · 2019-07-24T08:59:07.339Z · score: 4 (3 votes) · EA · GW

Wow, they all sound so fascinating!

Comment by casebash on "Why Nations Fail" and the long-termist view of global poverty · 2019-07-17T01:24:51.786Z · score: 4 (4 votes) · EA · GW

"What if you share short-termists’ skepticism of weird claims and hypothetical risks, but you’re willing to focus on first-principles reasoning and work on a long time scale?" - Then you'd probably focus on nuclear risk, which isn't at all hypothetical.

Comment by casebash on Rationality, EA and being a movement · 2019-07-12T02:24:27.320Z · score: 2 (1 votes) · EA · GW

Here's the link:

Comment by casebash on Rationality, EA and being a movement · 2019-07-11T14:53:08.991Z · score: 4 (2 votes) · EA · GW

Sorry, I can't respond to this in detail, because the conversation was a while back. Further, I don't have independent confirmation on any of the factual claims.

I could PM you one name they mentioned for point three, but out of respect for their privacy I don't want to post it publicly. Regarding point four, they mentioned an article as a description of the dynamic they were worried about.

In terms of resources being directed to something that is not the mission, I can't remember what was said by these particular people, but I can list the complaints I've heard in general: circling, felon voting rights, the dispute over meat at EAG, copies of HPMoR. Since this is quite a wide spread of topics, this probably doesn't help at all.

Comment by casebash on Rationality, EA and being a movement · 2019-07-11T14:22:25.506Z · score: 2 (1 votes) · EA · GW

"EAs seem to mostly interact with research groups and non-profits" - They were talking more about the kinds of people who are joining effective altruism than the groups we interact with

Comment by casebash on Announcing plans for a German Effective Altruism Network focused on Community Building · 2019-07-04T21:51:47.655Z · score: 5 (3 votes) · EA · GW

What's EAF focusing on now and why did it decide to deprioritise community building?

Comment by casebash on Leverage Research shutting down? · 2019-07-04T21:33:58.401Z · score: 11 (9 votes) · EA · GW

This doesn't really seem to be a question, just a statement. It might be worthwhile to make the question explicit.

Comment by casebash on [deleted post] 2019-06-24T08:33:11.038Z

I agree; this is a duplicate

Comment by casebash on Rationality, EA and being a movement · 2019-06-23T06:54:47.044Z · score: 2 (1 votes) · EA · GW

I can't say exactly what the people I was talking about meant since I don't want to put words in their mouth, but controversial figures was likely at least part of it.

Comment by casebash on Information security careers for GCR reduction · 2019-06-21T08:41:39.894Z · score: 15 (8 votes) · EA · GW

Happy to see this post. Definitely feels like security issues have received insufficient attention.

Comment by casebash on What new EA project or org would you like to see created in the next 3 years? · 2019-06-14T13:38:02.830Z · score: 2 (1 votes) · EA · GW

Interesting, unfortunately teammates are only available on the $500 per month version :-(

Comment by casebash on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:42:10.952Z · score: 4 (3 votes) · EA · GW

Suggesting a tried and true model is a major plus!

Comment by casebash on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:35:42.672Z · score: 8 (6 votes) · EA · GW

A URL shortening service. It would be nice if, for example, a short link took you to the group page for Sydney, provided a nice landing page for reading more about cost-effectiveness, and linked to the EA Forum. The Czech EA association currently owns the URL, but no-one has picked up this project yet.

Comment by casebash on What new EA project or org would you like to see created in the next 3 years? · 2019-06-13T14:30:54.996Z · score: 1 (1 votes) · EA · GW

What would you use it for?

Comment by casebash on Can/should we define quick tests of personal skill for priority areas? · 2019-06-11T14:14:32.943Z · score: 3 (2 votes) · EA · GW

I definitely think this is worth experimenting with to see if we can effectively identify those who should pursue a particular path.

Comment by casebash on EA Forum: Footnotes are live, and other updates · 2019-05-21T08:18:48.685Z · score: 3 (2 votes) · EA · GW

Suggestion: Include a footnote in the main post so that we can see one in action.

Also, will this feature be coming to LessWrong as well?

Comment by casebash on What caused EA movement growth to slow down? · 2019-05-12T22:48:38.877Z · score: 16 (9 votes) · EA · GW

Maybe it's just a result of EA deciding to focus on fidelity rather than speed of movement growth, plus decreasing marginal returns on outreach.

Comment by casebash on I want an ethnography of EA · 2019-05-03T11:07:13.922Z · score: 1 (1 votes) · EA · GW

Maybe post this as a separate question?

Comment by casebash on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-26T10:42:09.089Z · score: 20 (11 votes) · EA · GW

I thought I'd share my impressions as someone who has spent significant time at the EA hotel:

I think this makes them a particularly easy and promising target for people who tend to abuse that kind of trust relationship and who are looking for social influence.

Most of the people at the EA hotel have been involved in the movement for a while, so they already have reasonably well-developed positions.

it's plausible that the EA Hotel could form a geographically and memetically isolated group that is predisposed for conflict with the rest of the EA community in a way that could result in a lot of negative-sum conflict.

The EA hotel has a limit of two years of free accommodation (although exceptions might be made). Most people stay only a few months, given that it is in Blackpool and not the most desirable location. Further, there are regular visitors and frequent turnover among the guests. I actually feel more memetically isolated in Australia than I did at the EA hotel, especially since visiting London from Blackpool is relatively easy.

Generally high-dedication cultures are more likely to cause people to overcommit or to take drastic actions that they later regret

None of the projects that I'm aware of being undertaken at the EA hotel seemed especially high risk. Further, whoever runs the check-ins will have an opportunity to steer people away from high-risk projects.

Comment by casebash on Does climate change deserve more attention within EA? · 2019-04-17T12:59:26.292Z · score: 10 (8 votes) · EA · GW

Thanks for writing this. I think it would be good if there was at least some EA investment in climate change so that a) we gain a better understanding of the issue, b) we are in a better position to shift resources in this direction if we receive evidence that it is likely to be worse than we expect, and c) we gain the opportunity to spread EA ideas into the climate change movement.

I'm not proposing a huge amount of investment, but I'd love to see at least some.

Comment by casebash on Political culture at the edges of Effective Altruism · 2019-04-12T13:38:01.200Z · score: 3 (3 votes) · EA · GW

I agree with the commenters that it is worth keeping in mind that some of the political pressure may in fact be correct, but I also feel that this post is valuable because it helps highlight the kinds of pressures we are subject to.

Comment by casebash on Political culture at the edges of Effective Altruism · 2019-04-12T13:35:52.194Z · score: 14 (12 votes) · EA · GW

I think, for good or bad, EA is much more vulnerable to pressure from the left-wing because the institutions we interface with and the locations where most EAs are based lean that way.

Comment by casebash on Most important unfulfilled role in the EA ecosystem? · 2019-04-05T21:22:41.970Z · score: 3 (2 votes) · EA · GW

To what extent do you think people are born that way and to what extent do you think they become it? If they are more born that way, how do we get such people involved? And if they become it, how do we make that happen?

Comment by casebash on Salary Negotiation for Earning to Give · 2019-04-04T22:48:23.232Z · score: 1 (1 votes) · EA · GW

I think this is an excellent idea and someone should pursue it. I'm sure plenty of people have considered paying someone else to manage the negotiation for them, but the risk is always that the fees outweigh the increase in wages over what you would have gotten negotiating by yourself. Here, since the money is going to charity, this risk is much less of a concern: the world has improved even if you don't earn a single extra dollar.

Comment by casebash on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-28T00:14:46.838Z · score: 11 (6 votes) · EA · GW

I'm actually donating to the Patreon, but here are the arguments against that are most persuasive to me:

One argument I've heard is that the EA hotel is a rather expensive way of testing the idea of supporting EAs with low-cost living. It might have been better to start with a smaller-scale experiment, such as a group house; funding the EA hotel may be too costly a way of learning about the potential of such projects.

Another is that the EA hotel should be more selective about who it admits - its current bar is very low - in order to achieve sufficient expected return. Some people may believe that the current approach is unlikely to be cost-effective and that the hotel as currently structured is therefore testing the wrong thing. In that case, spending a few hundred thousand pounds on informational value could be seen as a waste. Worse, after such a failure, funders might be extremely reluctant to fund a similar project that was more selective, so the thing we'd actually want to test might never be tested.

A third option is that people might not want to donate because they don't believe other people will donate. Suppose you believe the hotel needs to run for at least another year before it can build the kind of track record needed to be sustainable, and you have the option to donate one month's worth of funding. Donating one month's worth of operating expenses might allow the hotel to do one month's worth of good regardless of whether it later collapses, so perhaps this worry is irrelevant.

However, there are two ways in which you may be trying to leverage your donation to have more than just direct impact. Firstly, if the hotel survives to the point where it builds up a track record that justifies others funding it, counterfactual value is generated to the extent that the hotel is better than the other opportunities available to those funders, and by allowing this opportunity to exist you would get to claim part of this value. Secondly, we can imagine extreme success scenarios where the hotel turns out to be so successful that the EA community copies the concept around the world. Again, you could claim partial responsibility for this.

But the key point is that if you think other funders won't be forthcoming, you'll miss out on these highly leveraged scenarios. And if those are your reasons for wanting to fund the hotel, you might decide it's best to fund something else instead.

Comment by casebash on What to do with people? · 2019-03-06T21:49:02.890Z · score: 7 (5 votes) · EA · GW

This is an interesting idea, but I'm skeptical, as I think it underestimates the difficulties of co-ordination. GiveWell has had difficulty with volunteers due to unreliability. Another data point is the shift in .IMPACT (now Rethink Charity) from relying on volunteers to relying on paid staff. Volunteer hierarchical organisations would be hit doubly hard by these issues, as they rely on volunteers for both management and object-level work. I would love to be proven wrong though.

Comment by casebash on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T23:31:55.616Z · score: 3 (3 votes) · EA · GW

I'm not saying everyone should go into this, just that a portion should

Comment by casebash on Effective Impact Investing · 2019-03-01T13:41:05.894Z · score: 3 (2 votes) · EA · GW

Impact investing to encourage companies to do more on AI Safety is a particularly fascinating idea. I'm curious how much your influence depends on the number of shares. Obviously if you own 20% of a company you're likely to be heard, but is there much difference between owning 1 share vs. 100?

Comment by casebash on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T13:26:13.826Z · score: 4 (8 votes) · EA · GW

I would suggest that the presence of such a large amount of talent means that projects like the EA hotel are all the more vital, since they increase the amount of talent that can be deployed.

Comment by casebash on Rationality as an EA Cause Area · 2019-02-24T09:10:18.961Z · score: 1 (1 votes) · EA · GW

Yeah, InIn was the main attempt at this. Gleb was able to get a large number of articles published in news sources, but at the cost of quality. And some people felt that this would make people perceive rationality negatively, as well as drawing in people from the wrong demographic. I think he was improving over time, but perhaps too slowly?

PS. Have you seen this?

Comment by casebash on Rationality as an EA Cause Area · 2019-02-23T19:06:38.085Z · score: 1 (1 votes) · EA · GW

I would be surprised to see much activity on a comment on a three month old thread. If you want to pursue this, I'd suggest writing a new post. Good luck, I'd love to see someone pursuing this project!