Posts

Should EA be explicitly long-termist or uncommitted? 2022-01-11T08:56:39.015Z
Sydney AI Safety Fellowship 2021-12-02T07:35:00.188Z
List of AI safety courses and resources 2021-09-06T14:26:42.397Z
The Sense-Making Web 2021-01-04T23:21:57.226Z
Effective Altruism and Rationalist Philosophy Discussion Group 2020-09-16T02:46:19.168Z
Mike Huemer on The Case for Tyranny 2020-07-16T09:57:13.701Z
Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? 2020-07-01T23:32:22.016Z
Making Impact Purchases Viable 2020-04-17T23:01:53.273Z
The World According to Dominic Cummings 2020-04-14T23:52:37.334Z
The Hammer and the Dance 2020-03-20T19:45:45.706Z
Inward vs. Outward Focused Altruism 2020-03-04T02:05:01.848Z
EA should wargame Coronavirus 2020-02-12T04:32:02.608Z
Why were people skeptical about RAISE? 2019-09-04T08:26:52.654Z
casebash's Shortform 2019-08-21T11:17:32.878Z
Rationality, EA and being a movement 2019-06-22T05:22:42.623Z
Most important unfulfilled role in the EA ecosystem? 2019-04-05T11:37:00.294Z
A List of Things For People To Do 2019-03-08T11:34:43.164Z
What has Effective Altruism actually done? 2019-01-14T14:07:50.062Z
If You’re Young, Don’t Give To Charity 2018-12-24T11:55:42.798Z
Rationality as an EA Cause Area 2018-11-13T14:48:25.011Z
Three levels of cause prioritisation 2018-05-28T07:26:32.333Z
Viewing Effective Altruism as a System 2017-12-28T10:09:43.004Z
EA should beware concessions 2017-06-14T01:58:47.207Z
Reasons for EA Meetups to Exist 2016-07-20T06:22:39.675Z
Population ethics: In favour of total utilitarianism over average 2015-12-22T22:34:53.087Z

Comments

Comment by casebash on saulius's Shortform · 2022-01-19T16:01:21.653Z · EA · GW

Interesting idea. I think this could be useful in cases where people know that they don't have the credibility to receive a direct grant.

Comment by casebash on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-01-18T23:57:49.338Z · EA · GW

Even without a singularity, a scenario with no unexpected power upsets seems a bit implausible.

Comment by casebash on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-01-17T22:54:29.812Z · EA · GW

Yeah, it seems to require quite distinct skills. That said, they seem to be encouraging collaboration.

Comment by casebash on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-01-17T22:53:01.483Z · EA · GW

Yeah, it seems strange to be forced to adopt a scenario where the development of AGI doesn't create some kind of surprising upset in terms of power.

I suppose a contest that included a singularity might seem too far out for most people, and maybe this is the best we can do in terms of persuading people to engage with these ideas. (There's definitely a risk that people over-update on these kinds of scenarios, but it's not obvious that this would be a huge problem.)

Comment by casebash on Uses of EA Retreats, Case Study UK Uni Organisers’ Retreat Dec 2021 · 2022-01-17T13:28:15.442Z · EA · GW

I guess that makes sense.

I suppose organising such a bootcamp is probably one of the most useful things that national-level organisers could be doing.

Comment by casebash on Uses of EA Retreats, Case Study UK Uni Organisers’ Retreat Dec 2021 · 2022-01-17T10:34:52.987Z · EA · GW
Intro fellowship facilitator -> run a social -> committee member

Seems like you could let someone run a social essentially straight off, as it's pretty hard to mess up a social.

That said, I agree with your core point that it's important to provide people with exciting opportunities when they're most enthusiastic:

This takes time and there's dropout at every stage. The observation is that organisers are usually the most motivated after a retreat/conference/...

That said, your ideas for sessions all sound really useful:

Some ideas for sessions: how to 1-1s, facilitation training, mental health, pitches for EA short and long, people management & project delegation, Personal productivity, Effective planning, Movement building strategy and strategic prioritization for groups, creating positive epistemic norms, "Agenticness" (as explained in my post), how to trade money for time

I guess my main skepticism is the following:

This seems doubly useful since other organisers seldom have time to skill up new organisers

Running a retreat takes a lot of effort and would likely involve multiple people, so I don't see you coming out ahead here. That said, I expect you'd end up with more highly trained organisers, both because of the increased training time for each organiser and because of the peer-to-peer exchange of ideas.

Comment by casebash on Uses of EA Retreats, Case Study UK Uni Organisers’ Retreat Dec 2021 · 2022-01-17T00:58:23.674Z · EA · GW

"I personally am seriously thinking about running a “bootcamp” for new organisers, fellowship facilitators, etc. as a direct result of the retreat. I’ve spoken to Jessica McCurdy from CEA about this and there’s a ~50% chance I’ll actually do it"

I'd be curious to hear more about this idea. What's the plan?

Comment by casebash on Existential Risk Observatory: results and 2022 targets · 2022-01-15T11:01:12.054Z · EA · GW

I think that EAs generally haven't pursued media outreach due to considerations such as those covered in this post: What to know before talking with journalists about EA. The worries seem mostly to be about journalists misunderstanding or misrepresenting what was said, unfavorable quotes, or stories being fitted into a narrative.

I suppose op-eds manage to avoid most of these problems and add a lot of credibility to the field. The main potential downside I can see is that we wouldn't want existential risk to become a buzzword that people start attaching to all kinds of proposals that have nothing to do with x-risk. However, it seems unlikely that just a couple of articles would have that effect. So overall, I think at least a small amount of this kind of work is important.

Comment by casebash on Funds are available to fund non-EA-branded groups · 2022-01-15T09:07:43.783Z · EA · GW

I find that surprising. Any thoughts on why that might be? Do you think that groups don't know that they can apply or that most groups aren't really doing much in the way of activities that would benefit from funding?

Comment by casebash on We Should Run More EA Contests · 2022-01-15T08:16:21.251Z · EA · GW

I think the forum prize should have focused on EAs who aren't at orgs, because EAs at orgs are already sufficiently incentivised to do good work, and when the prizes are dominated by people already at orgs, this dilutes the prizes' ability to highlight and encourage new talent.

Comment by casebash on We Should Run More EA Contests · 2022-01-13T01:00:25.026Z · EA · GW

Have you seen this post?

Emerson Spartz recently ran a similar contest where he was paying people $1000 to come up with bounties.

Here are a few ideas I shared in this thread.

I really like the second idea. The first one isn't bad either, but it's a bit of a letdown if you don't actually have the $20,000.

Comment by casebash on Why is Operations no longer an 80K Priority Path? · 2022-01-12T06:45:14.059Z · EA · GW

I guess if you start setting the standard that high, that might lead to far too many jobs becoming priority paths.

Comment by casebash on Should EA be explicitly long-termist or uncommitted? · 2022-01-12T06:33:36.411Z · EA · GW

Tbh, I don't have a huge amount of desire to produce more content on this topic beyond this post.

Comment by casebash on Why is Operations no longer an 80K Priority Path? · 2022-01-11T18:19:51.265Z · EA · GW

I'd be curious to know the kind of salary those orgs were offering, as a salary significantly below market rate might explain the discrepancy. Alternatively, maybe well-known and long-established orgs are flooded with applications, while newer ones have slimmer pickings?

Comment by casebash on Should EA be explicitly long-termist or uncommitted? · 2022-01-11T17:52:54.779Z · EA · GW
So I think it's likely that EA efforts with cost-effectiveness comparable or higher than GiveWell top charities will continue to be funded going forwards, rather than "have the rug pulled out from underneath them."

Yeah, some parts of this discussion are more theoretical than practical, and I probably should have highlighted this. Nonetheless, I think it's easy to make the mistake of saying "We'll never get to point X" and then having no idea what to do if you actually get there. If the prominence of long-termism keeps growing within EA, who knows where we'll end up?

So from a moral uncertainty/trade perspective, it makes a lot of sense for EA to dump lots of $s (and relatively little oversight) into shovel-ready neartermism projects, while focusing the limited community building, vetting, etc capacity on longtermism projects.

This is an excellent point and now that you've explained this line of reasoning, I agree.

I guess it's not immediately clear to me to what extent my proposals would shift limited community-building and vetting capacity away from long-termist projects. If, for example, Giving What We Can had additional money, it's not clear to me that they would hire someone who would otherwise go to work at a long-termist organisation, although it's certainly possible.

I guess it just seems to me that even though there are real human capital and vetting bottlenecks, you can work around them to a certain extent if you're willing to just throw money at the issue. There has to be something that's the equivalent of GiveDirectly for long-termism.

Comment by casebash on Introducing Effective Self-Help · 2022-01-07T05:08:05.289Z · EA · GW
For example, Clearer Thinking runs its own studies for almost every article it writes

Wouldn't that be extremely expensive?

Comment by casebash on Introducing Effective Self-Help · 2022-01-06T16:14:02.424Z · EA · GW

I'm very keen to see how this project goes.

I think it's certainly going to be something of a challenge given how many high-quality resources are out there.

Two thoughts:

• If this project starts becoming influential within EA, then it would be worthwhile paying experts to comment on and review the articles

• It might be worth running a survey to see which alternative resources EAs are most likely to use instead, and treating those as the bar you need to exceed

Comment by casebash on Funds are available to fund non-EA-branded groups · 2022-01-05T08:46:41.481Z · EA · GW

Hey Buck, I guess I'm curious because you linked to the EAIF form at the bottom, but the latest payout report didn't include any payouts to Less Wrong or ACX groups. Perhaps you could clarify?

Comment by casebash on Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism · 2022-01-05T08:33:21.796Z · EA · GW

Oh, here's one thing that I missed:

Funds are available to fund non-EA branded groups

Comment by casebash on Get fast funding for your student group · 2022-01-05T08:19:47.083Z · EA · GW

Since CEA is willing to fund a lot of these expenses too, I guess I'm curious why the Global Challenges Project is offering this as well and whether there's any effort to avoid doubling up.

Is the idea that it's better to have a one-stop shop for all of the support needs of student groups?

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2022-01-05T07:11:43.676Z · EA · GW

I'd be very keen to hear what you're planning and to provide feedback.

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2022-01-05T07:09:59.435Z · EA · GW

I definitely think it's important to pay attention to language when a simple substitution can avoid issues. Maybe it'd be better to use the word "evaluation" or "stewardship" rather than "gatekeeping"?

"High-impact" might also be a good substitute for "elite".

However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons, for example by imposing cognitive/jargon costs on everyone.

I would suggest using contentious words when substitutes would significantly impede communication or obscure the point being made, but otherwise being flexible.

Comment by casebash on Do you use the EA Wiki? · 2022-01-04T17:33:45.281Z · EA · GW

Well, even if you implement that model, I still think it'd be important to keep tags and for those tags to be able to have descriptions.

Comment by casebash on The phrase “hard-core EAs” does more harm than good · 2022-01-04T06:46:57.584Z · EA · GW

"Super bought-in EAs" Drank the kool-aid EAs.”

I appreciate you raising these concerns, but I doubt these terms will ever catch on. Unfortunately, they're too long and awkward-sounding for casual contexts and too informal for formal contexts.

I agree with Peter Wildeford that "highly engaged EAs" is better for formal contexts. It does have a slightly different meaning, as "hard-core" suggests more ideological conformity than "highly engaged"; however, I would suggest that in most contexts the latter is what we want to focus on. For a start, it's easier to ascertain, but even more importantly, we want people who think for themselves.

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-31T09:24:03.671Z · EA · GW

Although I don't think they have mandatory volunteering?

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T10:37:19.664Z · EA · GW

Perhaps, although it may also increase the number of people working uncompensated.

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T08:23:05.700Z · EA · GW

So thinking about how this works overall:

  • The government is providing a 20% subsidy
  • Any artist who receives insurance from their employer subsidises those who don't
  • Artists outside of the KSK (either because they've not been very successful or they choose not to join) subsidise those inside
  • Highly profitable artists subsidise the less profitable ones (the egalitarian component likely works better here, as there is more variation in what artists earn; however, EAs are probably happier to cross-subsidise each other)
  • People hiring artists have to complete additional paperwork
  • The total compensation for artists is likely slightly higher because people forget about these additional costs when considering what they can pay

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T06:46:45.871Z · EA · GW

The Nonlinear Fund is working on addressing this problem for people in AI safety. (My guess is that they'll start with people at orgs, then possibly expand to people on certain grants; I interned there a while ago, so I don't know the current plan.)

Gavin Li is working on EA Offroad for people "not constituted for college" or who would find attending college challenging due to their financial position.

I would really like to see the establishment of more EA hubs in more affordable cities. I think the financial challenges a lot of people are facing are the result of trying to support themselves in some of the world's most expensive cities. That said, there seem to be a few projects starting in this space, so I would probably encourage people to support existing projects rather than starting more.

I'm not exactly sure of the scope of Magnify Mentoring (previously WANBAM), but it might be able to provide some support in helping people figure out their lives. If not, then perhaps someone should create a mentoring service more focused on helping people improve their lives.

Further ideas:

  • Bountied Rationality - I'm sure there are a lot of small, useful, and accessible tasks to do. Perhaps someone should apply for funding in order to post more bounties there. (Argument against: bounties are generally winner-takes-all, so they can easily result in people burning up a lot of time without receiving any money in return.)
  • On a similar but slightly different note, the AI Safety Fundamentals course is now paying facilitators $1000. Having more of these kinds of opportunities available seems positive.
  • Programming bootcamps - a lot of EAs are capable of becoming programmers and this could provide a path to financial stability.
  • Some kind of peer support project with group facilitators receiving training from professionals.
  • Something like Y Couchinator to help EAs share their free rooms.
  • Exit grants. In some circumstances, it might make sense to award exit grants to people who were funded/employed productively for a reasonable period but have now become unproductive. These grants should probably be awarded privately with only the total number of grants and dollar value reported.

Final thoughts:

Given all the excellent points you make about the challenges of such a fund, I believe it's important to have a wide variety of other means of support. Nonetheless, I suspect that a more traditional assistance organisation would be valuable, so long as there was proper communication about its role, specifically the limits on how much support it could provide and the fact that it wouldn't be able to help everyone.

Comment by casebash on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T05:50:45.870Z · EA · GW

That's a shame to hear. Is there a write-up anywhere?

Comment by casebash on The Altruist - Proposal for an EA Newspaper · 2021-12-30T03:37:57.349Z · EA · GW

I agree that a trade publication could potentially be valuable - my main issue is that previous attempts at outreach to the existing philanthropic community haven't been very successful. Nonetheless, I suspect that if it were able to produce sufficiently high-quality content - including content on topics outside the traditional EA areas - it would gain readers and influence.

Comment by casebash on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-30T03:29:41.564Z · EA · GW

One downside of considering this question in the abstract is that it downplays the crucial issue of trust. The greater the trust between the population and the experts, the more willing the population will be to accept more technocracy.

Comment by casebash on Field-specific LE (Longterm Entrepreneurship) · 2021-12-23T11:43:10.813Z · EA · GW

I'd be keen to hear about this as well.

Comment by casebash on A huge opportunity for impact: movement building at top universities · 2021-12-22T17:16:20.048Z · EA · GW

"Running a mini-conference every week (one group has done this already - they have coworking, seminar programmes, a talk, and a social every week of term, and it seems to have been very good for engagement, with attendance regularly around 70 people). I could imagine this being even bigger if there were even more concurrent ‘tracks’"

Which group was this?

Comment by casebash on Are there any current students at University of Washington? · 2021-12-22T16:27:10.153Z · EA · GW

Thanks!

Comment by casebash on Are there any current students at University of Washington? · 2021-12-22T15:36:19.834Z · EA · GW

I've never heard of the University Group Accelerator Program. Could you tell me more about it?

Comment by casebash on The Effective Altruism Handbook · 2021-12-22T09:24:01.643Z · EA · GW

Oh, I feel silly, I missed that!

Comment by casebash on The Effective Altruism Handbook · 2021-12-22T09:15:13.820Z · EA · GW

Great to see this exists, but I'd love to see better navigation - it's difficult to jump from a post back to the index, and when you get to the end of a post, you have to scroll all the way up and look for the hard-to-see arrow to move to the next post.

Comment by casebash on Aiming for the minimum of self-care is dangerous · 2021-12-20T17:59:51.637Z · EA · GW

This, but for the community overall and not just individuals.

Comment by casebash on EA Internship & Research Opportunities for Undergraduates · 2021-12-20T16:03:32.599Z · EA · GW

In what venue are you doing this science communication? Is it online or in person?

Comment by casebash on A huge opportunity for impact: movement building at top universities · 2021-12-20T13:10:06.206Z · EA · GW

Australia doesn't really have elite universities (at least in terms of undergraduate admissions) in the same sense as the US. There's no university in Australia that impresses people simply because you went there, and no university that is hard to get into if, for example, you just want to do a basic arts degree.

That said, I suspect Sydney University would be a pretty good university to target at some point, because it's one of the best universities in the world, if not the best, for (English-language) debating.

Comment by casebash on Supporting Video, Audio, and other non-text media on the Forum · 2021-12-20T12:54:24.282Z · EA · GW

I would prefer to experiment with a different, more casual site. Perhaps at some stage we might decide to port some of the features over to the forum, but I think it would make more sense to keep it separate for now.

Comment by casebash on EA conferences in 2022: save the dates · 2021-12-16T08:17:50.759Z · EA · GW

Yeah, I think it makes more sense to schedule the virtual events on different dates from the main conferences, as I know that if I'd paid money to fly out to a conference, I would be focusing heavily on in-person meetings.

Comment by casebash on EA conferences in 2022: save the dates · 2021-12-16T08:15:30.796Z · EA · GW

It's been like that for a while.

Comment by casebash on External Evaluation of the EA Wiki · 2021-12-14T00:56:17.201Z · EA · GW

I guess one of my key questions would be why the old Less Wrong Wiki - while not fantastic - seemed kind of useful.

Comment by casebash on Potential EA NYC Coworking Space · 2021-12-07T07:03:22.445Z · EA · GW

Have you talked to the other people running coworking spaces? There's one being set up in London, another in Berlin, and the Bay Area has had one for a while.

Comment by casebash on Retrospective on the Summer 2021 AGI Safety Fundamentals · 2021-12-07T06:24:12.271Z · EA · GW

One of the key benefits I see from this program is that it establishes a baseline of knowledge for EAs interested in AI safety, which other programs can build upon.

Comment by casebash on Does anyone have a list of summer internship opportunities that are a particularly good fit for EAs? · 2021-12-03T15:26:44.445Z · EA · GW

Nice! How are you planning to differentiate it from https://www.eawork.club/ ?

Comment by casebash on EA megaprojects continued · 2021-12-03T13:44:18.970Z · EA · GW

I would be surprised if it were worthwhile building an entire university with all the normal departments, but I could see value in one that offered specialist master's degrees you can't obtain elsewhere, such as a Master's in AI Safety.

Comment by casebash on What are the most impactful roles that EAs are currently not entering (and why)? · 2021-12-03T09:24:29.568Z · EA · GW

AI ethicists and bioethicists. Covid has demonstrated how people in these roles can really mess things up if they spout nonsense, and I think we should assume that the same applies to AI as well.

Comment by casebash on Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go? · 2021-12-02T12:28:29.806Z · EA · GW

I guess when I think about existing charities, a lot of them have perverse incentives to do things that attract funding rather than fix the problem, even without these bonuses.

On the other hand, I'm keen to see staff paid fairly, and I think people are more likely to consider working somewhere long-term if they see that there's a possibility of this.