EA/Rationalist Safety Nets: Promising, but Arduous

post by Ozzie Gooen (oagr) · 2021-12-29T18:41:53.836Z · EA · GW · 41 comments

Contents

  Introduction
  Evidence of the Problem
  Related Infrastructure
  Potential Challenges
  Possible Research Questions
    Questions for individuals who might get into challenging circumstances:
    Questions for individuals interested in making better programs:

Rigor: Quickly written (~6 hours). Originally made as a Facebook post that emphasized the “Potential Challenges” section. There’s some discussion there.

Epistemic Status: This mostly comes from personal experiences and discussions with community members in the last few years.

Many thanks to Aaron Gertler, Stefan Schubert, Julia Wise, and Evan Gaensbauer for feedback directly on this post. Also, thanks to everyone involved in the Facebook discussion.

Introduction

I’ve been around EA/rationality for several years now (starting in 2008, during college). I’ve seen several instances where promising people (myself included) could have really used some help.

Potential help includes:

  • Money
  • Good mental health support
  • Friends or helpers, for when things are tough
  • Insurance (broader than health insurance)

I’m based in the United States. Government benefits here are substantially worse than in some European countries, so it’s possible these concerns don’t apply elsewhere.

I think interventions in these areas could be valuable. However, I believe they’re unusually challenging to implement. I encourage future groups tackling this space to plan accordingly. I also hope that people upset with a lack of existing infrastructure can sympathize with the challenges around it.

Also, see this post for related discussion: An Emergency Fund for Effective Altruists [EA · GW].[1]

Evidence of the Problem

Some evidence of what I’m referring to includes:

Our communities already have a few valuable initiatives. Some of these include (just off the top of my head):

Perhaps we can learn from religious communities. A while ago, I chatted with a Mormon effective altruist who explained how their system works. Mormons regularly tithe 10% of their income to the Mormon church, but in return, the church seems to take care of them when they're down. I’ve recently been watching videos from Peter Santenello about Hasidic Jewish and Amish communities, and they seem to have similar systems.

Potential Challenges

At this point, there are some wealthy people in and around effective altruism. So if there are straightforward spending opportunities that would be competitive through an EA lens, there could be funding for them.

Unfortunately, I think setting up a safety net would result in several nasty complications. These might be particularly grueling if the program were intended to be an “effective intervention” rather than a “community pool, funded by and benefiting regular effective altruists.”

These complications include:

  1. It's tough to discern "I'm giving money to people with high EV" from "I'm giving money to friends and people I want favors from." So I think anyone who tried to do this would have a complex case to make, and onlookers would assume it was corrupt. Additionally, I believe such a process would be ripe for corruption.
  2. Decisions about who to exclude are some of the least enjoyable decisions. There are tons and tons of people out there with horrible situations. Some people are particularly good at putting together sob stories, and others have critical stories but are too polite to speak up. It's kind of like the real-life version of Papers, Please.
  3. It's easy to conflate "a good-hearted person who sort of morally 'deserves' money, but is unlikely to produce much social impact" with "some jerk we don't like, but who we expect to produce more social impact."
  4. Many people hate being evaluated in this way. People don’t like being rejected for jobs, and this might be more intense, as it might have to be a broader estimate. (Unlike with employment, you couldn’t claim, “Maybe you’re high-value, but you’re not a fit for this specific role”). If the application process were to take place when someone’s in need, then that person might already be in an emotionally challenging place.
  5. Social safety nets make it harder to leave a community or pursue other, independent goals. It seems really unhealthy to have a situation where someone feels like they need to signal their belonging in a community or overstate their impact in order to get basic food or psychological services. Similarly bad, people who don’t believe in effective altruism, but need a safety net, might feel pressured into trying to shmooze with the right people and pretend.
  6. It’s hard to tell who should qualify for services. The critical data might be confidential. Applicants want this to be done emotionally — "just talk to me, and I'll explain it" — but this seems to me like the least objective or high-quality way to make the decision.
  7. Where they exist, non-governmental community-wide safety nets are typically created by religious organizations. Creating an EA variant might make EA seem weirder.

I think the number one problem here is that we're just in a harsh world, and there are lots of great people out there going through cruel times. I find much of the situation globally heartbreaking.

But trying to fix it for the "most altruistic people in expectation" is difficult.

Reflecting on this list, I think many of these concerns are common for social workers and similar professions. I imagine they could be overcome with the right efforts.

Possible Research Questions

If you don’t want to set up an organization yet, but you are interested in investigating this topic, here’s a list of quick questions I have at this point.

Questions for individuals who might get into challenging circumstances:

Questions for individuals interested in making better programs:


[1] Note that I wrote this post on Facebook a few weeks before this other post came out. However, I converted the Facebook post to an EA Forum post earlier because of that piece. (Just in case more people were actively considering setting up such a service).

41 comments

Comments sorted by top scores.

comment by John_Maxwell (John_Maxwell_IV) · 2021-12-30T06:28:44.499Z · EA(p) · GW(p)

An idea for addressing the challenges is to make the safety net something that only a "genuine EA" would find attractive. For example, you get free room and board in a house with other EAs in a low-prestige + low-rent location, with mandatory EA volunteer hours (perhaps spent helping other inhabitants of the house with their issues?) Only vegan food is served, and the length of your stay is capped at N years. I'm not sure it's necessary to be 100% resistant to outsiders with sob stories; I'd say the important thing is that outsiders with sob stories should be able to market those stories elsewhere & get more of what they want. Also, even if they fake their way into an EA support group like what I described, they might find they absorb EA values and identify as an EA at the end... lol.

Replies from: peterbarnett, gruban
comment by peterbarnett · 2021-12-30T12:41:45.293Z · EA(p) · GW(p)

This sounds like an almost exact description of the EA Hotel (CEEALAR), which is mentioned in the post. I think this does a pretty decent job of selecting for 'genuine EA' people.

Replies from: casebash
comment by Chris Leong (casebash) · 2021-12-31T09:24:03.671Z · EA(p) · GW(p)

Although I don't think they have mandatory volunteering?

comment by Patrick Gruban (gruban) · 2021-12-31T10:37:14.336Z · EA(p) · GW(p)

Having listened to the 80K podcast with Howie Lempel it seems that for him it was important to get out of a context where he was with EAs for work and friendship for a time in order to recover. So I'm not sure for which cases this would actually be a good solution.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2022-01-01T20:53:16.031Z · EA(p) · GW(p)

Good point. However, since Howie was employed at an EA organization, he might be eligible for the idea described here [EA(p) · GW(p)]. One approach is to implement several overlapping ideas, and if there's an individual for whom none of the ideas work, they could go through the process Ozzie described in the OP (with the associated unfortunate downsides).

comment by Linch · 2021-12-29T22:22:33.752Z · EA(p) · GW(p)

Some anecdata that might or might not be helpful:

As I mentioned on FB, I didn't have a lot of money in 2017, and I was trying to transition jobs (not even to do something directly in EA, just to work in tech so I had more earning and giving potential). I'm really grateful to the EAs who lent me money, including you. If I instead did the standard "work a minimum wage job while studying in my off hours" (or worse, "work a minimum wage job while applying to normal grad jobs, and then work a normal grad job while studying in my off hours") route, I think my career trajectory would've been delayed for at least a year, probably longer.

Delaying my career trajectory would've cost ~100k in EV if I just stayed in tech and was donating, but I think my current work is significantly more valuable, so it would've cost more than that.
The main counterpoint I could think of is that minimum wage jobs are good for the soul or something, and I think it's plausible that if I worked for one long enough I would be more "in touch" with average Americans and/or been more generally mature on specific axes. I currently do not believe the value of this type of maturity is very high, compared to my actual counterfactual (at Google etc) of the skills/career capital gained via having more experience interacting in "elite cultures," being around ambitious people, or thinking about EA stuff.

comment by Patrick Gruban (gruban) · 2021-12-30T07:25:53.257Z · EA(p) · GW(p)

Thank you for the overview! What comes to my mind as similar is the Künstlersozialkasse (KSK) in Germany, which is governed by a special law, the Künstlersozialversicherungsgesetz.

This artist social fund is open to anyone who works self-employed in an artistic job (visual artists, authors, journalists, musicians, etc.) and doesn't have employees. You have to fill out a 9-page application stating what work you have already done and that you earn above the minimum income from artistic work of 3,900€/year. In the first three years of your working life, you don't have to prove this minimum, and the requirement was also deferred during Covid.

If you are accepted, the KSK will pay the cost of your health, care, and pension insurance, which covers, for example, doctors, clinics, medicine, psychotherapy, rehabilitation clinics, and dentists. You have to state your income yearly and pay a portion to the KSK.

The KSK is financed by three sources:

  • Payments by artists (50%)
  • Payment by the government (20%)
  • Payment by clients that employ the artists (30%)

My company has to list all artist invoices that we paid in a year (to graphic designers, photographers, make-up artists, etc.) and submit the list to the KSK. We are then charged a percentage (currently 4.2%) of this total. Every company and self-employed person in Germany has to do this.
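As a rough illustration, the financing mechanism described above can be sketched in a few lines of Python. The figures (the 50/20/30 split among artists, government, and clients, and the 4.2% client levy) are taken from this comment; the function names are purely illustrative, not any real API.

```python
def ksk_split(total_insurance_cost: float) -> dict:
    """Split a total annual insurance cost among the three KSK funding sources."""
    return {
        "artist": total_insurance_cost * 0.50,      # the artist's own share (50%)
        "government": total_insurance_cost * 0.20,  # government subsidy (20%)
        "clients": total_insurance_cost * 0.30,     # levy on companies hiring artists (30%)
    }


def client_levy(artist_invoices_total: float, rate: float = 0.042) -> float:
    """Levy a company owes on the total of artist invoices it paid in a year."""
    return artist_invoices_total * rate


# Example: 10,000 EUR of annual insurance cost, and a company that paid
# 50,000 EUR in artist invoices over the year.
split = ksk_split(10_000)
levy = client_levy(50_000)
```

Note that in practice the client levy is charged regardless of whether a given artist is in the KSK, which is the "additional subsidising" Patrick mentions below.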

An analogue in EA could be a system where:

  • you have to prove that you
    • got EA funding for your work,
    • work at an EA org without insurance, or
    • are in the first 3 years of your EA career;
  • you pay a portion of your EA salary for the insurance;
  • the insurance covers health insurance and other insurance-like services;
  • funders fill the remaining gap.

This model would still have the issue of vetting applicants, but one clear criterion would be that you can only stay in beyond the first three years by showing minimum funding through grant approvals or EA-aligned jobs. If you don't earn any EA money after that, you would be excluded.

Replies from: casebash
comment by Chris Leong (casebash) · 2021-12-30T08:23:05.700Z · EA(p) · GW(p)

So thinking about how this works overall:

  • The government is providing a 20% subsidy
  • Any artist who receives insurance from their employer subsidises those who don't
  • Artists outside of the KSK (either because they've not been very successful or they choose not to join) subsidise those inside
  • Highly profitable artists subsidise the less profitable ones (egalitarian component likely works better here as there is more variation in what artists earn, however, EAs are probably more happy to cross-subsidise each other)
  • People hiring artists have to complete additional paperwork
  • The total compensation for artists is likely slightly higher because people forget about these additional costs when considering what they can pay
Replies from: gruban
comment by Patrick Gruban (gruban) · 2021-12-31T10:33:22.065Z · EA(p) · GW(p)

I wasn't as precise as I could have been, so let me clarify:

  • The German health, care, and pension insurance system is set up so that employees and employers each pay 50% of the fees. The fee is defined as a percentage of income, so high-income earners subsidise low-income workers.
  • The KSK is a system on top, only for self-employed artists, who would typically have to cover the 50% share that an employer would otherwise pay. 50% of the insurance is paid by the artists (the same as employees pay), the government subsidises 20%, and clients cover 30%.
  • Clients have to pay without knowing if the artist is part of the KSK, so there is some additional subsidising.
  • The additional paperwork for clients could be reduced if artists were allowed to collect the payments themselves, which I would prefer.

I'm not in favour of how the KSK system works and wouldn't recommend it as a model. However, I think their way of identifying an artist by type of work and minimum revenue from this work area is an interesting input.

comment by Larks · 2021-12-30T00:08:53.061Z · EA(p) · GW(p)

Thanks for writing this good overview of a perennial topic.

Paying people higher salaries for EA jobs might be an alternative approach to at least part of this problem. It would allow people to save to protect themselves from future unemployment, without the difficult vetting and bad incentive effects of 'insurance'. It doesn't help people very early on in their careers, but probably no insurance product would either, as these people would often not have built up a credible history of contribution anyway.

Replies from: oagr, John_Maxwell_IV, casebash, gruban
comment by Ozzie Gooen (oagr) · 2021-12-30T01:11:42.404Z · EA(p) · GW(p)

Agreed that higher salaries could help (and are already helping). Another nice benefit is that they can also be useful for the broader community; more senior people will have more money to help out more junior people, here and there.

I imagine if there were an insurance product, it would be subsidized a fair amount. My hope would be that we could have more trust than would exist for a regular insurance agency, but I'm not sure how big of a deal this would make.

comment by John_Maxwell (John_Maxwell_IV) · 2021-12-30T06:18:54.114Z · EA(p) · GW(p)

Another idea is a safety net which estimates the opportunity cost associated with taking a low-paying EA role and caps the financial support at said opportunity cost. Potentially a much cheaper way to achieve the same end result.

The best approach might be to have people register for this safety net as soon as they get an EA role, so they can argue for a particular opportunity cost at that time and know how much "insurance" they're getting.

comment by Chris Leong (casebash) · 2021-12-30T10:37:19.664Z · EA(p) · GW(p)

Perhaps, although it may also increase the number of people working uncompensated.

comment by Patrick Gruban (gruban) · 2021-12-31T10:42:14.848Z · EA(p) · GW(p)

For people who have taken the Further Pledge, an increase in salary would be less valuable than insurance paid by the employer. This might only be relevant for a few people; however, they might also be among the most dedicated.

comment by bob · 2021-12-29T21:17:11.712Z · EA(p) · GW(p)

Hey, I wrote the article you refer to. I only intend to partially reimburse people who donated money to EA-related causes. Most problems you describe apply to a safety net for all effective altruists, which would be much more difficult. I'll quote a comment of mine:

By focusing exclusively on reimbursing donors in financial trouble, we avoid opening a can of worms. First of all, the risk of fraud is much lower. If EAs can only get back half of what they gave away, there is no way to use the fund to make money, unless they control a GiveWell-recommended charity. Second, we do not have to judge whether people are EA-aligned. Third, people cannot take advantage of the fund and, perhaps more importantly, people will not have to worry other people are taking advantage of the fund.

To expand on the last point, if we ever decide to reimburse a donation because someone's second car broke down, that might annoy some people with a different idea of what constitutes an emergency, but at least they know that person donated at least twice the amount needed to fix the car. Now, if we handed out fix-your-second-car money to people who never donated anything to charity, I'd predict riots.

Also, coupling the two promotes donations. People who would normally find parting with substantial amounts of money scary have the assurance that they can always knock on our door.

I believe this covers all points you raised, but let me know if I missed anything. Just to reiterate, my hypothetical charity wouldn't make a judgment on whether applicants are still effective altruists if they need money.


The top comment on my article:

This seems like the type of infrastructure that should be experimented with on a small scale rather than heavily debated

Do you agree with this?

Replies from: Jackson Wagner, oagr
comment by Jackson Wagner · 2021-12-29T22:41:06.950Z · EA(p) · GW(p)

I liked your post a lot too, and I think it would be a good starting point precisely because it would be simpler and easier to avoid corruption by creating a safety net with a fixed group of members (people who had submitted evidence of donations) and capped payouts (50% of their total donation amount, or etc), rather than having a charity-style organization that evaluates applications from anyone, like EA Funds.

Mormon/Amish/etc social insurance through church works well because there is a pretty clear, pretty hard-to-fake signal of who's a community member and who's not (i.e., do you spend every Sunday in church or not). Normal insurance companies create a clear distinction between members and nonmembers by requiring everyone to pay a monthly premium. The EA and rationalist communities will probably always be more amorphous and fuzzy than a typical Amish group, but if we just required that everyone pays a monthly premium then it's unclear how we could do any better than existing insurance companies. So I like the idea of deciding membership based on proof of past charitable donations to EA causes.

I also agree that a service like this (allowing people to "get their donations back" from the community pool if they unexpectedly fell on hard times) might encourage people to donate more or take on more risks in the first place, which would be good for EA overall.

I think it would be good to start experimenting with the service described in your post.  Over time, if successful, the insurance pool could try to branch out into other more advanced services -- perhaps helping people make risky but high-expected-value career moves by offering them some kind of insurance or support in case their ambitious career move fails.  Or doing the kind of community-assistance grantmaking that Ozzie is exploring here.

The biggest wins probably come from finding more good ways to support early-career people facing precarious situations while just getting into EA, exactly like Linch's story above.  The "get back your proven past donations" approach won't work as well for people in those situations since most of them won't have made many EA donations yet.  But hopefully we could try to build up to that over time somehow.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-12-30T01:22:05.136Z · EA(p) · GW(p)

(Just noting general agreement with this)

comment by Ozzie Gooen (oagr) · 2021-12-30T01:20:27.265Z · EA(p) · GW(p)

I agree that your proposal gets around most (maybe all?) of the issues I mentioned. However, your proposal focuses on earning-to-givers who have already given a fair bit; this seems to tackle a minority of the problem (maybe 20%?). Maybe this is a good place to begin. I feel like I haven't met many people in this specific camp, but maybe there are more out there.
 

Do you agree with this?

I'm happy to see it on a small scale. That said, the existing discussion/debate doesn't seem like all too much to me. I also feel like there could be some easy wins for research, like doing some investigation into the questions I linked above. 

I'd expect 1-8 weeks of investigation would be the best next step. (Note that "investigation" could mean "interviewing a bunch of people to see what they might want")

Replies from: bob
comment by bob · 2021-12-30T21:19:54.257Z · EA(p) · GW(p)

I agree that your proposal gets around most (maybe all?) of the issues I mentioned.

Ah, that's where we went wrong. I assumed you would have mentioned that if you thought so.

However, your proposal focuses on earning-to-givers who have already given a fair bit, this seems to be tackling a minority of the problem (maybe 20%?).

I agree, and it is quite challenging to determine the size of that minority. If anyone knows anyone who has been in this situation, please send me a message.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-12-30T23:22:24.654Z · EA(p) · GW(p)

Will do. No one comes to mind now, but if someone does, I'll let you know.

(Also, I'm sure others reading this with ideas should send them to Bob)

comment by Chris Leong (casebash) · 2021-12-30T06:46:45.871Z · EA(p) · GW(p)

The Nonlinear Fund is working on addressing this problem for people in AI safety. (My guess is they will start with people at orgs, then possibly expand to people on certain grants; I interned there a while ago, so I don't know the current plan.)

Gavin Li is working on EA Offroad for people "not constituted for college" or who would find attending college challenging due to their financial position.

I would really like to see more EA hubs established in more affordable cities. I think the financial challenges a lot of people face result from trying to support themselves in some of the world's most expensive cities. That said, there seem to be a few projects starting in this space, so I would encourage people to support existing projects rather than starting more.

I'm not exactly sure of the scope of Magnify Mentoring (previously WAMBAM), but it might be able to provide some support in helping people figure out their lives. If not, then perhaps someone should create a mentoring service more focused on helping people improve their lives.

Further ideas:

  • Bountied Rationality - I'm sure that there are a lot of small, useful, and accessible tasks to do. Perhaps someone should apply for funding in order to post more bounties here. (Argument against: bounties are generally winner-takes-all so they can easily result in people burning up a lot of time without receiving any money in return)
  • On a similar, but slightly different note, the AI Safety Fundamental course is now paying facilitators $1000. Having more of these kinds of opportunities available seems positive.
  • Programming bootcamps - a lot of EAs are capable of becoming programmers and this could provide a path to financial stability.
  • Some kind of peer support project with group facilitators receiving training from professionals.
  • Something like Y Couchinator [LW · GW] to help EAs share their free rooms.
  • Exit grants. In some circumstances, it might make sense to award exit grants to people who were funded/employed productively for a reasonable period but have now become unproductive. These grants should probably be awarded privately with only the total number of grants and dollar value reported.

Final thoughts:

Given all the excellent points you make about the challenges of such a fund, I believe that it's important to have a wide variety of other means of support. Nonetheless, I suspect that a more traditional assistance organisation would be valuable, so long as there was proper communication about its role, specifically, the limits on how much support it can provide and that the organisation wouldn't be able to help everyone.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-12-30T23:25:18.909Z · EA(p) · GW(p)

That all sounds pretty good to me. I like the idea of a wide variety of means of support; both to try out more things (it's hard to tell what would work in advance), and because it's probably a better solution long-term. 

comment by Vanessa · 2021-12-30T17:38:57.514Z · EA(p) · GW(p)

Kudos for this post. One quibble I have is, in the beginning you write

Potential help includes:

  • Money
  • Good mental health support
  • Friends or helpers, for when things are tough
  • Insurance (broader than health insurance)

But later you focus almost exclusively on money. [Rest of the comment was edited out.]

Replies from: oagr, gruban
comment by Ozzie Gooen (oagr) · 2021-12-30T17:57:34.659Z · EA(p) · GW(p)

Good point about focusing on money; this post was originally written differently, then I tried making it more broad, but I think it wound up being more disjointed than I would have liked.

First, I’d also be very curious about interventions other than money.

Second though, I think that “money combined with services” might be the most straightforward strategy for most of the benefits except for friends.

“Pretty strong services” to help set up people with mental and physical health support could exist, along with insurance setups. I think that setting up new services that are better than existing ones, but much more limited in scope, is possible, but expensive (at least in the opportunity cost of those who would set them up.)

Some helpers when things are rough could in theory be hired.

Encouraging more friendships seems pretty great, but very different. I imagine that’s more about encouraging good community structures/networks/events and stuff, but I’m not sure.

I also want to encourage you and others reading this to brainstorm on the topic. I don’t have any private knowledge, and I imagine others here would have much better insight into much of the problem than I do. (I’m on the older side of EAs now, and am less connected to many of the new/younger/growing communities)

comment by Patrick Gruban (gruban) · 2021-12-31T10:51:44.283Z · EA(p) · GW(p)

I think this is a good point. One possibility of addressing this could be on the level of local EA groups giving organizers the tools and education to identify struggling members and help them better. As a local organizer, I would find additional resources helpful, especially if they are very action-orientated.

comment by Manuel_Allgaier · 2022-01-04T14:04:49.996Z · EA(p) · GW(p)

Effective and easy intervention: Help EAs new to your city settle in

Many EAs move to Berlin for jobs, and many of them (especially non-Germans, but also Germans) don't find good housing right away, and some find it difficult to make friends (for example, if they only found housing far from the city center). A single 1-1 conversation with some advice and introductions to people who share their interests can really make a difference, and it's easy to do: just reach out to new people at your local meetups, in your local EA Facebook group, etc., and offer them help in a friendly, respectful, non-obtrusive way (make sure you don't come across as weird or creepy). Ideally, coordinate with your local group organiser on how best to do this (and if you don't have a group, contact CEA and set one up! :))

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2021-12-29T23:17:42.640Z · EA(p) · GW(p)

(Very small point) From my understanding REACH is no longer operational 

Replies from: casebash, oagr
comment by Chris Leong (casebash) · 2021-12-30T05:50:45.870Z · EA(p) · GW(p)

That's a shame to hear. Is there a write-up anywhere?

comment by Ozzie Gooen (oagr) · 2021-12-30T01:22:24.978Z · EA(p) · GW(p)

Yep. Sorry, I didn't mean to make it seem like it was. Changed. 

comment by Charles He · 2021-12-29T23:04:51.383Z · EA(p) · GW(p)

Hi Ozzie,

For distributing aid, especially money, do you have any thoughts on allocation/fairness/gatekeeping?

This can be either in a personal sense, or more technical "mechanism design" sense.

This seems to be the main blocker to doing something scaled up and systematic.

 

It seems that personal networks and relationships work, but scaling this up beyond personal relationships leads to questions about abuse and moral hazard. People who claim to be EA to get money, for example. 

My guess is that a serious question besides abuse is fairness. Who deserves it and how much? 

Bob's project, mentioned in the comments, is one solution. His implementation timeline is unclear, though, and even if perfectly executed, it only helps a small set of earning-to-givers.

 

To be clear, I would personally be willing to bite the bullet (to be fair with not my own money) on some pretty aggressive schemes, but I think buy-in and optics play a role.

Replies from: oagr
comment by Ozzie Gooen (oagr) · 2021-12-30T01:34:52.661Z · EA(p) · GW(p)

I think this is a serious question.

One big question is whether this would be viewed more as a "community membership" thing or as a "directly impactful" intervention. I could imagine the two being pretty different from one another.

I think personally I'm more excited by the second, because it seems more scalable. 

The way I would view the "utilitarian intervention" version would be pretty intense, and much unlike almost all social programs, but it would be effective.
1. "Fairness" is a tricky word. The main thing that matters is who's expected to produce value. 
2. Many of the most valuable people are not EAs. Identifying these people and giving them support would be included. It could look like trying to find the most high-expected-value people globally, even if they have narrow online presences.
3. There would be pretty strict/disciplined measures for evaluating which individuals would represent a "good deal". This would mean people would have rankings, maybe "predictions of impact". 
4. Maybe there would be "insurance" options, for people to have the feeling of stability (assuming this makes them more productive and risk-taking), even if help later on would in isolation be a net loss. (For example, funding after retirement)

I guess in some ways, this would be a very elite social program, for a very specific definition of "elite". 

Back to the "community membership" variant; one great thing about this is that maybe it could be mostly community-funded, and not in need of external funding. I imagine people in this camp would need to pay a lot of attention to find possible bad actors early and out them. It seems like a tough problem, but the solution space is large. 

Another factor is that if people are willing to give up some privacy, then a lot of evaluation becomes easier, and gaming/abusing the system becomes harder.

Replies from: Charles He, Charles He
comment by Charles He · 2021-12-31T05:42:51.410Z · EA(p) · GW(p)

Random comment: Do you or anyone else have any comments about the use of terminology with negative connotations, like “gatekeeping” or “elite”?
 

Background (unnecessary to read):

Basically I’ve been using the word “gatekeeping” a fair bit. 

This word seems to be an accurate description of principled, prosocial activity to create functional teams or institutions. It includes activities that no one is surprised to see controlled, such as grantmaking.

To see this another way: someone somewhere (Party A) has given funding to achieve maximum impact for something (Party B), and we need people (Party C) to make this happen in some way. We owe Parties A and B a lot, and that usually includes some sort of selection of, and control over, Party C.

Also, I think that “gatekeeping” seems particularly important in the early stages of founding a cause area or set of initiatives, where such activity seems necessary or even occurs by definition. In these situations it seems less vulnerable to real or perceived abuse (or at least insularity), and it seems useful and virtuous to signpost and explain what the gatekeeping is and what its parameters and intentions are.

However, gatekeeping is basically a slur in common use.

Now, “elite” has the same problem ("elitism"). It is also an important, genuine, and technical thing to consider and signpost, but it can also be associated with real or perceived misuse.

Maybe it’s tenable if I just use "gatekeeping". I’m worried that if I start passing around docs, posts, or comments filled with mentions of both “gatekeeping” and “elites", plus terms of art from who knows what else (from various disciplines, not just EA), it might offend or at least look insensitive.

I guess I could replace these words with others.

However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons, for example imposing cognitive/jargon costs on everyone.

I’m not sure if you have any thoughts. I thought I would write this because this seems like one of those things that needs input from others.

Replies from: casebash
comment by Chris Leong (casebash) · 2022-01-05T07:09:59.435Z · EA(p) · GW(p)

I definitely think it's important to pay attention to language when a simple substitution can avoid issues. Maybe it'd be better to use the word "evaluation" or "stewardship" rather than "gatekeeping"?

"High-impact" might also be a good substitute for "elite".

However, I dislike it when people change words for political reasons. It seems like bad practice for a number of reasons, for example imposing cognitive/jargon costs on everyone.

I would suggest using contentious words when substitutes would significantly impede communication or obscure the point being made, but otherwise being flexible.

comment by Charles He · 2021-12-31T05:18:37.810Z · EA(p) · GW(p)

Hi Ozzie,

This seems excellent and I learned a lot from this comment and your post. 

I agree with the impactfulness argument you have made and its potential. It seems important because it is much larger in scale. It might even ease other types of giving into the community somehow (because you might develop a competent, strong institution). It's also impactful by design.

Also, as you suggest, finding very valuable non-EA people to execute causes seems like a pure win.[1]


Now, it seems I have a grant from a major funder of EA longtermism projects. Relatedly, I am researching (or really just talking about) a financial aid project similar to what you described.

This isn't approved or even asked for by the grantmaker, but there seems to be some possibility it will happen (though not more than a 50% chance).

Your thoughts would be valuable and I might contact you.

I might copy and paste some content from the document into the above comment to get feedback and ideas.

[1] But finding and funding such people also seems difficult. My guess is that people who do this well (e.g., Peter Thiel with the Thiel Fellowship) are established in related activities, or connected, to an extraordinary degree. My guess is that this activity of finding and choosing people is structurally similar to grantmaking, as done by groups like GiveWell. I think that successive grantmakers for alternate causes in EA have had a mixed track record compared to the originals. Maybe this is because the inputs are deceptively hard and somewhat illegible from the outside.

Replies from: casebash
comment by Chris Leong (casebash) · 2022-01-05T07:11:43.676Z · EA(p) · GW(p)

I'd be very keen to hear what you're planning/provide feedback.

comment by Vilfredo's Ghost (Bluefalcon) · 2022-01-03T07:34:34.578Z · EA(p) · GW(p)

I have an objection to the idea (or at least some versions of it) that isn't covered in the post. I don't think my objection applies to advice/counseling/that sort of thing, but it certainly does apply to money:

The process of building my own safety net, in itself, made me a lot more effective than I would have been otherwise. I went through some really rough times in my early career, including sleeping on a park bench for a month and seriously contemplating suicide after several rejections for jobs I was highly qualified for. To fix my situation, I had to confront some hard truths about how the world works, and about the limits of my ability to plan the future accurately, that I am not sure I ever would have confronted otherwise. I wound up in such extreme circumstances because I made risky career bets on independent projects. I assumed they would work out, and that those successes would lead to other successes that would solve all the problems created by going several months without a paying job or meaningful savings. I failed to contingency plan for either a project's failure, or for a project's success not immediately landing me a paying job consistent with the skillset it demonstrated. I also had a bad tendency to blame all my failures on difficult circumstances or other people instead of thinking hard about what I could have done differently.

I have relatives in their 60s who still seem to think a deus ex machina will swoop in and rescue them from imprudent decisions as long as that's what feels "fair" in the situation. I had definitely absorbed some of that attitude, until confronted with the harsh reality that there was no one to rescue me except myself, and that I could do so only by making decisions with a clear eye toward their practical consequences, not based on my feelings about how the world "should" work. So I worry that in being the deus ex machina, even for extremely high-EV people, you would risk reducing their EV by depriving them of an important skill-building opportunity.