Yeah, I definitely agree with that - I think a pretty common issue is people entering people management on the basis of their research skills, and those two skill sets don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management, and that seems like an issue when you have ambitious people who want to grow professionally but no options except people management.
I agree with several of your points here, especially the reinventing the wheel one, but I think the first and last miss something. But, I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.
The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done.” Outside of EA, it normally means the main function of the organisation (the COO normally has the highest number of people reporting to them after the CEO).
I don't think this is fully accurate — my impression is that "operations" is used widely outside of EA in the US nonprofit space to refer to 90%+ of what ops staff in EA do. E.g. looking through a random selection of jobs at US nonprofits, the operations jobs seem similar to what I'd expect in EA, which is basically working on admin / finance / HR / legal compliance, etc., and some intersections with fundraising/comms. At lots of small nonprofits (like EA ones), these jobs are necessarily staffed by generalists — you have to do all those functions, but none might be a full-time job on its own, so you find one person to do it all. I've worked at a bunch of US nonprofits outside of EA and all of them had staff with titles like "Operations Director" or "Operations Coordinator" who basically did the same thing as I'd expect those roles to do at EA organizations. I think EA likely just took this titling from the US nonprofit space in general, though EA does have some unusual operations norms (e.g. being unusually high touch).
I think that there is definitely a different use of this term in a lot of for-profit contexts (e.g. business operations) but I've also seen it used the same way there sometimes. And, COO usually stands for Chief Operating Officer, not Chief Operations Officer, and those are definitely different things.
Managers within EA don’t seem to realise that some things they call operations are actually management responsibilities, and that to be a manager you need to be willing to do less of, or maybe none of, the day job, e.g. the CEO of a large research organisation should probably not do research anymore
I agree that operations staff at EA organizations do lots of things that might, in other contexts, be done by managers, and your specific example might be correct. But I also think that sometimes, especially in a nonprofit context, a large amount of admin burden is placed on programmatic staff, and it can be good to design systems to change this. That being said, the examples from the original post (e.g. dealing with emails for someone) sound more like an Executive Assistant's role, or just bad?
I think that lots of nonprofits outside of EA are under weird kinds of pressure (e.g. Charity Navigator rates charities on "administrative expense ratio") to not have particularly high operations costs. And an easy way to do this is to shift those expenses to managers (e.g. managers doing more paperwork). I don't think this is necessarily intentional, but a pretty undesirable effect of having fewer ops staff. I don't think EA organizations are under the same pressure, and that seems generally good.
If you're still interested in joining Rethink Priorities' board, we've extended the deadline to submit an application to January 20th. We'd love to hear from you by then! Apply today.
Hey James - we aren't talking publicly about this project right now for a variety of reasons, but it's inaccurate to say that the project hasn't launched or run programs — there are lots of lessons learned here worth sharing in the future. Feel free to email me with any questions you have about it.
Not sure if it is active anymore, but there is a longstanding hub for EAs to do this: https://donationswap.eahub.org/
I've noticed that it takes new orgs up to a year to show up in that search, so it might also be that they've applied for or gotten the status recently (given that FTX stuff was so new). Delaware corporation search suggests they are registered as a nonprofit corporation in Delaware - https://icis.corp.delaware.gov/ecorp/entitysearch/NameSearch.aspx (you have to search them by name).
Unfortunately not! We use Greater Wrong because we can do an RSS feed for a specific tag for the forum. E.g., we have a communications Slack channel where any post made and tagged "Rethink Priorities" is automatically posted using an RSS feed.
This isn't really that big a deal for us - I just thought I'd mention it here :)
This is minor, and probably not relevant to most people, but my work (Rethink Priorities) would definitely use an RSS feed version of the Forum so we can get notifications in Slack when things with certain tags are posted. I think we could do this now with an account / notifications to email / email to Slack, but instead are using Greater Wrong for now for simplicity (e.g. this feed goes to our comms Slack channel: https://ea.greaterwrong.com/topics/rethink-priorities?format=rss). Thanks for all you do!
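For anyone curious what this kind of tag-feed-to-Slack setup looks like, here's a minimal sketch using only the Python standard library. This is not our actual setup: the Slack webhook URL is a placeholder (you'd get a real one from Slack's incoming-webhooks feature), and real use would also need to track which items were already posted so the channel isn't spammed with duplicates.

```python
# Minimal sketch: poll a GreaterWrong tag RSS feed and forward items to Slack.
import json
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://ea.greaterwrong.com/topics/rethink-priorities?format=rss"
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder


def parse_feed_items(xml_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items


def post_to_slack(title, link):
    """Send one feed item to a Slack incoming webhook as a JSON payload."""
    payload = json.dumps({"text": f"New tagged post: {title} ({link})"}).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)


def forward_new_posts():
    """Fetch the feed once and forward every item (no dedup in this sketch)."""
    with urllib.request.urlopen(FEED_URL) as resp:
        for title, link in parse_feed_items(resp.read()):
            post_to_slack(title, link)
```

In practice you'd run `forward_new_posts()` on a schedule (cron, a scheduled GitHub Action, etc.) and persist the set of links already forwarded.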
Yeah, I agree with this entirely. I think that probably most good critiques should result in a change, so just talking about doing that change seems promising.
That makes sense to me.
Yeah, I definitely think that many people from left-leaning spaces who come to EA also become sympathetic to suffering-focused work in my experience, which also seems consistent with this.
Definitely mostly using it to mean focused on x-risk, mostly because that seems like the largest portion / biggest focus area for the community.
I interpret that Will MacAskill quote as saying that even the most hardcore longtermists care about nearterm outcomes (which seems true), not that lead reduction is supported from a longtermist perspective. I think it's definitely right that most longtermists I meet are excited about neartermist work. But I also think that the social pressures in the community currently still push toward longtermism.
To be clear, I don't necessarily think this is a bad thing - it definitely could be good given how neglected longtermist issues are. But I've found the conversation around this to feel somewhat like it is missing what the critics are trying to get at, and that this dynamic is more real than people give it credit for.
I think something you raise here that's really important is the tension between the worlds that a neartermist view and a longtermist view suggest we ought to be trying to build, and that tension seems underexplored in EA. E.g. there's an inherent tension between progress studies and x-risk reduction.
I mean, my personal opinion is that if there had been a concerted effort of maybe 30-50 people over ~2015-2020, the industry could have been set back fairly significantly. An especially strong lever here seems to be convincing venture capital not to invest in the space, because VC money is going to fund the R&D necessary to get insectmeal cost-competitive with fishmeal for the industry to succeed. But the VC firms seemed to be totally shooting in the dark during that period on whether or not this would work, so I think plausibly a pretty small effort could have had a substantial impact on whether or not funding got into the space. At least, I think there would have been an opportunity to delay its development by several years, and give the animal welfare community time to organize / figure out better strategies.
Now, the biggest bottleneck for this space is finding people interested in working on it (which would have been a bottleneck before too). It's definitely weird, but there just aren't that many people who want to do this work. Finding capable founders for new animal charities focused on highly neglected animals seems especially difficult.
Yeah that's fair - there are definitely people who take them seriously in the community. To clarify, I meant that person-affecting views seem pretty widely dismissed in the EA funding community (though the word "universally" is probably too strong there too).
That doesn't seem quite right - negative utilitarians would still prefer marginal improvements even if all suffering didn't end (or in this case, a utilitarian might prefer that many become free even if all didn't become free). The sentiment is interesting precisely because it doesn't acknowledge the marginal states that utilitarians are happy to compare against ideal states, or against worse marginal states.
Yeah, I think that some percentage of this problem is fixable, but I think one issue is that there are lots of important critiques that might be made from a place of privileged information, and filling in a form will be deanonymizing to some extent. I think this is especially true when an actor's actions diverge from stated values/goals — I think many of the most important critiques of EA that need to be made come from actions diverging from stated values/goals, so this seems hard to navigate. E.g. I think your recent criminal justice reform post is a pretty good example of the kind of critique I'm thinking of, but there are ones like it based on actions that aren't public or at least aren't written up anywhere that seem really important to have shared.
Related to this, I feel like a lot of people in EA lately have expressed a sentiment that they have general concerns like the one I outlined here, but can't point to specific situations. One explanation for this is that their concerns aren't justified, but another is that people are unwilling to talk about the specifics.
That being said, I think the anonymous submission form is really helpful, and glad it exists.
For what it's worth, I've privately been contacted about this particular critique resonating with people more than any other in this post, by a large margin, which suggests to me that many people share this view.
Thanks for the response!
RE 5d chess - I think I've experienced this a few times at organizations I've worked with (e.g. multiple funders saying, "we think it's likely someone else will fund this, so we are not/only partially funding it, though we want the entire thing funded," and then the project ends up not fully funded, and the org has to go back with a new ask / figure things out). This is the sort of interaction I'm thinking of here. It seems costly for organizations and funders. But I've got like an n=2 here, so it might just be chance (though one person at a different organization has messaged me since I posted this and said this point resonated with their experiences). I don't think this is intentional on funders' part!
RE timelines - I agree with everything here. I think this is a tricky problem to navigate in general, because funders can have good reasons to not want to fund projects for extended periods.
RE vocabulary - cultural differences make sense as a good explanation too. I can think of one instance where this felt especially noticeable - I encouraged a non-EA project I thought was promising to apply for funding, and they didn't get it. I then pitched the funder on the project personally, and they changed their mind. There are obviously other factors at play here (e.g. maybe the funder trusted my judgement?), but looking at their application, it seemed like they just didn't express things in "EA terms" despite the project being pretty cool, and it wasn't that their application was overly sensational or something.
RE brain drain - I agree with everything here. I think I'm more concerned about less prestigious but really promising organizations losing their best people, and that grantmaking in particular is a big draw for folks (though maybe there is a lot of need for talented grantmakers so this isn't a bad thing!).
Yeah that makes sense to me. To be clear, the fact that two smart people have told me that they disagree with my sense that moral realism pushes against consistency seems like good evidence that my intuitions shouldn't be taken too strongly here.
I definitely agree with this. Here are a bunch of ideas that are vaguely in line with this that I imagine a good critique could be generated from (not endorsing any of the ideas, but I think they could be interesting to explore):
- Welfare is multi-dimensional / using some kind of multi-dimensional analysis captures important information that a pure $/lives saved approach misses.
- Relatedly, welfare is actually really culturally dependent, so using a single metric misses important features.
- Globalism/neoliberalism are bad in the long term for some variety of reasons (cultural loss that makes human experience less rich, and that's really bad? Capitalism causes more harms than benefits in the long run? Things along those lines).
- Some change is really expensive and takes a really long time and a really indirect route to get to, but it would be good to invest in anyway even if the benefits aren't obvious immediately. (I think this is similar to what people mean when they argue for "systemic" change as an argument against EA).
I think that one issue is that lots of the left just isn't that utilitarian, so unless utilitarianism itself is up for debate, it seems hard to know how seriously people in the EA community will take lefty critiques (though I think that utilitarianism is worth debating!). E.g. "nobody's free until everyone is free" is fundamentally not a utilitarian claim.
Yeah those are fair - I guess it is slightly less clear to me that adopting a person-affecting view would impact intra-longtermist questions (though I suspect it would), but it seems more clear that person-affecting views impact prioritization between longtermist approaches and other approaches.
Some quick things I imagine this could impact on the intra-longtermist side:
- Prioritization between x-risks that cause only human extinction vs extinction of all/most life on earth (e.g. wild animals).
- EV calculations become very different in general, and probably global priorities research / movement building become higher priority than x-risk reduction? But it depends on the x-risk.
Yeah, I'm not actually sure that a really convincing person-affecting view can be articulated. But I'd be excited to see someone with a strong understanding of the literature really try.
I also would be interested in seeing someone compare the tradeoffs of non-person-affecting views vs person-affecting views. E.g. person-affecting views might entail X weirdness, but maybe X weirdness is better to accept than the repugnant conclusion, etc.
That's interesting and makes sense — for reference I work in EA research, and I'd guess ~90%+ of the people I regularly engage with in the EA community are really interested / excited about EA ideas. But that percentage is heavily influenced by the fact that I work at an EA organization.
Thanks for sharing these! It looks like this list ends at H (with some Ls at the beginning). I was wondering if it got cut off, or if that's coincidental?
My spouse shared this view when reading a draft of this post, which I found interesting because my intuitions went somewhat strongly the other way.
I don't really have strong views here, but it seems like there are three possible scenarios for realists:
- Morality follows consistent rules and behaves according to a logic we currently use
- Morality follows consistent rules but doesn't behave according to a logic we currently use
- Morality doesn't follow consistent rules
And in 2/3 of those, this problem might exist, so I leaned toward saying that this was an issue for realists.
There is a defense of ideas related to your position here, though I didn't find it particularly compelling personally.
I'd be interested in a survey on this.
My impression is that realism isn't a majority view among EAs, but is way more common than in the general non-religious public / the greater tech and policy communities that lots of EAs come out of.
Though I think this is something I want to see critiqued regardless of realist-ness.
I think I agree with everything here, though I don't think the line is exactly people who spend lots of time on EA Twitter (I can think of several people who are pretty deep into EA research and don't use Twitter/aren't avid readers of the Forum). Maybe something like, people whose primary interest is research into EA topics? But it definitely isn't everyone, or the majority of people into EA.
It probably depends on the area, but non-welfare-related impact is going to vary significantly by industry. E.g. I imagine that insecticide use overall has fairly substantial environmental impacts, but that residential insecticides do not. I haven't looked into this at all, but I'd guess there are many ways in which these industries are bad and also good (they all exist because they provide some useful benefit) besides the welfare implications.
I think I agree with many aspects of the spirit of this, but it's fairly unclear to me that organizations simply trying to pay market rates, to the extent possible, would produce this result. I don't think funding is distributed across priorities according to the values of the movement as a whole (or even via some better conception of priorities where more engaged people were weighted more highly, etc.), and I think different areas in the movement have different philosophies around compensation, so there are other factors warping how funding gets distributed. It seems really unclear to me whether EA salaries currently carry signals about impact, as opposed to mostly telling us something about the funding overhang / relative ease of securing funding in various spaces (which I think is somewhat uncorrelated with impact). To the extent that salaries do seem correlated with impact (which I think is possibly happening, but I'm uncertain), I'm not sure the reason is that the EA job market is pricing in impact.
I'm pretty pro compensation going up in the EA space (at least to some extent across the board, and definitely in certain areas), but I think my biggest worry is that it might make it way harder to start new groups - the amount of seed funding a new organization needs to get going when the salary expectations are way higher (even in a well funded area) seems like a bigger barrier to overcome, even just psychologically, for entrepreneurial people who want to build something.
Though I also think a big thing happening here is that lots of longtermism/AI orgs are competing with tech companies for talent, while other organizations are competing with non-EA employers that pay less than tech companies, so the salary stratification is just naturally going to happen.
Thanks for sharing this! I think it is tough that the experiences you list are shared by many other people with ops experience. Something I've witnessed at a lot of organizations is that growth can be somewhat stumbling - e.g. new non-ops staff are added until ops is overwhelmed, and only then are ops staff added.
To mildly shamelessly plug my own employer, Rethink Priorities has been really focusing on offsetting some of these challenges, including doing things like:
- Having a pay system that doesn't discount ops work - ops staff are paid the same as other staff at the same title level
- Really emphasizing working at most 40 hours / week, and making it clear to people that if they are working more than 40 hours / week, it means we are understaffed and need to address something
- Investing in ops expansions prior to other expansions, so we have the bandwidth to grow, and slack in our operations in general
- Giving people a high amount of autonomy in their roles
- Focusing on providing professional development opportunities
So far, these have gone really well - we've had no turnover on our operations team, and the team consistently reports being quite happy in their roles. We are also hiring for a bunch of operations roles right now (https://forum.effectivealtruism.org/posts/of9qrfb5HQfwgj3Le/rethink-priorities-operations-team-is-expanding-we-re-hiring).
Sure thing! I am really excited about this position. I think the main motivation is that there are a lot of things where it seems like there ought to be summaries of the evidence for what the best practice is on an operational question, but there just isn't good information out there. So, we're hoping that some combination of literature review and self-experimentation can help us ensure we are operating efficiently and intelligently as we grow.
In response to your specific thoughts:
- I definitely think our exec teams work on these questions, but we'd like a deeper level of analysis than we typically have time for. I think one issue for our management team is that there are many competing and important demands on their time. So having someone specifically look into these questions from a research angle and making recommendations to our exec team seems useful.
- I think that the existing literature is often way too general to be applied to RP.
- E.g. a lot of the literature about hiring is not about specific roles, but about entire classes of work (e.g. "knowledge work"). I'd like to know how to best hire researchers for doing research on EA topics. The best way to figure that out seems to be just to look at our own practices and see what works and doesn't, and to do that systematically. I'm hoping we are now hiring for enough positions with regularity that we can have some power in these analyses.
- One issue we've run into is that operational research tends to be deeply mixed in with people's opinions about operations, and if those opinions don't align with our perspective or don't account for some particularity at our organization, the research doesn't end up being super useful. So having someone who understands our perspective / approach while looking at the literature or doing direct research seems really helpful.
- I think one important question for this role will be "how do we stay nimble/flexible as we grow?" I think RP has had a fairly strong attitude of not letting perfect be the enemy of the good in our organizational design, and this has served us really well, but it often means there is room for improvement. And we really aren't a static organization - we are growing and changing, so someone paying attention and ensuring that our operations are changing with the organization is really helpful. I definitely think concerns about breaking things that are already working are good ones, but I think there are many areas where the improvements to be made are substantial enough to spend a lot of resources on them.
We set the title level for the Special Projects Associate roles for a few reasons:
- We think that this could be a valuable way for people new to operations for EA organizations to gain skills.
- We think that generally these roles would be good learning opportunities for early career EAs to explore ops careers.
- These roles are fairly generalist.
I think it is likely that if someone came in who had a fairly deep background in operations relevant to these roles, we'd basically evaluate them for a different title level on an individual basis.
I think we'd also likely consider really strong candidates for Director-level and other roles on the Special Projects team, so only one application is needed for that team. For other positions at RP, we have different evaluation committees, so multiple applications would need to be submitted. We are happy to consider candidates for any number of roles they'd like to apply for across programs though!
Thanks! We are happy to be a good place to work and will keep that idea in mind for the future.
Sorry to callously steal your thunder Peter!
I know this question wasn't directed at me, but my impression was that we had a lot of people do the training and many also read the book, and most came away thinking that the training was not worth the time / covered a lot of the material in the book but in a less useful format.
That being said, I think it's possible that having all managers just being in a situation where they sit and think about good management practices for 3 days can be really helpful, even if the feeling of being there is negative / the training itself is bad, and I wouldn't be surprised if having a large number of people go through the training improved management at RP overall.
Yeah that makes sense to me - RP definitely is at an advantage in being able to recruit people interested in tons of different topics, and they might still be value aligned? I'd say that we've gotten some very good longtermism focused ops candidates, but maybe not proportional to the number of jobs in EA? Not sure though. I think remote work really factors heavily - most of the organizations mentioned in this thread as having open positions that they are struggling to fill aren't hiring remotely, and are just hiring in the Bay Area it looks like.
Looking at other comments here, it seems like more people share your thought. I think maybe the remote/non-remote line is still important. But given that other ops people perceive a bottleneck, I added a note to my answer that I don't think it's really accurate.
Yeah, I think it sounds like people are saying that there is a lack of executive-level talent, which makes sense and seems reasonable - if EA is growing, there are going to be more Executive-y jobs than people with that experience in EA already, so if value-alignment is critical, this will be an issue.
But, I guess to me, it seems odd to use "ops" to mostly refer to high-level roles at organizations / entrepreneurial opportunities, which aren't the vast majority of jobs that might traditionally be called ops jobs. I definitely don't think founding an organization is called Ops outside this context. Maybe the bottleneck is something more like founders/executives at EA orgs?
I think my experience is that finding really high quality junior ops folks like you describe is not that difficult (especially if we're willing to do some training), and that might hinge more on the remote factors I mentioned before, but I guess I totally buy that finding founders/execs is much harder.
I do think that ops skills matter for founding things, but also just having the foresight to hire ops-minded people early on is a pretty equivalent substitution. E.g. if I was running something like CE, I probably wouldn't look for ops related skills (but also I say all this as a person who founded an organization and is ops-inclined, so maybe my life experience speaks to something else?)
Edit: Given the other answers here it seems like there probably is a higher unmet demand for ops roles than I suggest here, so I don't think this comment should be the top answer here. I think my comments below might still be helpful for indicating why we and some other organizations have had less trouble hiring for ops than other organizations, but it seems like a bunch of groups are struggling to hire for ops.
I've hired operations people for EA-aligned organizations both during the period that 80,000 Hours had ops as a priority area and after.
Some quick thoughts:
- I've never perceived there to be a bottleneck in operations talent. I remember hiring for a role around 2018 that received probably 50+ applications that seemed worth at least looking at, and now we regularly receive 100+ for ops roles.
- My experience is that there are way more aligned and strong ops candidates than previously (e.g. in 2018 we'd probably have 2-3 highly skilled ops candidates per round, and now we have more like 10, though this is across two different organizations, so not a direct comparison).
- At the time they made ops a priority area, I was fairly surprised, as were several other people in ops I spoke to, because we had not had any trouble at all hiring ops people (my impression is now it's even easier, but it didn't feel like a bottleneck then either).
- The organizations I've hired for have been 100% remote. I think this is likely where the dividing line is between organizations that have trouble with hiring ops people, and those that don't.
- From the perspective of people considering EA careers during college, I think the non-remote ops jobs are pretty unappealing — they are relatively low salaried and status despite being mostly in some extremely expensive cities (e.g. San Francisco, London). If I was a college student considering in-person jobs, salaries for ops roles vs. technical roles in San Francisco would strongly bias me toward pursuing technical roles.
- Right now, I think remote organizations are in a way better market for EA-aligned talent. I'd guess in terms of EAs-per-job, the number is much lower in the Bay Area, Oxford, etc. vs outside those cities, and remote organizations can hire in the hub cities too. Plus, remote organizations can offer highly competitive salaries outside expensive cities without breaking the bank.
- I think it is fairly likely that this was made a priority area in the first place because of bottlenecks at some non-remote organizations or because of very high standards for value-alignment that might now be looser, but I am uncertain about this.
- Identifying talent for projects that haven't been started seems like a fundamentally different bottleneck than operations for existing projects.
I don’t know if I buy any specific theory of change as being particularly useful, but my impression is most people in the animal welfare world are working under something like scenarios 1, 3, or 4 on your list, but not in any deeper detail than you have here. It also doesn’t seem like you have to have a Theory of Victory if you think corporate campaigning is highly cost-effective and otherwise making progress on animal welfare issues is hard.
The closest thing I’ve seen to something explicit and detailed is DxE’s Roadmap to Animal Liberation - https://docs.google.com/document/d/1YN7KpuShiZItqVuQtWv6ykrjrNv6rAnmjVOcsofRj0I/
Here are roles Rethink Priorities has hired for since 2020. There hasn't been any real trend as far as I can see, except that my subjective impression is that the number of highly qualified applicants for research roles and operations roles is up, suggesting that it is getting harder to get a job at RP.
Our most competitive hiring round was for an Operations Associate a few months ago. Our researcher roles are in specific cause areas, so it's hard to compare directly to when we hired general researchers, but my impression is that they are up. We consistently get far fewer applications for management roles. For non-management roles, we still regularly get 60+ applications per offer we make.
The roles with * are ongoing hiring processes, so this is just my best guess at how many people we might end up hiring for each.
This potentially sounds useful, and I can definitely write about it at some point (though no promises on when just due to time constraints right now).
If you're donating on our website (https://rethinkpriorities.org/donate), on the second part of the donate form, you can add a comment. Just add a note there if you'd like us to restrict your gift to a specific pool - our finance team sees these notes.
If you're giving via another platform (EA Funds, a DAF, etc.) feel free to just email us at firstname.lastname@example.org and let us know!
Thanks for supporting us!
This is a little hard to tell, because often we receive a grant to do research, and the outcomes of that research might be relevant to the funder, but also broadly relevant to the EA community when published, etc.
But in terms of just pure contracted work, in 2021 so far, we've received around $1.06M of contracted work (compared to $4.667M in donations and grants, including multi-year grants), though much of the spending of that $1.06M will be in 2022.
In terms of expectations, I think that contracted work will likely grow as a percentage of our total revenue, but ideally we'd see growth in donations and grants too.
I appreciate it, but I want to emphasize that I think a lot of this boils down to careful planning and prep in advance, a really solid ops team all around, and a structure that lets operations operate a bit separately from research, so Peter and Marcus can really focus on scaling the research side of the organization / think about research impact a lot. I do agree that overall RP has been largely operationally successful, and that's probably helped us maintain a high quality of output as we grow.
I also think a huge part of RP's success has been Peter, Marcus, and other folks on the team being highly skilled at identifying low-hanging fruit in the EA research space, and just going out and doing that research.
So there are a bunch of questions in this, but I can answer some of the ops-related ones:
- We haven't had ops talent bottlenecks. We've had incredibly competitive operations hiring rounds (e.g. in our most recent hiring round, ~200 applications, of which ~150 were qualified at least on paper), and I'd guess that 80%+ of our finalists are at least familiar with EA (which I don't think is a necessary requirement, but it does suggest the explanation isn't that we're recruiting from a different pool).
- Maybe there was a bigger bottleneck in ~2018 and EA has grown a lot since or reached people with more ops skills since?
- We spend a lot of time and resources on recruiting, and advertise our jobs really widely, so maybe we are reaching a lot more potential candidates than some other organizations were?
- Management bottlenecks are probably our biggest current people-related constraint on growth (funding is a bigger constraint).
- We've worked a lot on addressing this over the summer, partially by having a huge internship program, which gave a lot of current staff management experience (while also working with awesome interns on cool projects!), and by sending anyone who wants it through basic management training.
- My impression is that we've gotten many more qualified applications in recent manager hiring pools.
- Bypassing bottlenecks
- In general, I think we haven't experienced these as much as other groups (at least so far)
- We tend to hire ops staff prior to growth, as opposed to hiring them when we need them to take on work immediately (e.g. we hire ops staff when things are fine, but we plan to grow in a few months, so the infrastructure can be in place for expansion, as opposed to hiring ops staff when the current ops staff has too much on their plate, or something).
- We do a ton of prep to ensure that we are careful while scaling, thinking about how processes would scale, etc.
- The above mentioned intern program really stress-tested a lot of processes (we doubled in size for 3 months), and has been really helpful for addressing issues that come with scaling.
- Downsides to hiring quickly
- I'd say that we've seen a mild amount of the downsides of growing in general, though it hasn't necessarily been related to speed of hiring - e.g. mildly more siloing of people, people not being sure what other people are working on, etc. - and we've been taking a lot of steps to try to mitigate this, especially as we get larger.
It's a little hard to say because we don't necessarily know the background / interests of all donors, but my current guess is around 2%-5% in 2021 so far. It's varied by year (we've received big grants from non-EA sources in the past). So far, it is almost always to support animal welfare research (or unrestricted, but from a group motivated to support us due to our animal welfare research).
One tricky part of separating this out - there are a lot of people in the animal welfare community who are interested in impact (in an EA sense), but maybe not interested in non-animal EA things.
This is correct - the RFMF is how much we think we'd like to raise between now and the end of 2022 to spend in 2022 and 2023 according to the budgets above.
Edit: This looks like it may be wrong - the oldest reference I found on the EA Forum to it is explicitly the biology one: https://forum.effectivealtruism.org/posts/WAhFnueRgHkAf8KHc/making-ea-groups-more-welcoming.
My guess would be that people have accidentally swapped "founder's syndrome" with "founder effects." Founder's syndrome is widely used outside EA to refer to the things people are talking about: https://en.wikipedia.org/wiki/Founder's_syndrome. EA seems to use it to refer to a wider range of things, but this seems more likely than people intentionally applying founder effects from bio, since the meaning of founder effects in bio is pretty different and very specific.
It seems pretty bizarre to me to say that these historical examples are not at all relevant for evaluating present-day social movements. I think it's incredibly important that socialists, for example, reflect on why various historical actors and states acting in the name of socialism caused mass death and suffering, and likewise important for any social movement to look at its past mistakes and harms and try to reevaluate its goals in light of them.
To me, the examples you give just emphasize the post's point - I think it would be hard to find someone who has done a lot of thinking on socialist topics who believed that no lessons should be drawn, or no beliefs changed, after the human rights abuses in the Soviet Union were revealed. And if someone didn't think there were lessons there for how to approach making the world better today, that would seem completely unreasonable.
I also don't think the original post was asking longtermist orgs to make blog posts calling for action on diversity, equity, and inclusion. I think it was doing something more like asking longtermists to genuinely reflect on whether or not unsavory aspects of the intellectual movement's history are shaping the space today, etc.
It’s definitely the case that we can hire people in most countries (though some countries have additional considerations we have to account for, like whether the person has working hours that will overlap with their manager, some financial / logistical constraints, etc), and we are happy to review any candidate’s specific questions about their particular location on a case by case basis if folks want to reach out to email@example.com. For reference, we currently have staff in the US, Canada, Mexico, Spain, UK, Switzerland, Germany, and New Zealand.