What to do with people?
post by Jan_Kulveit
Hierarchical networked structure
How is this different
What we already have and what we should do
How is this decision relevant
Remarks and discussion
I would like to offer one possible answer to an ongoing discussion in the effective altruism community, centered on the question of a scalable use of people (“Task Y”).
The following part of the 80,000 Hours podcast with Nick Beckstead is a succinct introduction to the problem (as emphasized by alxjrl):
Nick Beckstead: (… ) I guess, the way I see it right now is this community doesn’t have currently a scalable use of a lot of people. There’s some groups that have found efficient scalable uses of a lot of people, and they’re using them in different ways.
For example, if you look at something like Teach for America, they identified an area where, “Man, we could really use tons and tons of talented people. We’ll train them up in a specific problem, improving the US education system. Then, we’ll get tons of them to do that. Various of them will keep working on that. Some of them will understand the problems the US education system faces, and fix some of its policy aspects.” That’s very much a scalable use of people. It’s a very clear instruction, and a way that there’s an obvious role for everyone.
I think, the Effective Altruist Community doesn’t have a scalable use of a lot of its highest value … There’s not really a scalable way to accomplish a lot of these highest valued objectives that’s standardised like that. The closest thing we have to that right now is you can earn to give and you can donate to any of the causes that are most favored by the Effective Altruist Community. I would feel like the mass movement version of it would be more compelling if we’d have in mind a really efficient and valuable scalable use of people, which I think is something we’ve figured out less.
I guess what I would say is right now, I think we should figure out how to productively use all of the people who are interested in doing as much good as they can, and focus on filling a lot of higher value roles that we can think of that aren’t always so standardised or something. We don’t need 2000 people to be working on AI strategy, or should be working on technical AI safety exactly. I would focus more on figuring out how we can best use the people that we have right now.
Relevant posts and discussions on the topic can be found under several posts on the forum.
Hierarchical networked structure
The answer I’d like to offer is abstract, but general and scalable: “build a hierarchical networked structure”, for lack of a better name. It is best understood as a mild shift of attitude, a concept on a similar level of generality as “prioritization” or “crucial considerations”.
The hierarchical structure can be in physical space, functional space or research space.
An example of a hierarchy in physical space could be the structure of local effective altruism groups: it is hard to coordinate an unstructured group of ten thousand people. It is less hard, but still difficult, to coordinate a structure of 200 “local groups” with widely different sizes, cultures and memberships. The optimal solution is likely to coordinate something like 5-25 “regional” coordinators/hub leaders, who then coordinate with the local groups. The underlying theoretical reasons for such a structure are simple considerations like “network distance” or “bandwidth constraints”.
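The bandwidth argument can be made concrete with a toy calculation. This is only an illustrative sketch: the specific numbers (15 hubs, 13 groups per hub) are hypothetical, chosen to fall inside the 5-25 range from the example above.

```python
def two_layer_stats(total_people, n_hubs, groups_per_hub):
    """Fan-out and worst-case hop count for a two-layer coordination tree.

    Fan-out: the largest number of direct links any single node must maintain
    (the "bandwidth constraint" on that node).
    Hops: coordinator -> hub leader -> group organizer -> member
    (the "network distance" to any member).
    """
    group_size = total_people / (n_hubs * groups_per_hub)
    max_fanout = max(n_hubs, groups_per_hub, group_size)
    return max_fanout, 3

# Flat structure: one coordinator holds 10,000 direct links, one hop away.
flat_fanout, flat_hops = 10_000, 1

# Two-layer structure with illustrative numbers: 15 hubs, 13 groups per hub.
fanout, hops = two_layer_stats(10_000, 15, 13)
print(fanout, hops)  # roughly 51 links at the widest node, 3 hops
```

The point of the sketch: adding one layer collapses the worst-case fan-out from 10,000 links to around 50, at the cost of only two extra hops of network distance.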
A hierarchy in functional space could be, for example, a hierarchy of organizations and projects providing career advice. It is difficult to give personalized career advice to tens of thousands of people as a small and lean organization. A scalable hierarchical version of career advice might look like this: based on a general request, a student considering future study plans is redirected to e.g. Effective Thesis, which specializes in this problem. The student is then connected with a specialist coach with object-level knowledge. My guess is that such a hierarchical structure could scale approximately 100x further than a single organization focused just on picking the few most impactful people.
A hierarchy in research space could be a structure of groups working on various sub-problems and sub-questions. For example, part of the answer to the question “how to influence the long-term future” depends on the extent to which the world is chaotic, random, or predictable. It would be great to have a group of people working on this. There are thousands of relevant questions and tens of thousands of sub-questions which should be studied from an effective altruist perspective.
In general, hierarchical networked structures are how complex functional systems are organized and how they scale. A closely related concept is “modular decomposition”.
Why networked? I want to point toward the network properties of these structures. Some crucial properties of complex systems can be understood using concepts from network science: the average and maximal distance between nodes in the network, the “bandwidth” of links, the mechanisms for new link creation, and similar.
Why structures? To put the structural aspects in focus. The word “hierarchy” has many other meanings and connotations, such as status hierarchy or a top-down, command-and-control style of management, which I do not want to recommend.
How is this different
It may be helpful to contrast creating hierarchical structure with other organizational principles.
Effective altruism has at its heart the principle of prioritization: where pure hierarchization tells you to decompose the whole into subparts and assign someone to deal with each part, pure prioritization tells you to select just the best action and assign just the best person to do it. Taken to the extreme, prioritization leads to recipes like “find the brightest prodigy and have him or her work on the most important problem in AI safety”. Taken to the extreme, hierarchization leads people to work on obscure questions.
Do not get me wrong: prioritization is a great principle, but I would suggest effective altruism should use hierarchization more than it does.
Another competing (self-)organizational principle is homophily, that is, people’s tendency to form ties with people who are similar to themselves. Where hierarchization leads to different levels of specialization, homophily leads to homogenous clusters of people. Starting with several Oxford utilitarian philosophers, you attract more Oxford utilitarian philosophers (the so-called founder effect). Good ML researchers are more likely to know other good ML researchers. People critical of EA’s organizational landscape will more likely talk to other people dissatisfied with the same problems.
Homophily is in general neither good nor bad - in some ways, it provides immense benefits to the movement (like: we want smart altruistic people). But from a structural perspective, it also has significant drawbacks.
Taken together, prioritization and homophily lead to problems. For example, suppose there is a pool of several hundred EAs who are in some ways quite similar: elite university education, good analytic thinking, concern about the long-term future, a focus mainly on high-impact jobs, and little practical experience in project management, technical disciplines, grant-making, and many other more specialized skills. All of them prioritize their career options, and all of them apply to the research analyst role at OpenPhil. At the same time, despite this pool of talent, organizations have trouble finding people who fit specific roles, and there is always much more work than people.
I hope you have the general direction now. If not, there is related background reading that provides more context.
While it may be more difficult to turn an answer of the form “go and build a hierarchical networked structure” into action than, say, “go and teach”, I’m optimistic that the current effective altruism community is competent enough to use such high-level principles. Moreover, it is not necessary for everyone to work on “structure building”; many people would simply “fit into the structure”.
I would expect that a lot would be achievable just by a change of attitude in this direction, both among the talented EAs, and among the movement leaders.
By a rough estimate, for some EA jobs literally years of work are spent in aggregate by talented people just competing for the positions. I’m confident that similar effort directed toward figuring out what hierarchical structures we need would lead to at least some good plans, and that thinking about where one can fit in the structure could lead more people to do useful work.
Note: this requires actual, real, intellectual work. There aren’t any ready-made recipes, or lists of what structures to create, network maps, or similar resources.
What we already have and what we should do
To some extent, hierarchies emerge naturally. Of the examples described above, the local effective altruism group structure would likely develop toward a two-layer hierarchy even without much planning. In the research domain, we can see the gradual development of more specialized sub-groups, such as the Center for the Governance of AI within FHI.
What I’m trying to say is that hierarchical structures can be grown more deliberately, and can productively use people.
How is this decision relevant
If the above still sounds very theoretical, I’ll try to illustrate the possible shift of attitude with several examples.
Let’s say you are one of the hundreds of EAs applying for jobs: good university education, good analytical skills, a focus on the long-term future, looking mainly for high-impact jobs. Looking at your situation mainly with the “prioritization” attitude, you can easily arrive at the conclusion that some of your best career options are, for example, a research analyst job at OpenPhil, research-management roles at FHI, CHAI, or BERI, or various positions at CEA. Perhaps less attractive are jobs at, for example, GiveWell.
What happens if you put on your “build hierarchical networked structure” hat? You pick, for example, “effective altruism movement building” as an area/task (it is likely somewhere near the top of the prioritization). In the next step, you attempt a hierarchical “decomposition” of the area. You can get started just by looking at past and present internal structures of CEA, with sub-groups or sub-tasks like Events, Grants or Groups. Each of these “parts” usually needs theoretical work, research and development, and execution and ops. After a bit of looking around, you may find, for example, that there are just a few people systematically trying to create amazing events. There are opportunities to practice: CFAR is often open to ops volunteers, as is EAG; you may run an event for your group, or create some new event which would be useful for the broader community. All of this is impactful work, if not an impactful job. Or you may find out there isn’t anyone working specifically on research into EA events. By that, I mean questions like: “How do events lead to impact? How can we measure it? Are there characteristic patterns in how people meet each other? What are the relevant non-EA reference classes for various EA events?” When you try to work on this you may find out it depends on specific skills, or requires contact with people working on events, so it may be less tractable, but it is still worth trying. I would also expect good work on this topic to have impact, attract attention, and possibly funding.
While I picked examples from the “EA movement building” cause area, which can ultimately lead to working in effective altruism professionally, that’s not the point. In other cause areas the “build hierarchical networked structure” attitude can lead to work that doesn’t have the EA label in its name at all, yet is still quite impactful. We need EA experts and professionals in many fields. Also, often the most impactful action may be not doing something directly, but creating a structure, or optimizing some network. A short example: x-risk seems to be a neglected consideration in most of the economics literature. One good option could be to pursue an academic career and work on the topic. Possibly an even better option is to somehow link researchers in academia who are already thinking about these topics in different institutions, e.g. by organizing a seminar.
What can the shift look like for someone in a central position? One change could be described as matching “2nd best options” and “3rd best options” with people. Delegating. Supporting the growth of more specialized efforts.
What good practice may look like: the Center for the Governance of AI has an extensive research agenda. Obviously the core researchers in the institution should focus on top-priority problems, but as some of the sub-problems are still quite important, it may make sense to encourage others to work on them. How may this happen in practice? For example, via the research affiliates program, or by having AI Safety Camp participants work on the topics.
Another example: let’s say you are 80,000 Hours, an effective altruist organization trying to help people have impact with their careers. You prioritize moving ML PhDs into AI safety and impressive policy people into the governance of AI. At the same time, you are running the currently largest EA mass outreach project. The unfortunate result is that almost all the people interested in having impactful careers have to rely just on the website, and only a tiny fraction gets personal support.
What might a hierarchical networked structure approach look like? For example, distilling the coaching knowledge and creating a guide for professional EA group organizers to provide coaching to a less exclusive group of effective altruists. There are now dozens of professional EA community builders, and EA career coaching is part of their daily jobs, yet insofar as there is more knowledge than what is on the website, they are mostly left to rediscover it.
How can the shift look for someone working in the funding part of the ecosystem? One obvious way is to encourage re-granting. This is happening to some extent: it likely does not make sense for OpenPhil to evaluate $10,000 grant applications, so such projects are a better fit for EA Grants. Yet there are impactful small things which are so small that it does not make sense to evaluate them even as EA Grants, and which could be supported e.g. by community builders in larger EA groups.
Another opportunity for networked hierarchical structures is in project evaluations and talent scouting. Instead of relying mainly on informal personal networks of grant evaluators, there could be more formal structures of trusted experts.
It is possible that some important tasks are not decomposable in a way that would make delegating them to hierarchical structures work well.
- While this is a question of active theoretical research, it seems clear that many important practical problems are decomposable.
Hierarchical structures composed of a large number of people have significant inertia, and when they gain momentum, it may be hard to steer them. (Think about bureaucracies.)
- I agree this is true, but in my view it would be good for some parts of the effective altruism movement to have more of this property. It seems to me that in the current state too many EAs are too “fluid”, willing to change plans often based on the latest prioritization results or 80,000 Hours posts (e.g. someone switching from a research career to earning to give, then switching back to the study of x-risks, then considering ops roles, etc.).
- Also, I would consider it a good result if the “trail” behind the core of the effective altruism movement were dotted with structures and organizations working on highly impactful problems, even if those problems are no longer in first place in current prioritization.
It is difficult to create such structures and very few people have the relevant skills.
- I’m generally sceptical of such arguments. The effective altruism movement managed to gather an impressively competent group of people, and many of the “new” EAs do not seem to be less competent than the “old” EAs who built the existing structures. For example, I would expect the current community to contain a number of people as generally competent as Robert Wiblin or Nick Beckstead, which makes me optimistic about the structures they would create.
Remarks and discussion
The above is a rough sketch, pointing to one possible direction in which more people could do as much good as possible. It is not intended as a suggestion for scaling effective altruism to truly mass proportions, let alone hundreds of millions of people. But that is also not the situation we are in: the reality is that effective altruism currently does not know how to utilize even thousands of people, apart from earning to give. My hope is that a shift toward building hierarchical networked structures would help.
A big weakness of this set of ideas is that it is likely not memetically fit in its present form. “Building hierarchical networked structure” is a bad name, and this post isn’t a nice one-paragraph introduction. Just finding a better name could be a big improvement (for various reasons, this is also hard for me; I would really appreciate suggestions).
I would like to thank many EAs for comments and discussions on the topic.
Comments sorted by top scores.
comment by Peter Wildeford (Peter_Hurford) ·
2019-03-06T16:25:16.400Z
"Earning to give" feels like a pretty endlessly scalable use of people. What do you think?
↑ comment by Jan_Kulveit ·
2019-03-06T17:50:24.808Z
As a whole, I think effective altruism is currently more structurally bottlenecked and network bottlenecked than funding bottlenecked. Improving the structural and networking constraints is higher leverage than adding more money to the system (on average). Which is not to say increasing funding is not valuable; I would expect this to depend a lot on individual circumstances.
If you look at the funding bottlenecks, they seem to be mostly the result of the structure of the funding rather than its aggregate sum: imagine a counterfactual world in which OpenPhil has $1b more than it has. How many more effective actions would you expect to see in the world?
So in funding, too, we need more structures. EtG is highly impactful insofar as the giving is smart, with money directed toward alleviating the structural constraints of funding. I don't think this scales "endlessly".
Another point is that, from a global perspective, EtG makes more sense in some places than others. For example, a philosophy postdoc in Prague can earn as little as £1100/m. Should such a person drop an academic career and do EtG? Almost certainly not. What about EAs in India? They likely have very different comparative advantages than EtG.
↑ comment by Peter Wildeford (Peter_Hurford) ·
2019-03-07T00:39:18.605Z
I think the framework of "try to figure out what EA most needs and do that" could be helpful, but can go wrong if over-applied. Personal fit is important. Comparative advantage is important. Spreading out talent is important too. If our movement were 100% EtG, that would be really bad. But if you're an EA having trouble figuring out what to do, and you can't get an EA job or enter some flashy academic field, doing EtG is a lot better than just feeling dejected. But the message I hear from EA has not always been in line with that.
↑ comment by Vaidehi Agarwalla (vaidehi_agarwalla) ·
2019-03-08T04:47:21.731Z
I think Jan's point actually solves the problem of people resorting to EtG because they feel dejected. I assume these people are bright, talented and have a lot to contribute (implied by the recent EA jobs post). This method encourages people to contribute to fields they are interested in, in perhaps more unorthodox ways, which has important effects on the movement: preventing drift, allowing for innovation (I favor any structure which encourages startup-like activity, especially in a movement like EA where there is a good track record of following through, developing, and working collaboratively), creating a more thoughtful and well-informed community, and allowing for more experiential/experimental learning and exposure to someone's Cause X. These effects are, unfortunately, fuzzy and hard to measure, but ultimately I think they matter a lot for the movement. Someone who is risk-averse and concerned with the value of their efforts could pair this with another strategy like EtG. I think we should consider different types of jobs/ways to make impact as percentages: you may begin your career spending 10% of your resources (time, energy, money) on EtG, 70% on skills-building (getting a PhD, working) and 20% on movement building; 10 years later you might change to 30% EtG, 40% volunteering for a Cause X project and 30% movement building.
↑ comment by Jon_Behar ·
2019-03-06T17:07:57.113Z
Agree this is scalable, as long as people aren’t purely trying to maximize income/giving capacity which I don’t think is sustainable. (I’ve done quantitative finance while passionate about that work, and I’ve done it when I wasn’t passionate about it; the former is WAY easier). I’d love to see more early career EAs pursue work that they’re interested in and donate effectively while building skills, networks, etc.
↑ comment by Raemon ·
2019-03-06T19:34:01.891Z
The main thing with scaling Earning to Give is that eventually you have to give up on any clear definition of "effective." Part of the appeal of early-days Earning to Give was that it was so simple: make money, give 10%, choose from a relatively short list of charities.
My sense is that the "well vetted" charities can only handle a few hundred million a year, and the "weird plausibly good unvetted charities that don't easily fit into any EA frameworks" can also only handle a few hundred million a year, and after that... I dunno, you're back to basically just donating anywhere that seems remotely plausible.
Which... maybe is actually the correct place for EA to go. But it's important to note that it might go in that direction.
(Relatedly, I used to have some implicit belief that EA was better than the Gates Foundation, but nowadays, apart from EA taking X-risk and a few other weird beliefs seriously, EA seems to do basically the same things the Gates Foundation does, and the Gates Foundation is just what it looks like when you scale up by a factor of 10)
↑ comment by sapphire (deluks917) ·
2019-03-06T19:52:53.613Z
I think we are pretty far from exhausting all the good giving opportunities. And even if all the highly effective charities are filled, something like GiveDirectly can be scaled up. It is possible that in the future we will eventually get to the point where there are so few people in poverty that cash transfers are ineffective. But if that happens there is nothing to be sad about. The marginal value of donations will go down as more money flows into EA. That is an argument for giving more now. A future where marginal EA donations are ineffective is a very good future.
↑ comment by Peter Wildeford (Peter_Hurford) ·
2019-03-07T00:36:56.703Z
Yeah, GiveDirectly feels like the kind of thing that could take hundreds of millions or billions of dollars. If we ever do run out of funding opportunities, which I don't think we will any time soon, that's a really good problem to have.
↑ comment by Jon_Behar ·
2019-03-07T15:21:42.289Z
GiveWell also recently announced they are doubling the size of their research team, which will presumably uncover even more giving opportunities that can absorb a lot of funding.
↑ comment by Raemon ·
2019-03-06T19:57:00.979Z
Nod. My comment wasn't intended as an argument against, so much as "make sure you understand that this is the world you're building" (and that, accordingly, your arguments and language don't depend on the old world).
The traditional EA mindset is something like "find the charities with the heavy tails on the power law distribution."
The Agora mindset (Agora was an org I worked at for a bit, which evolved sort of in parallel to EA) was instead "find a way to cut out the bottom 50% of charities and focus on the top 50%", which at the time I chafed at but now appreciate better as the sort of thing you automatically deal with when you're trying to build something that scales.
I do think we're *already quite close* to the point where that phase transition needs to happen. (I think people who are very thoughtful about their donations can still do much better than "top 50%", but "be very thoughtful" isn't a part of the thing that scales easily)
comment by Vaidehi Agarwalla (vaidehi_agarwalla) ·
2022-01-04T20:32:47.020Z
This was one of many posts I read as I was first getting into meta EA that was pretty influential on how I think about things. It was useful in a few different ways:
1. Contextualising a lot of the other posts that were published around the same time, written in response to the "It's hard to get an EA job" post.
2. Providing a concrete model of action with lots of concrete examples of how to implement a hierarchical structure
3. I've seen the basic argument for more management made many times over the last few years in various specific contexts. We seem to be taking steps towards this structure within meta EA and specific causes.
"Another example: let’s say you are 80,000 Hours, an effective altruist organization trying to help people have impact with their careers. You prioritize moving ML PhDs into AI safety and impressive policy people into the governance of AI. At the same time, you are running the currently largest EA mass outreach project. The unfortunate result is that almost all the people interested in having impactful careers have to rely just on the website, and only a tiny fraction gets personal support.
What might a hierarchical networked structure approach look like? For example, distilling the coaching knowledge and creating a guide for professional EA group organizers to provide coaching to a less exclusive group of effective altruists. There are now dozens of professional EA community builders, and EA career coaching is part of their daily jobs, yet insofar as there is more knowledge than what is on the website, they are mostly left to rediscover it."
4. Although the quote above is an illustrative example (and was being discussed by many other posts at the time), I think the framing of it was particularly useful. Arguments like this were one of several factors that led to me starting the Local Career Advice Network, where we worked on compiling current best knowledge on career advice, especially career 1-1s, as well as exploring ways for organizers to develop more localized group resources.
5. Overall, I think I would have liked to see more development of this concept and more applications to concrete situations. In general it seems like we need to be thinking more systematically about building EA infrastructure, but this is slow moving because coordination is hard.
comment by Raemon ·
2019-03-06T19:43:19.069Z
I'm generally sold on the "you need more hierarchical networks" to get real things done (and even more on the more general claim that you need to expand the network in some way, hierarchical or not).
But, interestingly, the bottleneck on fixing the lack of scalable hierarchical network structures is... still the lack of hierarchical network structure. Identifying the problem doesn't make it go away.
I think most orgs seem to be doing at least a reasonable job of focusing on building out their infrastructure, it's just that they're at the early stages of doing so and it's a necessarily slow process. Scaling too quickly kills organizations. Hierarchy works best when you know exactly what to do, and runs the risk of being too inflexible.
(If you run an org, and aren't already thinking about how to build better infrastructure that expands the surface area of the network, I do think you should spend a fair bit of time thinking about that)
↑ comment by Raemon ·
2019-03-06T19:45:31.322Z
FYI, the LessWrong team's take on this underlying problem is "find ways to make intellectual progress in a decentralized fashion, even if it's less efficient than it'd be in a tight knit organization."
The new Questions feature and the upcoming improvements to it are meant to provide a way for the community to keep track of its collective research agenda, and to allow people to identify important unsolved problems and solve them.
↑ comment by Raemon ·
2019-03-06T19:51:45.326Z
A particular risk here, is that coordination is one of the most costly things to fail at.
I'm happy to encourage new EAs to tackle a random research project, or to attempt the sort of charity entrepreneurship that, well, Charity Entrepreneurship seems to encourage.
I'm much more cautious about encouraging people to build infrastructure for the EA community if it only works when it is both high quality and everyone gets on board with it at the same time. In particular, it seems like people are too prone to focus on the second part.
Every time you try to coordinate on a piece of changing infrastructure and the project flops, it makes people less enthusiastic to try the next piece of coordination infrastructure (and I think there's a variation on this for hierarchical leadership).
But I'm fairly excited about things like AI Safety camp, i.e. building new hubs of infrastructure that other existing infrastructure doesn't rely on until it's been vetted.
(It's still important to make sure something like AI Safety Camp is done well, because if it's done poorly at scale it can result in a confusing morass of training tools of questionable quality. This is not a warning not to try it, just to be careful when you do)
↑ comment by Jan_Kulveit ·
2019-03-06T20:33:22.287Z
I agree some orgs are possibly close to the margin on how fast you can grow, but my view is we are way below that on the movement level.
One reason for that belief is just looking at the amount of effort going in that direction. If you compare how much work and attention is spent on thinking about structure with the amount of work spent collectively on, for example, "selecting people for jobs", my impression is there is a difference of an order of magnitude. So while you may be right that more effort would not help, in my view we are "not actually trying" (with some positive exceptions).
Another reason is looking at cheap actions people or orgs can take (as with the coaching know-how transfer) which are not taken.
comment by Aaron Gertler (aarongertler) ·
2019-03-06T23:18:41.476Z
Upvoted. This is a good summary of several different community structures, and you represent the strengths of hierarchy well (though I wish there'd been examples of what an individual's "journey through the hierarchy process" might look like).
I think of EA as being fairly hierarchical already. There are dozens of different organizations geared toward different task/cause combinations; if you tell me you're a person with X experience who wants to do work of type Y in country Z, there's a good chance an organization exists for that, at least for the most common causes/most EA-populated countries.
There's also a reasonably large population of people within EA who can offer suggestions if you ask "what should I do next?" I sometimes see questions like that on Facebook or in CEA surveys (though not often on the Forum, yet), and I try to advise where I can. 80,000 Hours may not have the resources to coach hundreds of additional people, but I'd hope that people in the informal EA community (at least those who are pretty familiar with the landscape) would spend time giving advice.
Perhaps some of the available resources [? · GW] for finding one's next move aren't well-known enough? If anyone reading this found themselves in a place where they wanted to do EA work, didn't know what to do next, and didn't find a good way to learn about their options, I'd appreciate hearing your story!
Regarding this quote:
I would consider it a good result if the “trail” behind the core of effective altruism movement was dotted with structures and organizations working on highly impactful problems, even if the problems are no longer on the exactly first place in current prioritization.
This seems... kind of true already? It's hard to say what "current prioritization" entails, since "global health and poverty" gets more money from the EA community and has more open jobs than any other major area, while some of the largest EA organizations are more focused on long-term work. But since most people think of long-term future work as the "current" priority, I'll use global poverty as an example.
There are plenty of active, even thriving organizations working on global health/poverty that have strong connections to EA. 80,000 Hours' job board lists dozens of positions in the area, and Tom Wein's collection of job boards features hundreds more (not all of those jobs are necessarily "EA-aligned", but many are, and even organizations that aren't maximally effective may offer great learning opportunities and much higher impact than an "average" job).
A job at the United Nations (there are a ton of those at the first non-80K link I clicked on from Tom Wein's list) may not have the same kind of prestige as an Open Phil job, but I still can't imagine meeting someone who works for the UN at EA Global and not having (a) a bunch of questions I'm eager to ask them, and (b) respect for their perspective on global development causes.
Jan: How might the "trail" you envision look different than what we have now? Is there some cause you're thinking of that doesn't have good organizations/structures because it is "no longer first place"? (If the argument was "there should be more orgs working on promising-but-seldom-prioritized topics like mental health", I think I'd be more in agreement.)
Also, this looks like a typo:
For example, part of the answers to the question “how to influence the long-term future” depend on the extent to which the world is world, or random, or predictable.
comment by Moses ·
2019-03-06T21:18:36.708Z · EA(p) · GW(p)
I feel you could come to the same conclusions/prescriptions with a much simpler underlying framework:
In order to utilize human effort, someone must come up with some valuable activity to pipe that effort into. A manager/employer, roughly speaking.
Some people manage/employ themselves; they find something to pipe their efforts into on their own. Maybe they start a project, a charity, a startup, organize a local group or an event, what have you.
Some people are even willing to manage/employ other people: they come up with so many ideas of what to do that they can keep multiple people busy.
Other people require external management/employment; they look for pre-defined jobs to slot themselves into.
[Rest of comment edited for clarity:]
The practical suggestions seem to fall into two categories:
"Be more self-managing, stop looking for a job and come up with your own idea what you can do"—e.g., organize events, do research on your own.
"Delegate"—e.g. distill the 80k know-how and delegate coaching. But the people at 80k don't have the time to actively orchestrate this. Again, there will need to be people who actively step up and make this happen.
So I think you could take out all the hierarchy stuff, radically simplifying the idea, and still make roughly the same suggestions:
Stop looking for other people to manage you. If you show up looking for a job, requiring management from other people who are already busy managing themselves or others, you're adding to their burden, not easing it. The high-profile EA orgs are not bottlenecked on "structure" or "network"; they're bottlenecked because there's a hundred people requiring management for every one person willing to manage others. Create your own research agenda, start your own EA org, organize your own event, find out on your own how some aspect of the EA community could be improved, propose a solution, implement it.
Replies from: lexande, Greg_Colbourn, Jan_Kulveit
↑ comment by Jan_Kulveit ·
2019-03-07T12:32:59.515Z · EA(p) · GW(p)
In my view, without all the hierarchy stuff it is harder to see what to create, start, manage, or delegate. I would be significantly more worried about the meme "just go & do things & manage others" spreading than about the meme "figure out how to grow the structure".
comment by andavargas ·
2019-03-06T16:51:11.044Z · EA(p) · GW(p)
Replies from: Jan_Kulveit, Raemon
↑ comment by Jan_Kulveit ·
2019-03-07T00:56:24.253Z · EA(p) · GW(p)
I like the second link. From a network scientist's perspective, one way to model such structures is with overlapping hierarchical stochastic block models, or "community structure" more generally. (Alexander's essay predates network science by several decades.)
This also makes "partonomy" and "mereonomy" possibly problematic labels (because they suggest tree structures).
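A minimal sketch of the kind of generative model Jan mentions. This is my own illustration, not from the thread: it uses a flat (non-overlapping, non-hierarchical) stochastic block model, and the block sizes and edge probabilities are made-up numbers.

```python
import numpy as np

# Toy stochastic block model: nodes fall into blocks ("communities"),
# and edges are dense within a block, sparse between blocks.
rng = np.random.default_rng(0)

n_blocks, block_size = 4, 25          # 4 communities of 25 nodes each
n = n_blocks * block_size
p_in, p_out = 0.3, 0.02               # dense inside blocks, sparse between

block = np.repeat(np.arange(n_blocks), block_size)   # node -> block label
same = block[:, None] == block[None, :]              # same-block mask
probs = np.where(same, p_in, p_out)                  # per-pair edge probability

# Sample a symmetric adjacency matrix with no self-loops
upper = np.triu(rng.random((n, n)) < probs, k=1)
adj = upper | upper.T

# Edges end up far more likely within a block than between blocks
density_in = adj[same].mean()
density_out = adj[~same].mean()
print(density_in > density_out)   # True
```

The overlapping/hierarchical variants (e.g. as implemented in graph-tool) generalize this by letting nodes belong to several blocks and letting blocks nest, which is closer to Alexander's "city is not a tree" picture.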
↑ comment by Raemon ·
2019-03-06T22:28:47.689Z · EA(p) · GW(p)
I was very interested in the "city is not a tree" post, but found it juuust confusing/dense enough to bounce off of it. I'd be interested in a link-post or comment that summarizes the key insights there in layman's terms.
Replies from: Jan_Kulveit
↑ comment by Jan_Kulveit ·
2019-03-07T21:25:38.288Z · EA(p) · GW(p)
My rough understanding:
To some extent the ideas now seem to be "in the water". The maths part is something now developed further under the study of complex networks. Alexander's general ideas about design inspired people to create wikis, the patterns movement in software, to some extent object-oriented programming and extreme programming, and some urbanists... which motivated me to read more of him.
(Btw, in another response here I pointed to Wikipedia as a project with some interesting social technology behind it. So it's probably worth noting that a lot of that social technology was originally created/thought about at wikis like Meatball and the original WikiWiki by Ward Cunningham, who was in turn inspired by Alexander.)
comment by Chris Leong (casebash) ·
2019-03-06T21:49:02.890Z · EA(p) · GW(p)
This is an interesting idea, but I'm skeptical, as I think it underestimates the difficulties of coordination. GiveWell has had difficulty with volunteers due to unreliability. Another datapoint is the shift at .IMPACT (now Rethink Charity) from relying on volunteers to relying on paid staff. Volunteer hierarchical organisations will be hit by these issues doubly hard, as they rely on volunteers for both management and object-level work. I would love to be proven wrong, though.
Replies from: Jan_Kulveit
↑ comment by Jan_Kulveit ·
2019-03-06T22:45:57.717Z · EA(p) · GW(p)
Counting datapoints: for some time I worked on "translating" the community structure behind Wikipedia from the en: version to cz:, co-founded the Czech Wikimedia chapter, and led CZEA.
Wikipedia was created almost entirely by a hierarchical volunteer structure, so you can take it as some sort of existence proof. (IMO, possibly the most impressive thing about Wikipedia is the social technology behind it, and how deliberate it is.)
I agree that fully volunteer organisations are difficult, but something like 5 volunteers / 1 staffer is much easier. What also makes a large difference is volunteers working physically together vs. distributed online work. And what is actually important is not so much whether people are getting paid, but how large a fraction of attention/time someone can put into coordination. (I.e., a group of 10 volunteers where every member puts in 0.1 FTE is much less effective than a group of 6 volunteers on 0.1 FTE plus one coordinator on 0.4 FTE.)
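Jan's FTE comparison can be turned into a back-of-the-envelope toy model. This is entirely my own illustration, not from the comment: it assumes each direct coordination link costs a worker a fixed (invented) slice of FTE, and that a dedicated coordinator turns the all-pairs pattern into hub-and-spoke.

```python
# Toy model: both setups total 1.0 FTE, but coordination overhead differs.
# The per-link cost below is a made-up number for illustration.
PAIR_COST = 0.005  # FTE lost per worker per direct coordination link

def effective_output(worker_ftes, links_per_worker):
    """Object-level output: working FTE minus coordination overhead.
    A dedicated coordinator does no object-level work, so their FTE
    is simply not counted here."""
    overhead = PAIR_COST * sum(links_per_worker)
    return sum(worker_ftes) - overhead

# 10 volunteers at 0.1 FTE each, all coordinating pairwise (9 links each)
flat = effective_output([0.1] * 10, [9] * 10)

# 6 volunteers at 0.1 FTE, each talking only to a 0.4 FTE coordinator
hub = effective_output([0.1] * 6, [1] * 6)

print(round(flat, 2), round(hub, 2))  # 0.55 0.57
```

Under these (arbitrary) numbers the hub-and-spoke group produces more object-level work from the same total FTE, which is the direction of Jan's claim; the point of the sketch is only that pairwise coordination cost grows much faster than group size.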
Overall, I didn't mean to suggest anything is easy; on the other hand, in my opinion it is generally easier to get good at managing volunteers than, for example, to get hired for one of the most competitive jobs at EA orgs.
comment by mifeet ·
2019-03-17T21:23:15.046Z · EA(p) · GW(p)
I appreciate the highlighting of the concepts of prioritization and hierarchization; designing hierarchical networked structures (HNSs) seems like a high-leverage activity. I'm not an expert, but it seems to me to be an application of a more general mindset well known in business: Systems Thinking. It could be a source of further inspiration.
However, I am not clear on how the proposal relates to the title "What to do with people?". In order to effectively utilize people, one needs suitable positions, so I would expect the proposal to lead to creating new positions or connecting people with existing ones. But all the examples are along the lines of "go find a position for yourself where you can apply HNSs" without necessarily creating any new positions (it can lead to some connecting, but that doesn't seem to be the main point).
It seems to me that the solution to the original problem is going to require people leading other people and showing them the way. That’s what Teach for America did – quoting Can the EA community copy Teach for America? [EA · GW]:
"It's a very clear instruction, and a way that there's an obvious role for everyone."
As Moses [EA(p) · GW(p)] correctly pointed out, only a small fraction of people are willing to manage themselves or other people.
Therefore, I see the topic of leadership as crucial for solving the people utilization problem. It includes the aforementioned activities like delegation or growing people professionally. Do you see that as an essential ingredient too? Do you think that kind of mindset is represented enough in the EA community?
Encouraging people who are not yet fully engaged in the EA community to build HNSs also seems potentially risky, because it seems like an advanced topic that requires a good understanding of the system. On the other hand, if someone assumes a leadership role and thinks about delegating, thinking in terms of HNSs is going to be very important, because there are many possible decompositions with different impact, and the choice needs to be deliberate and well thought out.
comment by EgilElenius ·
2019-03-10T10:14:55.197Z · EA(p) · GW(p)
Quickly written idea, could need some time to develop it:
Something I've thought about myself, which isn't quite a Task Y but is still similar, is a widely accepted framework for what aspiring EAs can do, with "levels" of depth depending on how much they are willing to commit.
If the two dominant ideologies of Western society to which EA has to relate are consumerism and environmentalism/social responsibility, then to me the (primary) means would be to spend money, and the goal would be to have an environmentally/socially non-negative impact on the world. I get the impression that the moral message we receive is that we should have as little impact as possible in our consumption, or buy products which are less harmful.
I would like to explore the idea of indulgence: to create a (relatively easy?) framework telling people that if they live in a way using x units of resources and y units of suffering, they should give such-and-such an amount of money to the corresponding organisations.
Something to stress in the case of natural resources would be that this is not the same as them not having been spent; rather, at least insofar as they will be spent, it would be a positive act of enabling an at least comparable amount to come.
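A hypothetical sketch of what such an "indulgence" framework might compute. The footprint categories and per-unit prices here are invented placeholders for illustration, not real offset estimates.

```python
# Hypothetical indulgence calculator: map a lifestyle's footprint to a
# suggested offsetting donation. Categories and unit prices are made up.
OFFSET_PRICE = {               # donation ($) per unit of footprint
    "co2_tonnes": 25.0,        # placeholder, not a real carbon price
    "animal_welfare_units": 2.0,
}

def suggested_donation(footprint):
    """Sum each footprint category times its per-unit offset price."""
    return sum(OFFSET_PRICE[kind] * amount for kind, amount in footprint.items())

print(suggested_donation({"co2_tonnes": 10, "animal_welfare_units": 50}))  # 350.0
```

The appeal of such a scheme is exactly what Egil describes: it turns a vague moral message into a straightforward rule anyone can follow at whatever "level" of commitment they choose.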
I personally don't believe EA, because of its intellectually ambitious commitments, has good conditions for spreading to the public at large, while I believe a relatively straightforward framework for how to act/spend would have an easier time.
That said, somebody else might already have written about this?