Can the EA community copy Teach for America? (Looking for Task Y)
post by alexrjl
Summary (edited in)
Motivation & introduction to Task Y
Why do I think this idea would be valuable to think about further?
Potential "Task Y"s which already exist
Summary (edited in)
Below, I make the case for the importance of thinking about "Task Y", a way in which people who are interested in EA ideas can usefully help, without moving full time into an EA career. The most useful way in which I am now thinking about "Task Y" is as an answer to the question "What can I do to help?".
Motivation & introduction to Task Y
Episode 10 of the 80,000 Hours podcast was recently re-aired, and one part of the conversation really stayed with me as I listened, and prompted me to ask a question. I've bolded the part I'm referring to for emphasis, but included a much longer quote for context.
Robert Wiblin: A question that often comes up is whether Effective Altruism should aim to be a very broad movement that appeals to potentially hundreds of millions of people, and it helps them each to make a somewhat larger contribution, or whether it should be more, say, like an academic research group or an academic research community that has only perhaps thousands or tens of thousands of people involved, but then tries to get a lot of value out of each one of them, really get them to make intellectual advances that are very valuable for the world. What’s your thought on that, on the two options there?
Nick Beckstead: I guess, if I have to pick one, maybe I would pick the second option, but I might frame it a little bit differently, and I might say, “Let’s leave the first option open in the long run as well.” I guess, the way I see it right now is this community doesn’t have currently a scalable use of a lot of people. There’s some groups that have found efficient scalable uses of a lot of people, and they’re using them in different ways.
For example, if you look at something like Teach for America, they identified an area where, “Man, we could really use tons and tons of talented people. We’ll train them up in a specific problem, improving the US education system. Then, we’ll get tons of them to do that. Various of them will keep working on that. Some of them will understand the problems the US education system faces, and fix some of its policy aspects.” That’s very much a scalable use of people. It’s a very clear instruction, and a way that there’s an obvious role for everyone.
I think, the Effective Altruist Community doesn’t have a scalable use of a lot of its highest value … There’s not really a scalable way to accomplish a lot of these highest valued objectives that’s standardised like that. The closest thing we have to that right now is you can earn to give and you can donate to any of the causes that are most favored by the Effective Altruist Community. I would feel like the mass movement version of it would be more compelling if we’d have in mind a really efficient and valuable scalable use of people, which I think is something we’ve figured out less.
I guess what I would say is right now, I think we should figure out how to productively use all of the people who are interested in doing as much good as they can, and focus on filling a lot of higher value roles that we can think of that aren’t always so standardised or something. We don’t need 2000 people to be working on AI strategy, or should be working on technical AI safety exactly. I would focus more on figuring out how we can best use the people that we have right now.
Nick's conclusion, that we should focus on making the best use of the people who are currently part of the EA community, is a sensible one, but his statement, and in particular the bolded part, I believe hints at another, potentially exciting question:
What if there were a scalable way to effectively use the effort and time of people who agree with broad EA principles, but who for some reason aren't able to, for example, land a job from here?
To try to be more concrete: what if there were some "Task Y" (I'm borrowing from "Cause X") which had some or all of the following properties:
- Task Y is something that can be performed usefully by people who are not currently able to choose their career path entirely based on EA concerns*.
- Task Y is clearly effective, and doesn't become much less effective the more people who are doing it.
- The positive effects of Task Y are obvious to the person doing the task.
(* potentially this reason could be anything from not living in the right part of the world, to not having the right qualifications, to caring a lot but not enough to completely switch career, to simply being a student who wants to be doing something more direct right now than going to their weekly/monthly discussion group)
Why do I think this idea would be valuable to think about further?
I joined "Teach First" (the UK version of Teach for America) as soon as I graduated from university. In my case I had strongly considered teaching for a long time, but I met lots of people in the programme for whom that was not the case. One of the key things that appealed to lots of them was that Teach First did not require a huge commitment. Much of the advertising and discussion from mentors and peers was around the fact that the commitment was only for two years, during which time we would build lots of flexible career capital. At the time, we were even told that several large consultancy firms let Teach First graduates skip the first two stages of their recruitment processes. The strategy of telling people "you can do some good now, but don't worry if you're not ready to commit your whole life to it" meant that many people joined who, according to them, would never otherwise have considered teaching as a career.
At the moment, people who become casually interested in Effective Altruism can attend meetups (if they live in an area with them), write forum posts, and donate small amounts of money (in the relative sense, if not in the moral sense, given how cheap it is to save a life). Without making serious career-changing plans, that's about it, and not many people change the entire path of their life on the basis of one conversation. The single biggest way people become involved in the EA community is via personal interactions with others [EA · GW] and, from the basis of the conversations I've had, getting people sold on the general idea of doing good and doing it well is easy; the difficult part is having a good answer to the question "So what can I do to help?". Allowing individuals to meaningfully contribute in a way which has direct benefits which are clear to them, especially if there is a relatively small barrier to them getting started, could be a very powerful recruitment and retention tool, as people who feel they are meaningfully contributing are potentially not only less likely to drift away, but are more likely and able to encourage others to follow their example.
Potential "Task Y"s which already exist
Unfortunately, I don't have a clear idea for what "Task Y" should be. I am therefore posting this in the hope that others will feel that this question merits some thought. The existence of EAwork.club seems like a promising start, but at the moment barely anything actually gets posted, and certainly nothing has seemed scalable. Glen Weyl's comments about artists, writers and video game designers are another area where I see some potential. His comments towards the end of the interview about the value of having a diverse range of viewpoints are also worth considering, especially as they have fairly strong implications for the value of the community expanding to encompass a large number of "part timers". The whole thing is worth a listen, but an indicative quote is:
Robert Wiblin: So, there’s two different reasons that you could like having this kind of democratic spirit in wanting to distribute power. One would be on principle. You just think it’s bad to concentrate power or elitism or hierarchy is unappealing in principle. The other is that, as an empirical matter, information is very widely distributed. In fact, even people who don’t have a great education or whatever are in fact, bring a lot of knowledge to the table when they are able to contribute.
Glen Weyl: Yes.
Robert Wiblin: I thought that you were going to justify on the first principle. But it sounds like you just think-
Glen Weyl: No, I’m justifying it on the second ground.
Robert Wiblin: Just in practice, like ordinary voters actually have a lot of value to contribute to the system.
Glen Weyl: Yeah.
While I'm acutely aware that Task Y might not exist, it seems to have a large enough upside that it is worth at least some effort to try to find. It's possible that there is no single task which fits the description, but that a combination of several different things, with appropriate co-ordination, does. I'm really excited to be able to potentially update/strengthen parts of this post with contributions from the comments. The most important questions that I don't feel I have strong answers to are:
What properties should "Task Y" have? My initial attempt to answer this is above, but I think there's lots of room for improvement. edit: As @aarongertler pointed out in the comments, "Task Y" need not be a single thing. If it turned out to be one of several useful projects, however, a centralised "task list", from which people could easily choose something to work on, seems like an excellent idea.
Why isn't "Earning to give", or even just "donate effectively", sufficient to have the large positive effects "Task Y" could have? I'm very uncertain about this. Part of me thinks that "set up a small standing order to an effective charity, and periodically review whether you can afford to increase it" could potentially work if pitched correctly - it's what I decided to do when I first heard about Effective Altruism, and resulted in me gradually becoming more and more actively involved. I think, however, that for many people setting up a recurring donation is a "fire and forget" action; once it's happened there's not much to keep that person actively involved. There may also be something to the idea that "giving money to an effective charity" doesn't feel sufficiently different to "giving money to a charity" to either make people feel like a part of a community, or for the message to be heard above the many other voices telling people where to donate.
If there were a large expansion in the "Soft EA/Task Y" community, how could we most effectively leverage the collective actions of this large community, without taking time away from the other work they were doing? It strikes me that sufficiently good answers to this question could be worth exploring and implementing even without a large growth in the EA community. If, for example, there were some effective way of using distributed computing in EA-aligned research, it seems possible that the current EA community is already big enough (and rich enough on average to have access to an at least moderately powerful computer) to have potentially large effects. Were the EA movement to grow rapidly enough to include very large numbers of people who were at least in some sense "EA-aligned", there is potential for the identification of many more causes which could derive large benefits from the combination of large numbers of people taking small individual actions, even if none of these actions are significant enough to be potential "Task Y" candidates themselves. Voting reform and animal rights activism are two things which seem potentially able to benefit from things like letter/email campaigns in the style of Amnesty. Leveraging the combined low-effort actions of a large group of EAs has already been done very successfully here.
Is there a significant downside to lots of people being involved in a "soft" version of EA? While my personal experience is that being involved to a small extent in Effective Altruism led to increasingly strong involvement, the sample I have is extremely small (although not limited to just myself), and is clearly biased by the fact that most of the people I talk to about effective altruism are EAs. It is at least possible that people becoming involved in some small way might be less likely to consider other, more impactful options, in the same way that setting too-small default donations can sometimes cause people to give less money overall. In some ways this is the most important question of all. I believe that the ability to do something meaningful would be a stepping stone for people to get more involved with effective altruism. If it turned out to be a "stopping stone", preventing people who would otherwise have got fully involved from moving past the "I'll spend some chunk of my time doing task Y" stage, it would clearly be bad.
Edit, from @John_Maxwell_IV: supposing Task Y does exist, would you rather people working on Task Y think of themselves as "Soft EAs", or as people who are part of the "Task Y community"? Rather than copy John's (excellent) arguments on both sides in here, I think responses to this question should probably happen in replies to his comment.
Comments sorted by top scores.
comment by Vaidehi Agarwalla (vaidehi_agarwalla) ·
2022-01-04T20:37:38.522Z · EA(p) · GW(p)
- This post introduced the concept of Task Y, which I have found a very helpful framing for thinking about ways to engage community members at scale, which to me is a very important question for movement sustainability.
- I think it's unlikely the Task Y concept would have been popularized in EA community building without this post.
- I think the motivation to find Task Y asks the question in the right direction, even though I think the Task Y framing is not useful beyond a point. But it seems like an important first step.
- My current thinking on Task Y is that there is no true Task Y for the EA movement because the needs of the movement are too varied and diverse. Rather it seems that within more narrow domains (e.g. causes, careers, etc.) there may be more scalable things that could be done.
comment by Jon_Behar ·
2019-02-26T16:48:54.498Z · EA(p) · GW(p)
Why isn't "Earning to give", or even just "donate effectively" sufficient to have the large positive effects "Task Y" could have?
I see “donate effectively” as “Task Y”, and would love to see that get wider acceptance. To get around concerns that people might “set and forget” their giving at a low level, I think messaging around effectiveness should include the idea of improving one’s giving over time. For instance, people can take a “personal best” approach to giving and try to give better (give more, give more effectively, do more research, etc.) each year.
My sense is this would go a long way in reducing some of the elitism concerns @John_Maxwell_IV mentioned. And rather than reducing option value, I think it would give EA a lot more flexibility and robustness if it could draw on a large pool of people with diverse skills who were sympathetic to core EA ideas. For instance, it’d be a lot easier to close the “operations gap” if there were a lot of EA sympathetic people with strong ops experience, and the same will be true of the next talent gap that comes along (my guess is that a “management gap” is the next natural progression).
comment by Aaron Gertler (aarongertler) ·
2019-02-21T20:25:46.916Z · EA(p) · GW(p)
Strong-upvoted for raising an important question, providing a relevant example from within EA, quoting directly from sources you wanted to reference, and giving a good definition of the kind of thing you're looking for. I really loved your formatting and "content design"; my only suggestion there would be to add some headers.
I don't know of any single "Cause Y" that can easily absorb hundreds of people with a standardized training protocol, but I suspect that there are dozens of small projects that would be worth trying and wouldn't take much individual research to prepare for.
For example, EA Giving Tuesday [EA · GW] was an independent project run by a couple of people who noticed an opportunity and took it, in turn giving hundreds of other people a chance to boost their own impact. (The EA Project for Awesome example you listed here is similar.)
There are also various lists of project [EA · GW] and research [EA · GW] ideas online. No one person will be suitable for all of these, and perhaps no one training program could reliably prepare anyone for a particular project, but any given person may be able to find at least one project idea that "fits", even if their role is market sizing or design or copyediting rather than direct research.
There's also lots of volunteer work available. EA Global and Rethink Charity use quite a few volunteers, for example, and plenty of other EA projects would benefit from more eyes/hands/minds:
- If you speak another language, you can translate something important for a new audience.
- If you're a good editor, you can help someone with an unpolished paper on Effective Altruism Editing and Review.
- If you have design skills, you can ask an author if they'd like you to create an infographic based on a paper or Forum post. Owen Shen does something similar for EA San Diego, creating flyers and graphics for upcoming events.
- There's an EA Volunteering Facebook group with lots of other opportunities and ideas.
While we don't have a single Task Y, there are a lot of ways to get involved, many of which could help you qualify for an EA Grant [? · GW] or find a job down the line. Anyone who reads this and wants ideas beyond what I've listed here is welcome to reach out to me.
comment by MichaelA ·
2021-02-19T06:28:57.680Z · EA(p) · GW(p)
I just wanted to quickly say that today I realised that this is probably among the 5 Forum posts (excluding ones written by me) which I most often link to (in posts, comments, messages, etc.). So thanks for writing it!
comment by John_Maxwell (John_Maxwell_IV) ·
2019-02-22T06:29:23.411Z · EA(p) · GW(p)
Another question related to Task Y: supposing Task Y does exist, would you rather people working on Task Y think of themselves as "Soft EAs", or as people who are part of the "Task Y community"? For example, if eating a vegan diet is Task Y, would you like vegans to start thinking of themselves as EAs due to their veganism? If veganism didn't exist already, and it was an idea that originated from within the EA community, would it be best to spin it off or keep it internal?
I can think of arguments on both sides:
- Maybe there's already a large audience of people who have heard about EA and think it's really cool but don't know how to contribute. If these people already exist, we might as well figure out the best things for them to do. This isn't necessarily an argument for expansion of EA, however. (It's also not totally clear which direction this consideration points in.)
- If Task Y is a task where the argument for positive impact is abstruse & hard to follow, then maybe a "Task Y Movement" isn't ever going to get off the ground because it lacks popular appeal. Maybe the EA movement has more popular appeal, and the EA movement's popular appeal can be directed into Task Y.
- Some find the EA movement uninviting in its elitism. Even on this forum, reportedly the most elitist EA discussion venue, a highly upvoted post says [EA · GW]: "Many of my friends report that reading 80,000 Hours’ site usually makes them feel demoralized, alienated, and hopeless." There have been gripes about the difficulty of getting grant money for EA projects from grantmaking organizations after it became known that "EA is no longer funding-limited". (I might be guilty of this griping myself.) Do we want average Janes and Joes reading EA career advice that Google software engineers find [EA(p) · GW(p)] "very depressing"? How will they feel after learning that some EAs are considered 1000x as impactful as them?
- Expansion of the EA movement itself could be hard to reverse and destroy option value [EA · GW].
↑ comment by alexrjl ·
2019-02-24T17:31:24.645Z · EA(p) · GW(p)
Thanks for this. I've edited your question into the post. The third bullet point you wrote I actually think captures a lot of why I'm excited about a potential Task Y (or list, like the one aaron posted). If people have the option to do something which both genuinely is good, and seems good to them, and hear that this is actively encouraged by the EA community and enough to be considered a valuable part of it, I think this goes quite a long way towards stopping it seeming so elitist. Having multiple levels of commitment available to people, with good advice about the most effective thing to do given a particular level of commitment, seems to plausibly have lots of potential.
I have price discrimination in my head as a model here, though I realise the analogy is not a perfect one.
↑ comment by John_Maxwell (John_Maxwell_IV) ·
2019-02-26T10:26:29.312Z · EA(p) · GW(p)
That makes sense.
The podcast with Rob Wiblin and Nick Beckstead is about whether EA should "aim to be a very broad movement that appeals to potentially hundreds of millions of people". I initially read your post as addressing that question. Maybe a good answer to that question is "not before we've found useful things for the people already interested in EA to do". My point about branding is most relevant in the case where we've found a useful thing and we want to scale it up beyond the existing EA community.
By the way, this new post [EA · GW] is interesting, from a guy with a ridiculous resume who got rejected for 20 different EA positions.
comment by aogara (Aidan O'Gara) ·
2019-02-24T22:52:35.694Z · EA(p) · GW(p)
As a college student, I volunteer a few hours a week at Faunalytics, an EA-aligned animal welfare advocacy/research group. I think volunteering with Faunalytics is a good candidate for a small-scale Task Y.
I started off by editing their old article archives and updating them to fit their new article formatting. It was pretty boring, but it was useful for Faunalytics because it let them publish their archived research summaries, and it let me show Faunalytics that I was committed and could be trusted with responsibility.
Sometimes I'd rewrite old articles that seemed poorly done, so after a few months, my supervisor liked my writing and moved me up to doing my own research summaries. Each week, I'd be assigned a paper about something relevant to animal or environmental advocacy. I'd write an 800-word summary in the style of a blog post, and Faunalytics would publish it to their library. Here's some of what I wrote (the tagging system is buggy and doesn't list a lot of my articles).
I recently stopped doing research summaries for time reasons, but I'm now working with their research team on analyzing data from their annual Animal Tracker survey poll.
The parts I've really enjoyed about the work are:
- The papers could be interesting, and I learned a bit about animal topics
- I think most of what I wrote was informative and would be useful to e.g. animal activists who wanted to better understand a particular question. Examples: Does ecotourism help or harm local wildlife? What's the relationship between domestic violence and animal abuse? (But, see below: informative and useful to some people is not necessarily the same as effective in doing good)
- Writing research summaries is very engaging work, just the right level of difficulty, and my writing skills markedly improved
- It can lead to other opportunities: They now trust me enough to let me do their data analysis project, which is really fun, educational, and (given that I'm a student) will be probably the most legitimate thing I've published once it's done. I'd also be comfortable asking my supervisor for a recommendation letter for a job, and if I wanted to get more involved in EA animal rights, I think I'd be able to make connections through Faunalytics.
The parts that weren't so great are:
- On the whole, I'm not sure I've had much impact. If I were convinced that the majority of causes within animal welfare are effective, then I would probably think I've had a good positive impact. But I don't think e.g. the environmental impacts of ecotourism are very important from an altruistic standpoint, which really decreases my value.
- Being a low-commitment volunteer is simply a bad arrangement in a lot of ways. At least for me, doing something a few hours a week often leads to doing it zero hours a week, especially when it's a volunteer relationship where you've made very little firm commitment and there's no consequences for being late or failing to deliver. I think I combatted this pretty well by forcing myself to stick to deadlines, but I totally understand the GiveWell position of not accepting volunteers because they're not committed enough.
On the whole, for anyone looking to explore working in EA more broadly, I think volunteering at Faunalytics is a great idea: the possibility of direct impact, mostly engaging work, and a strong opportunity to prove yourself and make connections that can lead to future opportunities. Check it out here if you're interested, and feel free to message me with questions.
(Anybody have input on whether I should write a full post about my experience/advertising the opportunity?)
↑ comment by Kirsten (Khorton) ·
2019-02-25T07:57:45.201Z · EA(p) · GW(p)
If you have time, turning this into a post would probably be good. I think it's sometimes hard for people, especially college students, to imagine how they can start volunteering for EA orgs or what it looks like to "gain career capital." Your story is a really good example of doing something useful and gaining skills starting from entry level.
comment by yhoiseth ·
2019-03-14T07:50:23.801Z · EA(p) · GW(p)
An idea for Task Y: Mentoring people a bit younger than oneself.
Tyler Cowen writes in The high-return activity of raising others’ aspirations:
At critical moments in time, you can raise the aspirations of other people significantly, especially when they are relatively young, simply by suggesting they do something better or more ambitious than what they might have in mind. It costs you relatively little to do this, but the benefit to them, and to the broader world, may be enormous.
This is in fact one of the most valuable things you can do with your time and with your life.
I think many young people today lack good mentors. Their peers are their own age, and the last person you want advice from as a 14-year-old is another 14-year-old. And parents, teachers and other grown-ups may not have the time, inclination, knowledge and/or skills to be very effective mentors. In any case, the age gap is often a bit too large.
A program where EAs systematically mentored, nudged and helped people up to, say, 15 years younger than themselves, could (I think) scale and be effective.
comment by alexrjl ·
2020-07-19T10:49:40.897Z · EA(p) · GW(p)
The more time I spend forecasting, the more I think it has lots of the features I was looking for in a good "Task Y":
- It's fun and interesting.
- It's beneficial for the people doing it; it helps people be Less Wrong™ and plausibly has nontrivial signalling value in EA-adjacent circles for those who are good at it.
- While the impact probably isn't huge, there's a reasonable case that, especially from a long-term perspective, it's not negligible, especially for something that can be done in your free time.
- It's something that bright students and/or other people without much disposable income can easily participate in, which sets it apart from Earning to Give.
↑ comment by Ozzie Gooen (oagr) ·
2020-10-07T03:29:14.042Z · EA(p) · GW(p)
I've been thinking a fair bit about this.
I think that forecasting can act as really good intellectual training for people. It seems really difficult to BS, and there's a big learning curve of different skills to get good at (can get into automation).
I'm not sure how well it will scale in terms of paid forecasters for direct value (agreeing with "the impact probably isn't huge). I have a lot of uncertainty here though.
I think the best analogy is to hedge funds and banks. I could see things going one of two ways: either it turns out what we really want is a small set of super-intelligent people working in close coordination (like a high-salary fund), or we need a whole lot of "pretty intelligent people" to do scaled labor (like a large bank or trading institution).
That said, if forecasting could help us determine what else would be useful to be doing, then we're kind of set.
comment by Nathan Young (nathan) ·
2020-10-06T00:07:22.842Z · EA(p) · GW(p)
Task Y seems analogous to GiveDirectly - less effective than the top option but with the ability to take on many more resources.
Some ideas for task Y. I'm mainly spitballing, so don't take them too seriously:
Trying to convince friends and families of good policy ideas - EAs could be encouraged to learn and spread good policy ideas. I agree that politics is polarising, but policy need not be zero-sum.
Forecasting effective causes. I think there should be a board for forecasting the impact of different causes. People could vote, and then whenever GiveWell gets round to checking, you could test accuracy. Low cost, and the accuracy might be better than random.
Testing decision-making interventions at work. If there is an effective decision making strategy then EAs could test it at their workplaces. This would lead to lots of case studies and adoption if effective.
comment by TomBill ·
2019-02-23T22:20:39.551Z · EA(p) · GW(p)
A well written post with a good level of depth into an important topic. Thank you! If I were to give a suggestion, I would say that I don't find the title a very good flag of the content.
This isn't a Task Y (at least it doesn't obviously fulfil your outlined components), but a small-scale interaction with EA not currently mentioned would be the 80,000 Hours careers guide emailing scheme. This sends you a part of their careers guide each week for you to read and interact with. This supposedly takes 180 minutes to complete over 12 weeks. This seems like it may have up-scaling capabilities, too.
Another potential quick-fix similar to this could also be a more robust internship system within EA. 80,000 Hour's job board has internships, and we could encourage people to utilise it to help fill some of the place of a Task Y (lower-commitment, can have a clear positive effect, includes career capital).
However, I would stress that neither of these ideas seem like they would completely fulfil the role of a good Task Y.
comment by Prabhat Soni ·
2021-04-14T03:11:01.932Z · EA(p) · GW(p)
Task Y candidate: Fellowship facilitator for EA Virtual Programs
EA Virtual Programs [? · GW] runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).
EA intro fellowships (and similar programs) have been successful at minting engaged [EA · GW] EAs. There are large diminishing returns even in selecting applicants with a not-so-strong application since the application process does not predict future engagement well (see this [EA · GW] and this [EA · GW]). Thus, if a fellowship/reading group has to reject people, that's significant value lost. Rejected applicants generally re-apply at low rates (despite being encouraged to!).
- Is EA Virtual Programs short on facilitators? I don't know. The answer to this question would presumably change post-COVID (IMO the answer could shift in either direction), and so in the interest of future-proofing this answer, I will not bother to find the current demand for facilitators.
- Will EA Virtual Programs exist post-COVID? An organizer at EA Virtual Programs informally said that nothing concrete has been decided yet, but the project was probably leaning towards continuing in some capacity. It is not clear to me whether there will even be significantly fewer applicants post-COVID (since most(?) university groups are running their fellowships independently right now).
I know of at least a few non-student working professionals who are facilitators for EA Virtual Programs, which I will take as evidence that this can be a Task Y.
↑ comment by Yi-Yang (yiyang) ·
2021-09-01T12:41:46.238Z
I think this sounds right! This makes me feel like we should also pay particular attention to making the facilitator experience great too.
Organising local intro EA programs can also be a great Task Y candidate.
comment by toonalfrink ·
2019-03-01T17:46:30.659Z
- Many fairly complicated tasks can be broken down into tasks that are fairly simple to carry out. All it takes is some ingenuity and some investment to spend the time to systematize the thing well enough that people can do the thing (Amazon Mechanical Turk comes to mind).
- I've worked somewhat extensively with volunteers, and I find that only a small minority is actually willing to put in the work completely pro bono. Most volunteer work confers at least some benefit for the volunteer. If it doesn't, you usually find that turnover is so large that the overhead isn't worth it. In the case of regular volunteering, these benefits would be upgrading the community you're a part of, or perhaps learning skills or upgrading your CV, or maybe just fun. In EA I find that the motivation is often social interaction with like-minded peers.
- The first two points imply that, at least in my limited experience, the bottleneck is incentivizing people to show up consistently.
- It seems that this requirement is sometimes automatically met if the volunteering happens offline. There's something about physicality and interacting with people that can be rewarding enough for the volunteer to keep showing up. That kind of magic is much less potent when you're online. If we could do something about that, it might be a breakthrough.
- Should there be no one Task Y, but a bag of small tasks \(Y_1, Y_2, \dots\), there might still be an "incentive Z" that all of them could employ to help motivate people. The most obvious choice for Z is money, but there might be cultural incentives that are much more scalable.
- An example to illustrate the last two points: if there was some kind of cozy online "EA living room" that was fun to hang around in but also repeatedly prompted people to "score points" by doing things, that might be both scalable and keep people showing up. Maybe this wouldn't scale into the millions, but it would at least keep "soft EAs" meaningfully involved.
comment by Nathan Young (nathan) ·
2020-10-06T00:02:08.690Z
Is Teach For America impactful?
Or does it consume two years of people's time at the start of their careers? Caplan's "The Case Against Education" puts forward a strong case that a large chunk of education's value (certainly at university) is signalling. I think some of this applies to education for 5-18 year olds too. There are certainly flaws in the current education system that it would be better without, but fixing them might just be tinkering around the edges. In that sense, is it a better use of people's time than the counterfactual?
I know it's just an analogy, but the analogy carries across. We wouldn't want to encourage people to sink time into something which turns out to be ineffective. That would likely discredit EA among people who had seen it happen.
comment by EdoArad (edoarad) ·
2019-11-05T06:25:31.866Z
Has there been any further discussion of Task Y? More candidates?
↑ comment by alexrjl ·
2019-11-05T12:37:36.032Z
I've updated towards earning to give having more of the characteristics of Task Y than I originally thought, based partly on the discussion in the comments. There are some good volunteering opportunities (for those in London, for example, doing charity analysis for SoGive), but I haven't found anything as scalable yet.
One idea I want to explore more is that of effective activism. The difficulty of assessing outcomes is obvious, but XR, for all its flaws, has shown the potential to get huge numbers of people involved.
↑ comment by EdoArad (edoarad) ·
2019-11-05T16:13:18.133Z
Sorry, what do you mean by XR?
Regarding earning to give, from reading this it seems that a lot of structure is still needed to maintain motivation and interest. What do you make of that?
↑ comment by alexrjl ·
2019-11-05T18:48:56.871Z
I agree that lots of structure is needed, and I'm very uncertain about the best structure. I do really like John Behar's post above about the "personal best" approach, though.
↑ comment by JP Addison (jpaddison) ·
2019-11-09T01:03:09.774Z
Thanks edoarad for making me think about this. I agree earning to give is actually a pretty good candidate. That's influencing me to think about how to make earning to give more like Teach for America.
I liked this post when it came out, but it's a rare post that I continue to be influenced by over time. Changed my vote from an upvote to a strong upvote.
comment by Feike ·
2019-03-08T17:35:18.280Z
I'm not sure this is a good idea, but let's stay creative: (internet) consultation on new policy. In the Netherlands you can do this for a lot of new laws of the central government. For many it may not be worth the trouble, but in some laws we may be able to find large improvements. My general impression is that in government, making the right laws is very difficult, and if you have strong arguments for why a certain thing must be changed, people will go with that. Companies often seem to lobby for laws in their favour; we could do the same to make laws and policies more EA-aligned.
↑ comment by Feike ·
2019-03-09T08:52:39.960Z
I realized this is too difficult and probably not impactful enough to be scalable.
What I do want to say is that EA has already impacted my life by making me a more informed citizen. I now understand more of the big problems in the world, have become more rational, and can thus more easily judge what to make of the information in the news, for example. It may not look very impactful to spend a lot of time reading about EA-related topics, but EA is difficult, and if we also want to get the right message out in one-on-one conversations, we need people in the community to stay informed of all the great ideas and ways of thinking out there.
comment by alexrjl ·
2019-02-24T17:01:17.061Z
Thank you all for the positive comments and extremely useful feedback! I've edited some subheadings and a summary into the original post, though I've (optimistically) left the title so that people who've read the post and want to come back to participate in the discussion don't get lost. I've also included John's question in the list of important questions to ask.