Posts

The Unilateralist's Curse, An Explanation 2022-06-09T18:25:14.669Z
Why didn't we turn it off? A creative fictional story of AI takeover 2022-05-03T14:13:39.848Z
A visualization of some orgs in the AI Safety Pipeline 2022-04-10T16:52:44.169Z
Analogy of AI Alignment as Raising a Child? 2022-02-19T21:40:23.699Z
EA Claremont Winter 21/22 Intro Fellowship Retrospective 2022-01-21T06:15:07.399Z
We should be paying Intro Fellows 2021-12-25T10:23:54.870Z
Pilot study results: Cost-effectiveness information did not increase interest in EA 2021-12-19T08:22:55.847Z
Aaron_Scher's Shortform 2021-10-27T07:32:03.699Z

Comments

Comment by Aaron_Scher on Estimating the Current and Future Number of AI Safety Researchers · 2022-09-30T08:32:52.740Z · EA · GW

Thanks for making this. I expect that after you make edits based on comments and such, this will be the most up-to-date and accurate public look at this question (the current size of the field). I look forward to linking people to it!

Comment by Aaron_Scher on "Doing Good Best" isn't the EA ideal · 2022-09-21T01:45:05.562Z · EA · GW

I disagree with a couple specific points as well as the overall thrust of this post. Thank you for writing it!

A maximizing viewpoint can say that we need to be cautious lest we do something wonderful but not maximally so. But in practice, embracing a pragmatic viewpoint, saving money while searching for the maximum seems bad.

I think I strongly disagree with this because opportunities for impact appear to be heavy-tailed. Funding 2 interventions in the 90th percentile is likely less good than funding 1 intervention in the 99th percentile. Given this state of the world, spending much of our resources trying to identify the maximum is worthwhile. I think the default of the world is that I donate to a charity in the 50th percentile. If I adopt a weak mandate to do lots of good (a non-maximizing frame, or an early EA movement), I will probably identify and donate to a charity in the 90th percentile. It is only when I take a maximizing stance and a strong mandate to do lots of good (or when many thousands of hours have been spent on global priorities research) that I will find and donate to the very best charities. The ratios matter, of course: if I were faced with donating $1,000 to 90th-percentile charities or $1 to a 99th-percentile charity, I would probably donate to the 90th-percentile charities, but if the numbers were $2 and $1, I should donate to the 99th-percentile charity. I am claiming:
  1. the distribution of altruistic opportunities is roughly heavy-tailed;
  2. the best (and maybe only) way to end up in the heavy tail is to take a maximizing approach;
  3. the “wonderful” thing that we would do without maximizing is, as measured ex post (looking at the results in retrospect), significantly worse than the best thing;
  4. (the claim I think is weakest) we can differentiate between the “wonderful” and the “maximal available” opportunities ex ante (beforehand) given research and reflection;
  5. the thing I care about is impact, and the EA movement is good insofar as it creates positive impact in the world (including for members of the EA community, but they are a small piece of the universe).
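
To make the heavy-tail intuition concrete, here is a minimal sketch; the lognormal shape and the sigma values are assumptions I am picking purely for illustration, not estimates of the actual distribution of opportunities:

```python
# Toy sketch of the heavy-tail claim above. The lognormal shape and the sigma
# values are assumptions chosen for illustration, not estimates of the real
# distribution of charity cost-effectiveness.
import math
from statistics import NormalDist

def percentile_value(p, sigma):
    """Cost-effectiveness at percentile p of a lognormal(mu=0, sigma) distribution."""
    return math.exp(sigma * NormalDist().inv_cdf(p))

for sigma in (0.5, 1.0, 2.0):
    one_p99 = percentile_value(0.99, sigma)
    two_p90 = 2 * percentile_value(0.90, sigma)
    print(f"sigma={sigma}: one 99th-percentile grant / two 90th-percentile grants = "
          f"{one_p99 / two_p90:.2f}")

# Output: the ratio is ~0.84 at sigma=0.5, ~1.42 at sigma=1.0, ~4.04 at sigma=2.0.
# Once the tail is heavy enough (sigma >= ~1), a single 99th-percentile grant
# beats two 90th-percentile grants.
```

The exact crossover point depends on how heavy the tail actually is, which is exactly the kind of thing that seems worth spending real resources to figure out.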

There are presumably people who would have pursued PhDs in computer science, and would have been EA-aligned tenure track professors now, but who instead decided to earn-to-give back in 2014. Whoops!

To me this seems like it doesn’t support the rest of your argument. I agree that the correct allocation of EA labor is not everyone doing AI Safety research, and that we need outreach and career-related resources to support people with various skills, but to me this is more a claim that we are not maximizing well enough — we are not properly seeking the optimal labor allocation because we’re a relatively uncoordinated set of individuals. If we were better at maximizing at a high level, the problem you are describing would not happen, and I think it’s extremely likely that we can solve this problem.

With regard to the thrust of your post: I cannot honestly tell a story about how the non-maximizing strategy wins. That is, when I think about all the problems in the world: pandemics, climate change, existential threats from advanced AI, malaria, mass suffering of animals, unjust political imprisonment, etc., I can’t imagine that we solve these problems if we approach them like exercise or saving for retirement. If I actually cared about exercise or saving for retirement, I would treat them very differently than I currently do (and I have had periods in my life where I cared more about exercise and thus spent 12 hours a week in the gym). I actually care about the suffering and happiness in the world, and I actually care that everybody I know and love doesn’t die from unaligned AI or a pandemic or a nuclear war. I actually care, so I should try really hard to make sure we win. I should maximize my chances of winning, and practically this means maximizing for some of the proxy goals I have along the way. And yes, it's really easy to mess up this maximizing thing and to neglect something important (like our own mental health), but that is an issue with the implementation, not with the method.

Perhaps my disagreement here is not a disagreement about what EA descriptively is and more a claim about what I think a good EA movement should be. I want a community that's not a binary in/out, that's inclusive and can bring joy and purpose to many people's lives, but what I want more than those things is for the problems in the world to be solved — for kids to never go hungry or die from horrible diseases, for the existence of humanity a hundred years from now to not be an open research question, for billions+ of sentient beings around the world to not live lives of intense suffering. To the extent that many in the EA community share this common goal, perhaps we differ in how to get there, but the strategy of maximizing seems to me like it will do a lot better than treating EA like I do exercise or saving for retirement.

Comment by Aaron_Scher on The discount rate is not zero · 2022-09-06T00:43:51.402Z · EA · GW

You write:

Another possible reason to argue for a zero-discount rate is that the intrinsic value of humanity increases at a rate greater than the long-run catastrophe rate[19]. This is wrong for (at least) 2 reasons. 

Your footnote is to The Precipice; to quote from Appendix E:

by many measures the value of humanity has increased substantially over the centuries. This progress has been very uneven over short periods, but remarkably robust over the long run. We live long lives filled with cultural and material riches that would have seemed like wild fantasy to our ancestors thousands of years ago. And the scale of our civilization may also matter: the fact that there are thousands of times as many people enjoying these richer lives seems to magnify this value. If the intrinsic value of each century increases at a rate higher than r, this can substantially increase the value of protecting humanity (even if this rate of increase is not sustained forever). [Footnote here]

Regarding your first reason: You first cite that this would imply a negative discount rate that rules in favor of future people; I'm confused about why this would be bad. You mention "radical conclusions" – sure, there are many radical conclusions in the world; for instance, I believe that factory farming is a moral atrocity being committed by almost all of current society – that's a radical view. Being a radical view doesn't make it wrong (although I think we should be healthily skeptical of views that seem weird). Another radical conclusion I hold is that all people around the world are morally valuable and enslaving them would be terrible; this view would have appeared radical to most people at various points in history, and is not radical in most of the world now.

Regarding your second reason: 

while it is true that lives lived today are much better than lives lived in the past (longer, healthier, richer), and the same may apply to the future, this logic leads to some deeply immoral places. The life of a person a who will live a long, healthy, and rich life, is worth no more than the life of the poorest, sickest, person alive. While some lives may be lived better, all lives are worth the same. Longtermism should accept this applies across time too.

I would pose to you the question: Would you rather give birth to somebody who would be tortured their entire life, or to somebody who would be quite happy throughout their life (though they experience ups and downs)? Perhaps you are indifferent between these, but I doubt it (both are one life being born, however, so taking the "all lives are worth the same" line literally implies they are equally good). I think a future where everybody is being tortured is quite bad and probably worse than extinction, whereas a flourishing future where people are very happy and have their needs met would be awesome!

I agree that there are some pretty unintuitive conclusions of this kind of thinking, but there are also unintuitive conclusions if you reject it! I think the value of an average life today, to the person living it, is probably higher than the value of an average life in 1700 CE, to the person living it. In the above Precipice passage, Ord discusses some reasons why this might be so. 

Comment by Aaron_Scher on The discount rate is not zero · 2022-09-06T00:10:27.002Z · EA · GW

Welcome to the forum! I am glad that you posted this! And also I disagree with much of it. Carl Shulman already responded explaining why he thinks the extinction rate approaches zero fairly soon, reasoning I agree with.

Under a stable future population, where people produce (on average) only enough offspring to replace themselves, a person’s expected number of descendants is equal to the expected length of human existence divided by the average lifespan. I estimate this figure is 93[22].

To be consistent, when comparing lives saved in present day interventions with (expected) lives saved from reduced existential risk, present day lives saved should be multiplied by this constant, to account for the longtermist implications of saving each person. This suggests priorities such as global health and development may be undervalued at present.

I think the assumption about a stable future population is inconsistent with your calculation of the value of the average life. I think of two different possible worlds:

World 1: People have exactly enough children to replace themselves, regardless of the size of the population. The population is 7 billion in the first generation; a billion extra people (not accounted for in the ~2.1-kids-per-couple replacement rate) die before being able to reproduce. The population then stays at 6 billion for the rest of the time until humanity perishes. Each person who died cost humanity 93 future people, making their death much worse than without this consideration.

World 2: People have more children than needed to replace themselves, up to the point where the population hits the carrying capacity (say it's 7 billion). The population is 7 billion in the first generation; a billion extra people (not accounted for in the ~2.1-kids-per-couple replacement rate) die before being able to reproduce. The population then drops to 6 billion for one generation, but the people in that generation realize they can have more than 2.1 kids. Maybe they have 2.2 kids, and each successive generation does the same until the population is back to 7 billion (the amount of time this takes depends on the numbers, but shouldn't be more than a couple of generations; see the quick check below).
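
A minimal sanity check of that claim (the 2.2 vs. ~2.1 fertility figures, the 30-year generation length, and the 7 billion carrying capacity are all assumptions for the sake of the example):

```python
# Rough check of World 2, with all numbers (2.2 vs. ~2.1 children per couple,
# 30-year generations, a 7-billion carrying capacity) as illustrative assumptions.
population = 6e9
carrying_capacity = 7e9
growth_per_generation = 2.2 / 2.1  # actual fertility relative to replacement

generations = 0
while population < carrying_capacity:
    population *= growth_per_generation
    generations += 1

print(f"Back at carrying capacity after {generations} generations "
      f"(~{generations * 30} years at 30 years per generation).")
# Prints: 4 generations (~120 years) -- consistent with "a couple of generations",
# versus the 93 expected descendants attributed to each death in World 1.
```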

World 2 seems much more realistic to me. While in World 1, each death cost the universe 1 life and 93 potential lives, in World 2 each death cost the universe something like 1 life and 0-2 potential lives.

It seems like using an average number of descendants isn't the important factor if we live in a world like World 2, because as long as the population isn't too small, it will be able to jumpstart future population growth again. From this follows the belief that (100% of people dying vs. 99% of people dying) is a greater difference than (0% of people dying vs. 99% of people dying), assuming the surviving 1% would eventually be able to grow the population back.

Comment by Aaron_Scher on My current impressions on career choice for longtermists · 2022-08-30T19:08:59.450Z · EA · GW

I read this post around the beginning of March this year (~6 months ago). I think reading this post was probably net-negative for my life plans. Here are some thoughts about why I think reading this post was bad for me, or at least not very good. I have not re-read the post since then, so maybe some of my ideas are dumb for obvious reasons. 

I think the broad emphasis on general skill and capacity building often comes at the expense of directly pursuing your goals. In many ways, the post says “skill up in an aptitude because in the future this might be instrumentally useful for making the future go well,” and I think this is worse than “identify what skills might help the future go well, then skill up in those skills, then go cause impact.” The aptitudes framework is what I might say if I knew a bunch of un-exceptional people were listening to me and taking my words as gospel, but it is not what I would advise an exceptional person who wants to change the world for the better (I would try to instill a sense of aiming specifically at the thing they want and pursuing it more directly). This distinction is important. To flesh this out: if only geniuses were reading my post, I might advise that they try high-variance, high-EV things which have a large chance of ending up in the tails (e.g., startups, at which most people will fail). But I would not recommend that a broader crowd try startups, because more of them would fail, and then the community I was trying to create to help the future go well would largely be made up of people who took long-shot bets and failed, making them not so useful, and making my community less useful when it's crunch time (although I am currently unsure what we need at crunch time; having a bunch of people who pursued aptitude growth is probably good). Therefore, I think I understand and somewhat endorse safer, aptitudes-based advice at the community scale, but I don't want it to get in the way of people who are willing to take greater risks and do whacky career stuff actually doing so.

My personal experience is that reading this post gave me the idea that I could sorta continue life as normal, but with a slight focus on developing particular aptitudes like building organizational success, research on core longtermist topics, and maybe communication. I currently think that plan was bad and, if adopted more broadly, has a low chance of working (i.e., of AI alignment getting solved). However, I also suspect that my current path is suboptimal – I am not investing in my career capital or human capital for the long run as much as I should be.

So I guess my overall take is something like: people should consider the aptitudes framework, but they should also think about what needs to happen in the world in order to get the thing they care about. Taking a safer, aptitudes-based approach is likely the right path for many people, but not for everybody. If you take seriously the career advice that you read, it seems pretty unlikely that the right response is to take roughly the same actions you were planning on taking before reading it – you should be suspicious of this surprising convergence.

Comment by Aaron_Scher on Global health is important for the epistemic foundations of EA, even for longtermists · 2022-06-09T03:36:04.154Z · EA · GW

This is great and I’m glad you wrote it. For what it’s worth, the evidence from global health does not appear to me strong enough to justify high credence (>90%) in the claim “some ways of doing good are much better than others” (maybe operationalized as "the top 1% of charities are >50x more cost-effective than the median", though I made up these numbers).

The DCP2 (2006) data (cited by Ord, 2013) gives the distribution of the cost-effectiveness of global health interventions. This is not the distribution of the cost-effectiveness of possible donations you can make. The data tells us that treatment of Kaposi sarcoma is much less cost-effective than antiretroviral therapy in terms of averting HIV-related DALYs, but it tells us nothing about the distribution of charities, and therefore does not actually answer the relevant question: of the options available to me, how much better are the best than the others?

At one extreme, there is one charity focused on each of the health interventions in the DCP2 (and they are roughly equally good at turning money into the interventions) – and therefore one donation option corresponding to each intervention – in which case the very best ways of doing good available to me really are much better than average.

The other extreme is that the most cost-effective interventions were funded first (or people only set up charities to do the most cost-effective interventions), and therefore the best opportunities still available are very close to average cost-effectiveness. I expect we live somewhere between these two extremes, and that there are more charities set up for antiretroviral therapy than for Kaposi sarcoma.
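
A toy sketch of why that distinction matters; the lognormal shape, the sigma of 1.5, and the "top 5% already funded" rule are all assumptions chosen purely for illustration:

```python
# Illustrative only: draw a heavy-tailed "intervention cost-effectiveness"
# distribution, then remove the top slice (interventions that are already funded
# or saturated) and compare the best *remaining* option to the median.
import random
import statistics

random.seed(0)
interventions = sorted((random.lognormvariate(0, 1.5) for _ in range(10_000)),
                       reverse=True)
median = statistics.median(interventions)

best_overall = interventions[0]
# Assumption: the top 5% of interventions are already fully funded by others.
best_remaining = interventions[int(0.05 * len(interventions))]

print(f"best intervention / median:          {best_overall / median:.0f}x")
print(f"best remaining opportunity / median: {best_remaining / median:.0f}x")
# The first ratio (what DCP2-style intervention data describes) comes out far
# larger than the second (what a donor choosing among still-available options
# actually faces).
```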

The evidence that would change my mind is if somebody publicly analyzed the cost-effectiveness of all (or many) charities focused on global health interventions. I have been meaning to look into this, but haven’t yet gotten around to it. It’s a great opportunity for the Red Teaming Contest, and others should try to do this before me. My sense is that GiveWell has done some of this but only publishes the analysis for their recommended charities; and probably they already look at charities they expect to be better than average – so they wouldn’t have a representative data set.

Comment by Aaron_Scher on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T17:09:48.489Z · EA · GW

The edit is key here. I would consider running an AI-safety-arguments competition, in order to do better outreach to graduate-level-and-above researchers, to be a form of movement building, and one for which crunch time could be the last 5 years before AGI (although earlier is probably better for norm changes).

One value-add of compiling good arguments is that if there is a period of panic following advanced capabilities (some form of fire alarm), it will be really helpful to have existing, high-quality arguments and resources on hand to help direct this panic into positive actions.

This all said, I don't think Chris's advice applies here: 

I would be especially excited to see people who are engaged in general EA movement building to pass that onto a successor (if someone competent is available) and transition towards AI Safety specific movement building.

I think this advice likely doesn't apply because the models/strategies for this sort of AI Safety field building are very different from those of general EA community building (e.g., university groups): the background knowledge is quite different, the target population is different, the end goal is different, etc. If you are a community builder reading this and you want to transition to AI Safety community building but don't know much about it, probably learning about AI Safety for >20 hours is the best thing you can do. The AGISF curricula are pretty great.

Comment by Aaron_Scher on We should expect to worry more about speculative risks · 2022-05-30T05:49:15.267Z · EA · GW

I’m a bit confused by this post. I’m going to summarize the main idea back, and I would appreciate it if you could correct me where I’m misinterpreting.

Human psychology is flawed in such a way that we consistently estimate the probability of existential risk from each cause to be ~10% by default. In reality, the probability of existential risk from particular causes is generally less than 10% [this feels like an implicit assumption], so finding more information about the risks causes us to decrease our worry about them. We can get more information about easier-to-analyze risks, so we update our probabilities downward after getting this correcting information, but for hard-to-analyze risks we do not get such correcting information, so we remain quite worried. AI risk is currently hard to analyze, so we remain in this state of prior belief (although the 10% varies by individual; it could be 50% or 2%).

I’m also confused about this part specifically: 

initially assign something on the order of a 10% credence to the hypothesis that it will by default lead to existentially bad outcomes. In each case, if we can gain much greater clarity about the risk, then we should think there’s about a 90% chance this clarity will make us less worried about it

– why is there a 90% chance that more information leads to less worry? Is this assuming that 90% of risks have P(Doom) < 10%, and the other 10% of risks have P(Doom) ≥ 10%?

Comment by Aaron_Scher on On funding, trust relationships, and scaling our community [PalmCone memo] · 2022-05-30T05:00:56.915Z · EA · GW

A solution that doesn’t actually work but might be slightly useful: slow the lemons by making EA-related funding less appealing than the alternatives.

One specific way to do this is to pay less than industry pays for similar positions: an altruistic pay cut. Lightcone, the org Habryka runs, does this: “Our current salary policy is to pay rates competitive with industry salary minus 30%.” At the level of full-time employment, this seems like one way to dissuade people who are interested in money, at least assuming they are qualified and hard-working enough to get a job in industry with similar ease.

Additionally, it might help to frame university group organizing grants within the big scheme of the world. For instance, as I was talking to somebody about group organizing grants, I reminded them that the amount of money they would be making (which I probably estimated at a couple thousand dollars per month) is peanuts compared to what they’ll be earning in a year or two when they graduate from a top university with a median salary of ~$80k. It also seems relevant to emphasize that you actually have to put time and effort into organizing a group for a grant like this; it’s not free money – it’s money in exchange for time/labor. Technically it’s possible to do nothing and pretty much be a scam artist, but I didn’t want to say that.

This solution doesn’t work for a few reasons. One is that it only addresses one issue – the people who are actually in it for themselves. I expect we will also have problems with well-intentioned people who just aren’t very good at stuff. Unfortunately, this seems really hard to evaluate, and many of us deal with imposter syndrome, so self-evaluation/selection seems bad.

This solution also doesn’t work because it’s hard to assess somebody’s fit for a grant, meaning it might remain easier to get EA-related money than other money. I claim that it is hard to evaluate somebody’s fit for a grant in large part because feedback loops are terrible. Say you give somebody some money to do some project. Many grants have some product or deliverable that you can judge for its output quality, like a research paper. Some EA-related grants have this, but many don’t (e.g., paying somebody to skill up might have deliverables like a test score but might not). Without some form of deliverable or something, how do you know if your grant was any good? Idk maybe somebody who does grantmaking has an idea on this. More importantly, a lot of the bets people in this community are taking are low chance of success, high EV. If you expect projects to fail a lot, then failure on past projects is not necessarily a good indicator of somebody’s fit for new grants (in fact it's likely good to keep funding high EV, low P(success) projects, depending on your risk tolerance). So this makes it difficult to actually make EA-related money harder to get than other money.

Comment by Aaron_Scher on We Ran an AI Timelines Retreat · 2022-05-26T00:20:29.026Z · EA · GW

Good question. Short answer: despite being an April Fools post, that post seems to encapsulate much of what Yudkowsky actually believes – so the social context is that the post is joking in its tone and content, but not so much in the attitude of the author; sorry I can't link to anything to further substantiate this. I believe Yudkowsky's general policy is to not put numbers on his estimates.

Better answer: Here is a somewhat up-to-date database about predictions about existential risk chances from some folks in the community. You'll notice these are far below near-certainty. 

One of the studies listed in the database is this one in which there are a few researchers who put the chance of doom pretty high.

Comment by Aaron_Scher on What would you like to see Giving What We Can write about? · 2022-05-08T23:27:33.376Z · EA · GW

#17 in the spreadsheet is "How much do charities differ in impact?"

I would love to see an actual distribution of charity cost-effectiveness. As far as I know, that doesn't exist. Most folks rely on Ord (2013), which gives the distribution of health intervention cost-effectiveness, but that says nothing about where charities actually work.

Comment by Aaron_Scher on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-08T20:29:49.044Z · EA · GW

Thanks for linking Claire's post, a great read!

Comment by Aaron_Scher on The AI Messiah · 2022-05-06T01:38:39.910Z · EA · GW

I really enjoyed this comment, thanks for writing it Thomas!

Comment by Aaron_Scher on Is it still hard to get a job in EA? Insights from CEA’s recruitment data · 2022-04-30T00:08:46.215Z · EA · GW

Thanks for writing this up and making it public. A couple of comments:

On average 45 applications were submitted to each position.

CEA Core roles received an average of 54 applications each; EOIs received an average of 53 applications each.

Is the first number a typo? Shouldn't it be ~54?

 

Ashby hires 4% of applicants, compared to 2% at CEA

...

Overall, CEA might be slightly more selective than Ashby’s customers, but it does not seem like the difference is large

Whether this is "large" is obviously subjective. When I read this, I see 'CEA is twice as selective as industry over the last couple of years.' Therefore my conclusion is something like: yes, it is still hard to get a job in EA, as evidenced by CEA being around twice as selective as industry for some roles, with about 54 applicants per role at CEA. I think the summary of this post should be updated to say something like "CEA is more competitive than, but in the same ballpark as, industry."

Comment by Aaron_Scher on EA needs money more than ever · 2022-04-26T03:29:40.269Z · EA · GW

Congrats on your first forum post!! Now, in EA Forum style, I’m going to disagree with you... but really, I enjoyed reading this and I’m glad you shared your perspective on this matter. I’m sharing my views not to tell you you’re wrong but to add to the conversation and maybe find a point of synthesis or agreement. I'm actually very glad you posted this.

I don’t think I have an obligation to help all people. I think I have an obligation to do as much good as possible with the resources available to me. This means I should specialize my altruistic work in the areas with the highest EV or marginal return. This is not directly related to the number of morally valuable beings I care about. I don’t think that now valuing future humans means I have additional obligations. What changes is the bar for what’s most effective.

Say I haven’t learned about longtermism, I think GiveWell is awesome, and I am a person who feels obligated to do good. Maybe I can save lives at ~$50,000 per life by donating to GiveDirectly. Then I keep reading and find that AMF saves lives for ~$5,000 per life. I want to do the most good, so I give to the AMF, maximizing the positive impact of my donations.

Then I hear about longtermism and I get confused by the big numbers. But after thinking for a while, I decide that there are some cost-effective things I can fund in the longtermism or x-risk-reduction space. I pull some numbers out of thin air and decide that a $500 donation to the LTFF will save one life in expectation.

At this point, I think I should do the most good possible per resource, which means donating to the LTFF[1].

My obligation is to do the most good, on the margin where I can, I think. What longtermism changes for me is the cost-effectiveness bar that needs to be cleared. Prior to longtermism, it’s about $5,000 per life saved, via AMF. Now it’s about $500 but with some caveats. Importantly, increasing the pool of money is still good because it is still good to prevent kids dying of malaria; however, this is not the best use of my money.

Importantly, efficiency still matters. If LTFF saves lives for $500 and NTI saves lives for $400 (number also pulled out of thin air), I should give to NTI, all else equal.

I somewhat agree with you about

“Wow, we need to help current people, current animals, and future people and future animals, all with a subset of present-day resources. What a tremendous task”

However, I think it’s better to act according to “do the most good I can with my given resources, targeting the highest EV or marginal return areas”. Doing good well requires making sacrifices, and the second framing better captures this requirement.

Maybe a way I would try to synthesize my view and your conclusion is as follows: We have enormous opportunities to do good, more than ever before. If saving lives is cheaper now than ever before, the alternatives are relatively more expensive. That is, wasting $500 was only worth 0.1 lives before and now it’s worth a whole life. This makes wasting our resources even worse than it used to be.

Edit: Also, thank you for writing your post, because it gave me an opportunity to reflect on my own beliefs about this. :)

  1. ^

Although realistically I would diversify because of moral uncertainty, some psychological benefits of doing good with p~1, empirical uncertainty about how good the LTFF is, social benefits of giving to near-term causes, wanting to remain connected to current suffering, it intuitively seeming good, etc.

Comment by Aaron_Scher on Longtermist EA needs more Phase 2 work · 2022-04-24T04:03:45.788Z · EA · GW

Thanks for the clarification! I would point to this recent post on a similar topic to the last thing you said. 

Comment by Aaron_Scher on Longtermist EA needs more Phase 2 work · 2022-04-22T17:08:29.209Z · EA · GW

Sorry for the long and disorganized comment.

I agree with your central claim that we need more implementation, but I either disagree with or am confused by a number of other parts of this post. I think the heart of my confusion is that the post focuses on only one piece of end-to-end impact stories: is there a plausible story for how the proposed actions actually make the world better?

You frame this post as “A general strategy for doing good things”. This is not what I care about. I do not care about doing things; I care about things being done. This is semantic, but it also matters. I do not care about implementation for its own sake; I care about impact. The model you use assumes preparation, implementation, and the unspoken impact. If the action leading to the best impact is to wait, this is the action we should take, but it’s easy to overlook this if the focus is on implementation. So my Gripe #1 is that we care about impact, not implementation, and we should say this explicitly. We don’t want to fall into a logistics trap either[1].

The question you pose is confusing to me:

“if the entire community disappeared, would the effects still be good for the world?”.

I’m confused by the timeline of the answer to this question (the effects in this instant or in the future?). I’m also confused by what the community disappearing means – does this mean all the individual people in the community disappear? As an example, MLAB skills up participants in machine learning; it is unclear to me if this is Phase 1 or Phase 2 because I’m not sure whether the participants disappear: if they disappear then no value has been created, but if they don’t disappear (and we include future impact), they will probably go make the world better in the future. If the EA community disappeared but I didn’t, I would still go work on alignment. It seems like this is the case for many EAs I know. Such a world is better than if the EA community never existed, and the future effects on the world would be positive by my lights, but no Phase 2 activities happened up until that point. It seems like MLAB is probably Phase 1, as is university, as is the first half of many people’s careers where they are failing to have much impact and are building skill/career capital. If you do mean that all community members disappear, is this defined by participation in the community or by level of agreement with key ideas (or something else)? I would consider it a huge win if OpenAI’s voting board of directors were all members of the EA community, or if they had EA-aligned beliefs; this would actually make us less likely to die. Therefore, I think doing outreach to these folks, or more generally “educating people in key positions about the risks from advanced AI”, is a pretty great activity to be doing – even though we don’t yet know most of the steps to AGI going well. It seems like this kind of outreach is considered Phase 1 in your view because it’s just building the potential influence of EA ideas. So Gripe #2: the question is ambiguous, so I can’t distinguish between Phase 1 and Phase 2 activities using your criteria.

You give the example of

writing an AI alignment textbook would be useful to the world even absent our communities, so would be Phase 2

I disagree with this. I don’t think writing a textbook actually makes the world much better. (An AI alignment textbook exists) is not the thing I care about; (aligned AI making the future of humanity go well) is the thing I care about. There’s like 50 steps from the textbook existing to the world being saved, unless your textbook has a solution for alignment, and then it’s only like 10 steps[2]. But you still need somebody to go do those things.

In such a scenario, if we ask “if the entire community disappeared [including all its members], would the effects still be good for the world?”, then I would say that the textbook existing is counterfactually better than the textbook not existing, but not by much. I don’t think the requisite steps needed to prevent the world from ending would be taken. To me, assuming the current AI alignment community all disappears cuts our chances of survival in half, at least[3]. I think this framing is not the right one, because it is unlikely that the EA or alignment communities will disappear, and I think the world is unfortunately dependent on whether or not these communities stick around. To this end, I think investing in the career and human capital of EA-aligned folks who want to work on alignment is a class of activities relatively likely to improve the future. Convincing top AI researchers and math people etc. is also likely high EV, but you’re saying it’s Phase 2. Again, I don’t care about implementation, I care about impact. I would love to hear AI-alignment-specific Phase 2 activities that seem more promising than “building the resource bucket (# of people, quality of ideas, $ to a lesser extent, skills of people) of people dedicated to solving alignment”. By more promising I mean having a higher expected value or increasing our chances of survival more. Writing a textbook doesn’t pass that test, I don’t think. There are some very intractable ideas I can think of, like the UN creating a compute-monitoring division. Of the FTX Future Fund ideas, AI Alignment Prizes are maybe Phase 2 depending on the prize, but it depends on how we define the limits of the community; probably a lot of good work deserving of a prize would result in an Alignment Forum or LessWrong post without directly impacting people outside these communities much. Writing about AI ethics suffers from the same problem as the alignment textbook, because it just relies on other people (who probably won’t) taking it seriously. Gripe #3: in terms of AI alignment, the cause area I focus on most, we don’t seem to have promising Phase 2 ideas, but some Phase 1 ideas seem robustly good.

I guess I think AI alignment is a problem where not many things actually help. Creating an aligned AGI helps (so research contributing to that goal has high EV, even if it’s Phase 1), but it’s something we only get one shot at. Getting good governance helps; much of the way to do this is the Phase 1 work of aligned people getting into positions of power; the other part is creating strategy, policy, etc. CSET could create an awesome plan to govern AGI, but, assuming policymakers don’t read reports from disappeared people, this is Phase 1. Policy work is Phase 1 up until there is enough inertia for a policy to get implemented well without the EA community. We’re currently embarrassingly far from having robustly good policy ideas (with a couple of exceptions). Gripe #3.5: there’s so much risk of accidental harm from acting soon, and we have no idea what we’re doing.

I agree that we need implementation, but not for its own sake. We need it because it leads to impact or because it’s instrumentally good for getting future impact (as you mention: better feedback, drawing in more people, time diversification based on uncertainty). The irony and cognitive dissonance of being a community dedicated to doing lots of good that then spends most of its time thinking does not elude me; as a group organizer at a liberal arts college I think about this quite a bit.

I think the current allocation between Phase 1 and Phase 2 could be incorrect, and you identify some decent reasons why it might be. What would change my mind is a specific plan where having more Phase 2 activities actually increases the EV of the future. In terms of AI Alignment, Phase 1 activities just seem better in almost all cases. I understand that this was a high-level post, so maybe I'm asking for too much.

  1. ^

    the concept of a logistics magnet is discussed in Chapter 11 of “Did That Just Happen?!: Beyond “Diversity”―Creating Sustainable and Inclusive Organizations” (Wadsworth, 2021). “This is when the group shifts its focus from the challenging and often distressing underlying issue to, you guessed it, logistics.” (p. 129)

  2. ^

    Paths to impact like this are very fuzzy. I’m providing some details purely to show there’s lots of steps and not because I think they’re very realistic. Some steps might be: a person reads the book, they work at an AI lab, they get promoted into a position of influence, they use insights from the book to make some model slightly more aligned and publish a paper about it; 30 other people do similar things in academia and industry, eventually these pieces start to come together and somebody reads all the other papers and creates an AGI that is aligned, this AGI takes a pivotal act to ensure others don’t develop misaligned AGI, we get extremely lucky and this AGI isn’t deceptive, we have a future!

  3. ^

I think it sounds self-important to make a claim like this, so I’ll briefly defend it. Most of the world doesn’t recognize the importance or difficulty of the alignment problem. The people who do, and who are working on it, make up the alignment community by my definition; probably a majority consider themselves longtermists or EAs, but I don’t know. If they disappeared, almost nobody would be working on this problem (from a direction that seems even slightly promising to me). There are no good analogies, but... if all the epidemiologists disappeared, our chances of handling the next pandemic well would plunge. This is a bad example partially because others would realize we have a problem, and many people have a background close enough that they could fill in the gaps.

Comment by Aaron_Scher on “Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments · 2022-04-20T21:25:15.339Z · EA · GW

Non-original idea: What about a misaligned AI threatening to torture people? An aligned AGI could exist, and then a misaligned AGI could be created. The second AGI threatens to torture or kill lots of people if not given more power. Presumably, it could get in a position where it is able to do this without triggering the Deterrence mode of the aligned AGI, unless there is really good interpretability and surveillance. The first AGI, being a utility maximizer and suffering minimizer, cedes control of the future to the second AGI because it's better than the tons of suffering (e.g., even human extinction or paperclipping may be better than billions suffering from non-fatal rabies, or other unimaginable suffering). This failure mode, maybe call it hostage-based-takeover (HBT) if it doesn't have a better name, is still possible even given the scenario you lay out. That is, HBT is strongly in favor of offense given many values an aligned AGI could have and imperfect surveillance/deterrence. Variants of this idea have been discussed here in terms of an AI threatening to torture simulations of you if you don't let it out. The simulation part and the "you" part don't seem important for this argument to go through, because many people would back down in the face of a realistic threat to torture everybody. 

More original idea: It seems to me that novel technologies often favor offense because, at the core, successful offense requires exploiting one unpatched vulnerability, whereas successful defense requires finding and patching all the vulnerabilities that could plausibly be found by others. The FBI has to stop all the terrorists to be successful, but from the perspective of the terrorists, even one successful attack is a win. We could have 100 years of avoiding nuclear war, but it only takes one mess up to be really bad.

I think that offense is favored by default in a lot of cases. And when the stakes are incredibly high, like extinction, the bar for safe deterrence is incredibly high. It feels unlikely to me that we could reach a sufficiently high bar of safety without some pivotal act like:

  1. Global monitoring of compute usage (I need to reply to your slack message)
  2. Invasive monitoring of AI labs
  3. Destruction of other GPUs etc.
  4. Something else.

Thinking through these options, it seems like having the UN implement measures like this would be best, but as others have mentioned, this seems unlikely in the current environment.

Comment by Aaron_Scher on How I failed to form views on AI safety · 2022-04-20T03:19:05.480Z · EA · GW

Thanks for writing this; it was fascinating to hear about your journey here. I also fell into the cognitive block of “I can’t possibly contribute to this problem, so I’m not going to learn or think more about it.” I think this block was quite bad in that it got in the way of me having true beliefs, or even trying to, for quite a few months. This wasn’t something I explicitly believed, but I think it implicitly affected how much energy I put into understanding or trying to be convinced by AI safety arguments. I wouldn’t have realized it without your post, but my guess is that this trap is one of the most likely ways 80k could be counterproductive. By framing issues as “you need a PhD from a top-10 uni to work on this cause,” they give an (implicit, unintentional) license to everybody else to not care about said cause. As somebody who studied psychology, I think the way we talk about AI safety turned me off of even thinking about its importance. There seems to have been a shift recently toward “we need good ops and governance people too,” which seems better but maybe has the same problem to a lesser degree.

For whatever it’s worth, my current belief is something like “AI safety is so important that it is worth it for me to work on it even if I don’t currently know how I can help” (the exception being if I were counterproductive). I believe this quite strongly, and am willing (/privileged enough to be able) to sacrifice things like job security in order to try to help with alignment (though it’s unclear if this is the right decision). I would love to chat more about my and your beliefs if you’re interested. You can message me or find me on Facebook or something.

Comment by Aaron_Scher on A visualization of some orgs in the AI Safety Pipeline · 2022-04-11T19:20:11.021Z · EA · GW

Thank you for your comment. Personally, I'm not too bullish on academia, but you make good points as to why it should be included. I've updated the graphic, and it now says: "*I don’t know very much about academic programs in this space. They seem to vary in their relevance, but it is definitely possible to gain the skills in academia to contribute to professional alignment research. This looks like a good place for further interest: https://futureoflife.org/team/ai-existential-safety-community/"

If you have other ideas you would like expressed in the graphic I am happy to include them!

Comment by Aaron_Scher on A visualization of some orgs in the AI Safety Pipeline · 2022-04-11T18:55:46.402Z · EA · GW

Thanks! Nudged. I'm going to not include CERI and CHERI at the moment because I don't know much about them. I'll make a note of them.

Comment by Aaron_Scher on A visualization of some orgs in the AI Safety Pipeline · 2022-04-11T18:53:18.775Z · EA · GW

Thanks for the reminder of this! Will update. Some don't have websites but I'll link what I can find.

Comment by Aaron_Scher on A visualization of some orgs in the AI Safety Pipeline · 2022-04-11T18:51:33.438Z · EA · GW

Good question. I think "Learning the Basics" is specific to AI Safety basics and does not require a strong background in AI/ML. My sense is that AI Safety basics and ML are somewhat independent; the ML side of things simply isn't pictured here. For example, the MLAB (Machine Learning for Alignment Bootcamp) program which ran a few months ago focused on taking people with good software engineering skills and bringing them up to speed on ML. As far as I can tell, the focus was not on alignment specifically, but it was intended for people likely to work in alignment. I think the story of what's happening is way more complicated than a 1-dimensional (plus org size) chart, and the skills needed might be an intersection of software engineering, ML, and AI Safety basics.

Comment by Aaron_Scher on [deleted post] 2022-04-09T05:09:31.630Z

Hey! I love this video. It's been one of my favorite youtube videos in the last few years, but I don't think it highlights some of the major risks from advanced AI. The video definitely highlights bad actors and the need to regulate the use of powerful technologies. However, risks from advanced AI include both that and some other really scary stuff. I'm particularly worried about accidents arising from very powerful AI systems, and especially existential catastrophes. 

I think the key reason this is my focus is that I look at AI risks through the lens of existential risk – which threats might wipe out humanity's potential or make us go extinct. The things that do this are way worse, in my view, than things that kill almost everybody. For arguments on this, I recommend Toby Ord's book The Precipice. It seems quite unlikely to me that bad actors using drones would kill literally everybody (or enough people that civilization would collapse and not recover). One of my favorite arguments as to why advanced AI might pose a risk of killing us all is this one from Joseph Carlsmith.

I think there's a good case that technologies like these might destabilize the world in ways that make it difficult to prevent other catastrophes. For example, it would probably be harder to respond to an emerging pandemic if half the world's leaders were killed by drones. In this sense, Slaughterbots might be thought of as an existential risk factor.

For what it's worth, I added this video to a reading list I was making because I thought it was so much more tangible/digestible than some of the other arguments about AI risk. It took one of my co-organizers pointing out that the video isn't really about the risks we're most worried about for me to realize this. I was quite excited about the video and didn't really consider how much it lined up with the message we wanted to get across.

Comment by Aaron_Scher on Why should we care about existential risk? · 2022-04-09T04:33:11.607Z · EA · GW

Congrats on your first post! I appreciate reading your perspective on this – it's well articulated. 

I think I disagree about how likely existential risk from advanced AI is. You write:

Given that life is capable of thriving all on its own via evolution, AI would have to see the existence of any life as a threat for it to actively pursue extinction

In my view, an AGI (artificial general intelligence) is a self-aware agent with a set of goals and the capability to pursue those goals very well. Sure, if such an agent views humans as a threat to its own existence it would wipe us out. It might also wipe us out because we slightly get in the way of some goal it's pursuing. Humans have very complex values, and it is quite difficult to match an AI's values to human values. I am somewhat worried that an AI would kill us all not because it hates us but because we are a minor nuisance to its pursuit of unrelated goals. 

When humans bulldoze an ant hill in order to make a highway, it's not because we hate the ants or are threatened by them. It's because they're in the way of what we're trying to do. Humans tend to want to control the future, so if I were an advanced AI trying to optimize for some values, and they weren't the same exact values humans have, it might be easiest to just get rid of the competition – we're not that hard to kill.

I think this is one story of why AI poses existential risk, but there are many more. For further reading, I quite like Carlsmith's piece! Again, welcome to the forum!

Comment by Aaron_Scher on The Vultures Are Circling · 2022-04-06T18:28:58.148Z · EA · GW

Thanks for this comment, Mauricio. I always appreciate you trying to dive deeper – and I think it's quite important here. I largely agree with you. 

Comment by Aaron_Scher on New GPT3 Impressive Capabilities - InstructGPT3 [1/2] · 2022-03-14T06:58:27.021Z · EA · GW

Looking forward to the second post! I enjoy reading the fun/creative examples and hearing about how this differs from past models.

Comment by Aaron_Scher on A Gentle Introduction to Long-Term Thinking · 2022-03-10T00:06:30.031Z · EA · GW

This is great, I enjoyed reading it. Regarding Footnote #8, I would consider mentioning the following example of why discounting makes no sense:

Robert Wiblin: I think we needn’t dwell on this too long, because as you say, it has basically 0% support among people who seriously thought about it, but just to give an idea how crazy it is, if you applied a time preference of just 1% per annum, pure rate of time preference of just 1% per annum, that would imply that the welfare of Tutankhamun was more important than that of all seven billion humans that are alive today, which I think is an example of why basically no one, having thought about this properly, believes that this is a sensible moral, philosophical view.

From this podcast.
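
The arithmetic behind that quote is easy to verify; the ~3,300-year figure for Tutankhamun is my approximation:

```python
# Quick check of the Tutankhamun example. Taking him to have lived roughly
# 3,300 years ago (an approximate figure) and applying a pure rate of time
# preference of 1% per year:
years_ago = 3_300
discount_rate = 0.01

weight = (1 + discount_rate) ** years_ago
print(f"Weight on Tutankhamun's welfare relative to a person today: {weight:.2e}")
print(f"That is ~{weight / 7e9:,.0f} times the 7 billion people alive now.")
# ~1.8e14, i.e. roughly 26,000 times 7 billion -- so even a 1% pure time
# preference implies his welfare outweighs that of everyone alive today.
```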

Comment by Aaron_Scher on On presenting the case for AI risk · 2022-03-09T03:47:14.034Z · EA · GW

Thanks for writing this up, it’s fantastic to get a variety of perspectives on how different messaging strategies work.

  1. Do you have evidence or a sense of whether the people you have talked to have changed their actions as a result? I worry that the approach you use is so similar to what people already think that it doesn’t lead to shifts in behavior. (But we need nudges where we can get them.)
  2. I also worry about anchoring on small near-term problems and this leading to a moral-licensing-type effect for safety (and a false sense of security). It is unclear how likely this is. As in, if people care about AI safety but lack the big picture, they might establish a safety team dedicated to, say, algorithmic bias. If the counterfactual is no safety team, this is likely good. If the counterfactual is a safety team focused on interpretability, this is likely bad. It could be that “having a safety team” makes an org or the people in it feel more justified in taking risks or investing less in other elements of safety (this seems likely); this would be bad. To me, the cruxes here are something like: “what do people do after these conversations?”, “are the safety things they work on relevant to the big problems?”, and “how does safety culture interact with security-licensing or a false sense of security?”. I hope this comment didn’t come off aggressively. I’m super excited about this approach and particularly the way you meet people where they’re at, which is usually a much better strategy than how messaging around this usually comes off.

Comment by Aaron_Scher on Comments for shorter Cold Takes pieces · 2022-03-05T02:45:49.344Z · EA · GW

For those particularly concerned with counterfactual impact, this is an argument to work on problems or in fields that are just beginning or don’t exist yet, where many of the wins haven’t been realized; this is not a novel argument. I think the bigger update is that “ideas get harder to find” indicates that you may not need Beethoven’s creativity or Newton’s math skills to make progress on hard problems which are relatively new or have received little attention. In particular, AI Safety seems like a key place where this rings true, in my opinion.

Comment by Aaron_Scher on Some thoughts on vegetarianism and veganism · 2022-02-15T02:20:16.997Z · EA · GW

Thanks for writing this! Epistemic note: I am engaging in highly motivated reasoning and arguing for veg*n. 

  1. As BenStewart mentioned, virtue ethics seems relevant. I would similarly point to Kant’s categorical imperative, in its universalizability formulation: "act only in accordance with that maxim through which you can at the same time will that it become a universal law.” Not engaging in moral atrocities is a case where we should follow such an ideal, in my opinion. We should at least consider the implications under moral uncertainty and worldview diversification.
  2. My journey in EA has in large part been a journey of “aligning my life and my choices to my values,” or trying to lead a more ethical life. To this end, it is fairly clear that being veg*n is the ethical thing to do relative to eating animal products (I would note I’m somewhere between vegan and vegetarian, and I think moving toward veganism is ethically better).
  3. The signaling effect of being veg*n seems huge at both an individual and community level. As Luke Freeman mentioned, it would be hard to take EA seriously if we were less veg*n than average. Personally, I would likely not be in EA if being veg*n wasn’t relatively normal. This was a signal to me that these people really care and aren’t just in it when it’s convenient for them. This point seems pretty important and one of the things that hopefully sets EA apart from other communities oriented around doing good. I want to call back Ben Kuhn’s idea from 2013 of trying vs. pretending to try in terms of EA:

“A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors.”

3.5. At an individual level, when I tell people about Intro to EA seminars I can say things like “In Week 3 we read about expanding our moral consideration and animal welfare. I realized that I wasn’t giving animals the moral consideration I think they deserve, and now I try to eat fewer animal products to align my values and my actions.” (I’ve never said it this eloquently). While I haven’t empirically tested it, people seem to like anecdotes like this. 

4. I think as a community we’re asking for a lot of trust; something like “we want to align AI to not kill us all, and nobody else is doing it, so you have to trust us to do it.” Maybe this is an argument for hedging under moral uncertainty, or similarly for trying to be less radical. I feel like an EA that is mostly veg*n is less radical than one with no veg*ns, due to some of the other ethical claims we make (e.g., strong longtermism). Being less radical while still upholding our values sounds like a reasonable spot to be in when (implicitly) asking for the reins to the future.

4.5 In this awesome paper, Evan Williams argues that hedging against individual possible moral catastrophes is quite difficult. In this case, it appears to me that we can still hedge here, and we should, given our position of influence. 

5. Intuitively, diversifying across a range of activities that might be valuable seems useful: some things with a 0.01% chance of avoiding an x-risk, some with a 10% chance of reducing animal suffering, some with a 95% chance of reducing malaria deaths, and some with a 50% chance of reducing the number of animals suffering on factory farms. I need to write out my thoughts on this in more detail, but I think it's useful to diversify across {chance of having any impact at all}, and not eating animals is a place where we can be pretty sure we're having an impact in the long term.
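To make the "diversify across {chance of having any impact at all}" point concrete, here's a minimal sketch (all probabilities and payoffs below are made-up placeholders, not estimates of any real intervention): two portfolios can have nearly identical expected value while differing enormously in the probability that at least one thing pays off.

```python
# Minimal sketch: similar expected value, very different P(any impact at all).
# All numbers are illustrative placeholders, not estimates of real interventions.

def expected_value(portfolio):
    """Sum of probability * payoff across interventions."""
    return sum(p * v for p, v in portfolio)

def prob_any_impact(portfolio):
    """Probability that at least one intervention has any impact, assuming independence."""
    prob_none = 1.0
    for p, _ in portfolio:
        prob_none *= 1 - p
    return 1 - prob_none

# Each entry is (probability of having any impact, payoff if it works).
all_longshots = [(0.0001, 100_000), (0.0001, 100_000)]  # two x-risk-style bets
mixed = [(0.0001, 100_000), (0.95, 10.5)]                # one longshot + one near-sure thing

for name, portfolio in [("all longshots", all_longshots), ("mixed", mixed)]:
    print(f"{name}: EV = {expected_value(portfolio):.2f}, "
          f"P(any impact) = {prob_any_impact(portfolio):.4f}")
```

The two portfolios are almost interchangeable in expectation, but only the mixed one gives high confidence of having done some good at all, which is the property veg*nism contributes on this framing.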

Comment by Aaron_Scher on Idea: Red-teaming fellowships · 2022-02-03T20:07:38.207Z · EA · GW

Thanks for writing this up. It seems like a good idea, and you address what I view as the main risks. I think that (contingent on a program like this going well) there is a pretty good chance that it would generate useful insights (Why #3). This seems particularly important to me for a couple reasons. 

  1. Having better ideas and quality scrutiny = good
  2. Relatively new EAs who do a project like this and have their work be received as meaningful/valuable would probably feel much more accepted/wanted in the community 

I would therefore add some structure that I think would be helpful, with the goal of increasing the chances of a project like this generating useful insights. In your Desiderata you mention

“Red-teaming targets should ideally be actual problems from EA researchers who would like to have an idea/approach/model/conclusion/… red-teamed against.” 

I propose a stronger version: topics are chosen in conjunction with EA researchers or community members who want a specific idea/approach/model/conclusion/… red-teamed against and who agree to provide feedback at the end. Setting up this relationship from the beginning seems important if you actually want the right people to read your report. With a less structured format, I'm worried folks might construct decent arguments or concerns in their red-team write-ups, but nobody (or not the right people) would read them, so the work would be wasted.

Note 1: maybe researchers are really busy so this is actually "I will provide feedback on a 2 page summary"

Note 2: asking people what they want red-teamed is maybe a little ironic when a goal is good epistemic norms. This makes me quite uncertain that this is a useful approach, but it also might be that researchers are okay providing feedback on anything. But it seems like one way of increasing the chances of projects like this having actual impact.

This idea makes me really excited because I would love to do this!

I agree that this gets around most of the issues with paying program participants. 

Comment by Aaron_Scher on We should be paying Intro Fellows · 2022-01-29T08:04:16.675Z · EA · GW

Thanks for your response, Akash! I know I'm late to reply, so forgive me. 

Especially thanks for bringing up 1.2 as a failure mode where people aren't engaged but continue coming. This seems worrisome, and I think I didn't consider it because it's not something I've noticed in my facilitating. But it's obviously very important. 

I agree that there would be lots of variability across groups, but I'm unsure what this implies. I am not totally against high-risk, high-reward strategies, and this probably depends on existential risk timelines as well as what the status quo (or counterfactual) looks like. If Uni groups are already getting ~80% of the people they want, high-risk/reward strategies are not so good, but if it's more like 20%, this flips. I should probably figure out what I think it is.

Anyway, thanks for your thoughts, I have found them very helpful.

Comment by Aaron_Scher on We should be paying Intro Fellows · 2022-01-29T07:53:52.058Z · EA · GW

Hey Michael! I read your comment when you wrote it, but am only replying now :/ 

Thank you for your thoughts; you raise important questions. One I want to home in on is:

if EA is so focused on effectiveness, why does it make sense to pay people to just learn about EA

In a way, this seems like the classic question of "how can we convert money into X?", where X is sometimes organizer time. Here, X is "highly engaged EAs who use an EA mindset to determine their career". One proposed answer is to give out tons of books. I'm not sure if we have good cost-effectiveness estimates of book giveaways, but we also don't have cost-effectiveness estimates of paying program participants. Giving out books is reputationally much safer, but in principle it is also "spend money so people learn about EA."

The second part of your question, "especially when no other student group is paying people for things," also seems important. I increasingly believe that we should not think about EA groups like other student groups. This transition (and the messaging around it) seems complicated, but I think if done right, we should totally give ourselves space to do things no other student group is doing.

Comment by Aaron_Scher on Aaron_Scher's Shortform · 2022-01-28T22:15:26.147Z · EA · GW

Hey Ed, thanks for your response. I have no disagreement on 1 because I have no clue what the upper end of people applying is – simply that it's much higher than the number who will be accepted and the number of people who (I think) would do a good job.

2. I think we do disagree here. I think these qualities are relatively common in the CBers and group organizers I know (small sample). I agree that a short application timeline will decrease the number of great applicants; I'm also unsure about b, and c seems like the biggest factor to me.

Probably the crux here is what proportion of applicants have the skills you mention. My guess is ⅓ to ⅔, but this is based on the people I know, which may skew higher than reality.

Comment by Aaron_Scher on Aaron_Scher's Shortform · 2022-01-28T20:09:06.516Z · EA · GW

Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does not do 1, 2, or 3. To this end, simplified messages are easily misinterpreted.
I really like this: "The reason being that they redistribute power, not just resources."

Comment by Aaron_Scher on Aaron_Scher's Shortform · 2022-01-28T19:56:01.948Z · EA · GW

Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:

- What do you or others view as talent/skill gaps in the EA community; how can you build those skills/talents in a job that you're more likely to get? (I'm thinking person/project management, good mentoring, marketing skills, as a couple examples)

Comment by Aaron_Scher on Aaron_Scher's Shortform · 2022-01-26T19:14:55.839Z · EA · GW

Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role.
Applications for the Campus Specialist role at CEA close in like 5 days. Joan Gass's talk at EAG about this was really good, and it has led to many awesome, talented people believing they should do Uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job.

Because the role is new, CEA is going to hire like 8-12 people for it; these people are going to do great things for community building and likely have large impacts on the EA community in the next 10 years. Many of the other people who apply will feel extremely discouraged and led on. I'm not sure what they will do, but the ~10 (or more) who were great fits for the Campus Specialist program but didn't get it will probably do something much less impactful in the next 2 years.

I have no idea what the effects longer-term will be, but definitely not good. Probably some of these people will leave the EA community temporarily because they are confused, discouraged, and don't think their skill set fits well with what employers in the EA community care about right now. 

This is avoidable if CEA expands the number of people they hire and builds out the system for organizing this role. I think the strongest argument against doing so is that the role is fairly experimental and we don't know how it will work out. I think that the upside of having more people in this role totally overshadows the downsides, which seem to mainly be money (as long as you hire competent, agentic people). The role description suggests an impact of counterfactually moving ~10 people per year into high impact careers. I think even if the number were only 5, this role would be well worth it, and my guess is that the next 10 best applicants would still have such an effect (even at less prestigious universities).

Disclaimer: I have no insider knowledge. I am applying for the Campus Specialist role (and therefore have a personal preference for more people getting the job). I think there is about a 2/3 chance of most of the above problem occurring, and I'm least confident about paragraph 3 (what the people who don't get the role do instead).

Comment by Aaron_Scher on Partnerships between the EA community and GLAMs (galleries, libraries, archives, and museums) · 2021-12-26T22:24:37.259Z · EA · GW

Love this idea, and your suggestion of talks with AMNH; it seems like there could be lots of interesting content around longtermism or existential risk with a collaboration there. A small idea would be asking libraries to buy EA- and rationality-related books (if they don't have them), and to make sure they're shelved with other related books. The "business self-help" and "how to be a top CEO" sections should probably include the 80k book, imo.

Comment by Aaron_Scher on Pilot study results: Cost-effectiveness information did not increase interest in EA · 2021-12-21T01:31:44.899Z · EA · GW

Thanks for your thorough comment! Yeah I was shooting for about 60 participants, but due to time constraints and this being a pilot study I only ended up with 44, so even more underpowered.

Intuitively I would expect a larger effect size, given that I don't consider the manipulation to be particularly subtle; but yes, it was much subtler than it could have been. This is something I will definitely explore more if I continue this project; for example, adding visuals and a manipulation check might do a better job of making the manipulation salient. I would like to have a manipulation check like "What is the difference between average and highly cost-effective charities?" And then set it up so that participants who get it wrong have to try again.

The fact that Donation Change differed significantly between Info groups does support that second main hypothesis, suggesting that CE info affects effective donations. This result, however, is not novel. So yes, the effect you picked up on is probably real – but this study was underpowered to detect it at a level of p<.05 (or even marginal significance).
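As a rough illustration of that power point (this is a hedged sketch, not a reanalysis of the actual data; the "medium" effect size of d = 0.5 and the ~22-per-condition split are assumptions), a quick calculation with statsmodels shows how little chance a 44-person, two-group study has of reaching p < .05:

```python
# Hedged power sketch, not a reanalysis of the study's data.
# Assumptions: a two-group comparison, ~22 participants per condition,
# and a "medium" standardized effect (Cohen's d = 0.5).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved with 22 per group at alpha = .05, two-sided:
achieved_power = analysis.power(effect_size=0.5, nobs1=22, alpha=0.05,
                                ratio=1.0, alternative='two-sided')

# Sample size per group needed to hit the conventional 80% power target:
needed_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05,
                                        ratio=1.0, alternative='two-sided')

print(f"Power with 22/group at d = 0.5: {achieved_power:.2f}")      # well under 0.8
print(f"n per group needed for 80% power: {needed_per_group:.0f}")  # roughly 64
```

Even under a fairly generous effect-size assumption, a pilot this size mostly tells you about feasibility rather than whether the effect exists, which fits the "underpowered" framing above.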

In terms of CE info being ineffective, I'm thinking mainly about interest in EA, where there really seems to be nothing going on: "There was no significant difference between the Info (M = 32.52, SD = 5.92) and No Info (M = 33.12, SD = 4.01) conditions, F(1, 40) = .118, p = .733, ηp² = .003." There isn't even a trend in the expected direction. This was most important to me because, as far as I know, there is no previous empirical evidence to suggest that CE info affects interest in EA. It's also more relevant to me as somebody running an EA group and trying to generate interest from people outside the group.

Thanks again for your comment! Edit: Here's the previous study suggesting CE info influences effective donations: http://journal.sjdm.org/20/200504/jdm200504.pdf

Comment by Aaron_Scher on What are the best (brief) resources to introduce EA & longtermism? · 2021-12-20T01:03:55.191Z · EA · GW

I really like Ajeya Cotra's Intro EA talk (https://youtu.be/48VAQtGmfWY) (35 mins at 1x speed). I also like this article on longtermism (https://80000hours.org/articles/future-generations/), although it took me about 25 mins to read. This is a really important question, I'm glad you're asking it, and I would really like to see more empirical work on it rather than simply "I like this article" or "a few people I talked to like this video," which seems to be the current state. I'm considering spending the second semester of my undergrad thesis trying to figure out the best ways to introduce longtermism.

Also worth considering: MacAskill's video What We Owe the Future (https://youtu.be/vCpFsvYI-7Y), 40 mins at 1x speed.

Comment by Aaron_Scher on Supporting Video, Audio, and other non-text media on the Forum · 2021-12-19T21:11:09.771Z · EA · GW

Having more types of content on the forum is appealing to me. There's probably discussion of this elsewhere, but would it be difficult to have audio versions of all posts? Like a built-in text-to-speech option.

Comment by Aaron_Scher on A case for the effectiveness of protest · 2021-11-29T21:24:26.359Z · EA · GW

Thank you for looking into this! This strikes me as really important!! Your post is long so I didn't read it – sorry – but this made me think of an article that I didn't see you cite which might be relevant: https://www.cambridge.org/core/services/aop-cambridge-core/content/view/136610C8C040C3D92F041BB2EFC3034C/S000305542000009Xa.pdf/agenda_seeding_how_1960s_black_protests_moved_elites_public_opinion_and_voting.pdf

Comment by Aaron_Scher on The Explanatory Obstacle of EA · 2021-11-28T11:29:54.608Z · EA · GW

Here you go: https://forum.effectivealtruism.org/posts/GKSYJ9rLnBdtXGAog/aaron_scher-s-shortform?commentId=LLiK7vLmaTdYmev4E

Comment by Aaron_Scher on Aaron_Scher's Shortform · 2021-11-28T11:29:24.182Z · EA · GW

Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people will need help with different things and at different times from one another, so you help out when you can and you ask for assistance when you need it; it is also reciprocal because benefiting the community is inherently benefiting oneself. This model implies a level field of power among everybody in the community. Unlike charity, mutual aid relies on social relations and being in community to fight institutional and societal structures of oppression (https://ssw.uga.edu/news/article/what-is-mutual-aid-by-joel-izlar/).

"[Mutual Aid Funds] aim to create permanent systems of support and self-determination, whereas charity creates a relationship of dependency that fails to solve more permanent structural problems. Through mutual aid networks, everyone in a community can contribute their strengths, even the most vulnerable. Charity maintains the same relationships of power, while mutual aid is a system of reciprocal support." (https://williamsrecord.com/376583/opinions/mutual-aid-solidarity-not-charity/).

Within this framework, the idea of "helping people" often relies on people with power aiding the helpless, but doing so in a way that reinforces power difference. To help somebody is to imply that they are lesser and in need of help, rather than an equal community member who is particularly hurt by the system right now. This idea also reminds people of the White Man's Burden and other examples of people claiming to help others but really making things worse.

I could ask my more progressive friends if they think it is good to help people, and they would probably say yes – or at least I could demonstrate that they agree with me given a few minutes of conversation – but that doesn't mean they wouldn't be peeved at hearing "Effective Altruism is about using evidence and careful reasoning to help others the best we can."

I would briefly note that mutual aid is not incompatible with EA to the extent that EA is a question; however, requiring that we be in community with people in order to help them means that we are neglecting the world's poorest people who do not have access to (for example) the communities in expensive private universities.

Comment by Aaron_Scher on The Explanatory Obstacle of EA · 2021-11-28T00:07:58.638Z · EA · GW

Great post, I totally agree that we need more work in this area. Also agree with other commenters that volunteering isn’t a main focus of EA advice, but it probably should be – given the points Mauricio made.

Nitpicky, but it would have been nice to have a summary at the start of the post.

I want to second Bonus #2: I think EA is significantly about a toolkit for helping others effectively, and using examples of tools seems helpful for an engaging pitch. Is anybody familiar with a post or article listing the main EA tools? One of my side-projects is developing a workshop on these, because I think it could be a really good first introduction to EA for newcomers; even if they don't want to get further involved, they've learned something (we've added value to their life) and therefore (hopefully) have a positive attitude toward EA.

The phrasing “helping others” will turn off some progressives. I'm not sure how to deal with this, but it is worth being aware of. This might help explain why (though I only skimmed it): https://sojo.net/articles/mutual-aid-changing-way-we-help-each-other

Comment by Aaron_Scher on We need alternatives to Intro EA Fellowships · 2021-11-21T02:25:08.428Z · EA · GW

Again, thank you for some amazing thoughts. I'll only respond to one piece:

"But, anecdotally, it seems like a big chunk (most?) of the value EA groups can provide comes from:

  • Taking people who are already into weird EA stuff and connecting them with one another
  • And taking people who are unusually open/receptive to weird EA stuff and connecting them with the more experienced EAs"

I obviously can't disagree with your anecdotal experience, but I think what you're talking about here is closely related to what I see as one of EA's biggest flaws: lack of diversity. I'm not convinced that weird people know how to do good better than anybody else, but by not creating a way for other people to be involved in this awesome movement, we lose the value they would create for us and the value we would create for them. There also seems to be a suspicious correlation between these kinds of "receptive to EA ideas" people and white men, which appears worrisome. That is, even if our goal is to target marketing to weird EAs or receptive-to-EA people, it seems like the way we're doing that might have some bias that has led our community to be disproportionately white and male relative to most general populations.

On that note, I think learning about EA has made my life significantly better, and I think this will be the case for many other people. I think everybody who does an Intro Fellowship (and isn't familiar with EA) learns something that could be useful to their life – even if they don't join the community or become more involved. I don't want to miss out on these people, even if it's a more efficient allocation of time/resources to only focus on people we expect will become highly engaged.

Shortform post coming soon about this 'projects idea' where I'll lay out the pros and cons.

Comment by Aaron_Scher on We need alternatives to Intro EA Fellowships · 2021-11-20T18:47:07.993Z · EA · GW

Good points. We should have explained our approach in a separate post that we could link to, because I didn't explain it too well in my comment. We are trying to frame the project like so: this is not the end goal. It is practice at what this process looks like, and a way to improve our community in a small but meaningful way. Put another way, the primary goals are skill building and building our club's reputation on campus. Another goal is to just try more stuff to help meta-EA community building; even though we have a ton of resources on community building, we don't seem to have all that many trials or examples of groups doing weird stuff and seeing what happens.

Some of the projects we are considering are related to global problems (e.g., carbon labeling on food in dining hall). I like the project ideas you suggest and we will consider them.

One reason we're focusing on local is that the "international charity is colonialism" sentiment is really strong here. I think it would be really bad for the club if we got strongly associated with that sentiment. Attempting to dispel this idea is also on my to-do list, but it's low priority.

Another point of note is that some of what the EA community does is only good in expectation. For instance, decreasing extinction risk by 0.5% per century is considered a huge gain by most EAs. But imagine tabling at a club fair and saying "Oh, what did we actually accomplish last year? We trained up students to spend their careers working on AI safety in the hopes of decreasing the chance of humanity ending from robots by 0.02%." Working on low-probability, high-impact causes and interventions is super important, but I think it makes for crappy advertising because most people don't think about the world in terms of expected value.
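For what it's worth, the expected value arithmetic behind that kind of number fits in a few lines; the figures below are placeholder assumptions, and to keep things conservative only people alive today are counted, ignoring future generations entirely:

```python
# Back-of-the-envelope expected value of a tiny reduction in extinction risk.
# Placeholder assumptions: only people alive today are counted, and future
# generations (the usual longtermist consideration) are ignored entirely.
risk_reduction = 0.0002             # an absolute 0.02% reduction in extinction risk
people_alive_today = 8_000_000_000  # rough current world population

expected_lives_saved = risk_reduction * people_alive_today
print(f"Expected lives saved: {expected_lives_saved:,.0f}")  # 1,600,000
```

In expectation that "0.02%" is over a million lives, but, as above, "in expectation" is exactly the framing that tends not to land at a club fair.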

Side point to the side point: I agree that a dollar would go much further spent on extreme poverty than on college students, but I'm less sure about an hour of time. I am in this college community; I know what its needs are. I would spend 5 minutes of the hour figuring out what needs to be done and the rest of the time actually helping folks. If I spent an hour on global poverty, it's unclear I would actually "do" anything. I would spend most of the time either researching or explaining to my community why it is morally acceptable to do international charity work at all. But, again, we are considering some relevant projects.

Comment by Aaron_Scher on We need alternatives to Intro EA Fellowships · 2021-11-20T18:18:34.080Z · EA · GW

Yes. Will do an end of the year assessment of what worked and what didn't. Focus will likely be on Winter Break Programming and Project Fellowships.

Comment by Aaron_Scher on We need alternatives to Intro EA Fellowships · 2021-11-20T00:52:08.486Z · EA · GW

Thanks for posting this! One worry I have, particularly relevant to a Project Based Fellowship, is that it would not involve sufficiently learning key ideas. Mauricio discussed this, but I think there's even more to it than is obvious. In this critique of EA (https://www.lesswrong.com/posts/CZmkPvzkMdQJxXy54/another-critique-of-effective-altruism), it is brought up that we frequently "Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities." The less content presented in a fellowship, the more likely we are to go down that route, I think. EA is really, really complex, and one thing I like about the Intro Fellowship is that you can end it thinking "I have the basics, but there is so much more to know"; I worry that with a shorter fellowship, participants may not realize how little of the surface they've scratched. They may come to identify EA with just RCT-backed global poverty work; it almost feels better if people think of EA as global poverty + animal welfare + AI + longtermism + pandemics and climate change – even though these are cause areas and not principles. Anecdotally, I've found that many folks just learning about EA are turned off by what feels like armchair cause prio that is too theoretical; specific causes make more sense to many of them, and if you give them enough causes, they will internalize that EA is actually about the principles which lead to such diversity in causes.

While I share your worry of EA becoming defined by cause areas rather than principles, it feels much more likely that we would get a situation like Mauricio mentioned of "vaguely EA-related project ideas" and people who walk away from the fellowship without actually understanding EA very well. On this note, conversations with students not involved in EA often go something like this: Them: "What does your club do?" Me: "We discuss ways of improving the world most effectively and prepare students to do something really valuable with their lives" Them: "Do you do anything besides talking?!" Me: "Do career workshops count?..."

At least at the Claremont Colleges, students are really excited about actually doing stuff, and this can be difficult to reconcile with EA. This semester, we decided to do Effectively Altruistic projects limited in scope to our school (e.g., what can we do to improve student wellness the most? Decrease the school's carbon footprint? etc.). We've been working on Cause Prioritization for this, narrowing a large list down to a small one, and we're going to have small groups of students tackle these projects in the spring. Will follow up with forum posts afterward to report on how it went.

However, I don't think doing this alone is a good idea; it doesn't actually give folks a sense of what EA is all about unless they already have good background knowledge. So, this Winter Break, we're doing a bunch of programming that we are pushing super hard. Mainly, taking the 8-week Intro Fellowship and squishing it into 3.5-4 weeks; this is the main program we want people to do. The idea is that folks learn about EA ideas during break when they're not stressed about class, then we come back to school and the post-fellowship engagement is the Project Based Fellowship (I expect this will be good for most people) and career planning. I'm optimistic about this plan for a bunch of reasons, and it potentially presents one solution to the problem.

Pros of doing this: students don't have the fellowship overlapping with school; it's fairly intense and fast, which has the benefits you discuss; and it keeps students connected to one another and mentally engaged during break (very good in my opinion/experience because I get lonely and lazy).

This is similar to the 3-week fellowship sprint you suggest, except that I don't think of this as being at all about identifying promising fellows. I need to write up my thoughts more thoroughly in a shortform post, but pretty much I think the content of the Intro Fellowship would be useful to like 50-80% of students, even if only 20% continue engaging with EA afterward. EA has really good ideas that are useful to almost everybody, and the emphasis on highly promising people seems elitist and holds us back from impacting more students in a smaller way.