Posts

How much time should EAs spend engaging with other EAs vs with people outside of EA? 2021-01-18T03:20:47.526Z
[Podcast] Rob Wiblin on self-improvement and research ethics 2021-01-15T07:24:30.833Z
Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? 2021-01-15T06:56:20.644Z
Books / book reviews on nuclear risk, WMDs, great power war? 2020-12-15T01:40:04.549Z
Should marginal longtermist donations support fundamental or intervention research? 2020-11-30T01:10:47.603Z
Where are you donating in 2020 and why? 2020-11-23T08:47:06.681Z
Modelling the odds of recovery from civilizational collapse 2020-09-17T11:58:41.412Z
Should surveys about the quality/impact of research outputs be more common? 2020-09-08T09:10:03.215Z
Please take a survey on the quality/impact of things I've written 2020-09-01T10:34:53.661Z
What is existential security? 2020-09-01T09:40:54.048Z
Risks from Atomically Precise Manufacturing 2020-08-25T09:53:52.763Z
Crucial questions about optimal timing of work and donations 2020-08-14T08:43:28.710Z
How valuable would more academic research on forecasting be? What questions should be researched? 2020-08-12T07:19:18.243Z
Quantifying the probability of existential catastrophe: A reply to Beard et al. 2020-08-10T05:56:04.978Z
Propose and vote on potential tags 2020-08-04T23:49:47.992Z
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence 2020-08-04T11:38:48.816Z
Crucial questions for longtermists 2020-07-29T09:39:17.144Z
Moral circles: Degrees, dimensions, visuals 2020-07-24T04:04:02.017Z
Do research organisations make theory of change diagrams? Should they? 2020-07-22T04:58:41.263Z
Improving the future by influencing actors' benevolence, intelligence, and power 2020-07-20T10:00:31.424Z
Venn diagrams of existential, global, and suffering catastrophes 2020-07-15T12:28:12.651Z
Some history topics it might be very valuable to investigate 2020-07-08T02:40:17.734Z
3 suggestions about jargon in EA 2020-07-05T03:37:29.053Z
Civilization Re-Emerging After a Catastrophic Collapse 2020-06-27T03:22:43.226Z
I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. 2020-05-11T09:35:22.543Z
Existential risks are not just about humanity 2020-04-28T00:09:55.247Z
Differential progress / intellectual progress / technological development 2020-04-24T14:08:52.369Z
Clarifying existential risks and existential catastrophes 2020-04-24T13:27:43.966Z
A central directory for open research questions 2020-04-19T23:47:12.003Z
Database of existential risk estimates 2020-04-15T12:43:07.541Z
Some thoughts on Toby Ord’s existential risk estimates 2020-04-07T02:19:31.217Z
My open-for-feedback donation plans 2020-04-04T12:47:21.582Z
What questions could COVID-19 provide evidence on that would help guide future EA decisions? 2020-03-27T05:51:25.107Z
What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? 2020-03-27T03:05:46.791Z
Fundraising for the Center for Health Security: My personal plan and open questions 2020-03-26T16:53:45.549Z
Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? 2020-03-19T06:07:03.834Z
[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society 2020-03-14T09:04:10.955Z
Suggestion: EAs should post more summaries and collections 2020-03-09T10:04:01.629Z
Quotes about the long reflection 2020-03-05T07:48:36.639Z
Where to find EA-related videos 2020-03-02T13:40:18.971Z
Causal diagrams of the paths to existential catastrophe 2020-03-01T14:08:45.344Z
Morality vs related concepts 2020-02-10T08:02:10.570Z
What are information hazards? 2020-02-05T20:50:25.882Z
Four components of strategy research 2020-01-30T19:08:37.244Z
When to post here, vs to LessWrong, vs to both? 2020-01-27T09:31:37.099Z
Potential downsides of using explicit probabilities 2020-01-20T02:14:22.150Z
[Link] Charity Election 2020-01-19T08:02:09.114Z
Making decisions when both morally and empirically uncertain 2020-01-02T07:08:26.681Z
Making decisions under moral uncertainty 2020-01-01T13:02:19.511Z
MichaelA's Shortform 2019-12-22T05:35:17.473Z

Comments

Comment by michaela on A list of EA-related podcasts · 2021-01-27T00:33:24.166Z · EA · GW

Two book-length series of rationality-related posts by Eliezer Yudkowsky have been made into podcast versions:

(Not sure if those are the most useful links. Personally I just found the podcasts via searching the Apple Podcasts app.)

I found Rationality: From AI to Zombies very useful and quite interesting, and HPMOR fairly useful and very surprisingly engaging. I've ranked them as the 4th and 30th (respectively) most useful EA-related books I've read so far.

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T23:56:21.302Z · EA · GW

I like this answer.

I think it's basically a nonsense to try to compare "cause areas" without reference to specific things you can do, aka solutions. Hence, when we say we're comparing "cause areas" what we are really doing is assessing the best solution in each cause area "bucket" and evaluating their cost-effectiveness. The most important cause = the one with the very most cost-effective intervention.

Maybe a minor point, but I don't think this is quite right, because: 

  • I don't think we know what the best solution in each "bucket" is
  • I don't think we have to in order to make educated guesses about which cause area will have the best solution, or will have the best "identifiable positive outliers" (or mean, or median, or upper quartile, or something like that)
  • I don't think we only care about the best solution; I think we also care about other identifiable positive outliers. Reasons for that include the facts that:
    • we may be able to allocate enough resources to an area that the best would no longer be the best on the margin
    • some people may be sufficiently better fits for something else that that's the best thing for them to do
    • (And there are probably cases in which we have to or should "invest" in a cause area in a general way, not just invest in one specific intervention. So it's useful to know which cause area will be able to best use a large chunk of a certain type of resources, not just which cause area contains the one intervention that is most cost-effective given generic resources on the current margin.)

For example, let's suppose for the sake of discussion that technical AI safety research is the best solution within the x-risk cause area, that deworming is the best solution in the global health & development[1] cause area, and that technical AI safety is better than deworming.[2] In that case, in comparing the cause areas (to inform decisions like what skills EAs should skill up in, what networks we should build, what careers people should pursue, and where money should go), it would still be useful to know what the other frontrunner solutions are, and how they compare across cause areas.

(Maybe you go into all that and more in your thesis, and just simplified a bit in your comment.)

[1] The fact that this is a reply to you made it salient to me that the term "global health & development" doesn't clearly highlight the "wellbeing" angle. Would you call Happier Lives Institute's cause area "global wellbeing"?

[2]  Personally, I believe the third claim, and am more agnostic about the other two, but this is just an example.

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T12:03:01.011Z · EA · GW

(That link seems to lead back to this question post itself - I'm guessing you meant to link to this other post?)

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T11:02:40.886Z · EA · GW

I'm not sure I understand. I don't think what I said above requires that it be the case that "[most or all] different tasks within a promising cause area are generally good" (it sounds like you were implying "most or all"?). I think it just requires that the mean prioritisation-worthiness of tasks in some cause, or the prioritisation-worthiness of the identifiable positive outliers among tasks in some cause, are substantially better than the equivalent things for another cause area.

I think that phrasing is somewhat tortured, sorry. What I'm picturing in my head is bell curves that overlap, but one of which has a hump notably further to the right, or one of which has a tail that extends further. (Though I'm not claiming bell curves are actually the appropriate distribution; that's more like a metaphor.)

E.g., I think that one will do more good if one narrows one's search to "longtermist interventions" rather than "either longtermist or present-day developed-world human interventions". And I more tentatively believe the same when it comes to longtermist vs global health & dev. But I think it's likely that some interventions one could come up with for longtermist purposes would be actively harmful, and that others would be worse than some unusually good present-day-developed-world human interventions. 
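
To make that overlapping-distributions picture slightly more concrete, here's a toy numerical sketch (the distributions and parameters are entirely made up for illustration, not estimates of any real cause area). The point is just that two overlapping distributions of intervention cost-effectiveness can differ a lot in their means and upper percentiles, which is the kind of comparison one can make without knowing either area's single best intervention:

```python
import numpy as np

# Entirely made-up distributions of intervention cost-effectiveness for two
# hypothetical cause areas - purely illustrative, not estimates of anything real.
rng = np.random.default_rng(0)
cause_a = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # thinner right tail
cause_b = rng.lognormal(mean=0.5, sigma=1.5, size=100_000)  # shifted right, fatter tail

for name, draws in [("A", cause_a), ("B", cause_b)]:
    print(
        f"Cause {name}: mean = {draws.mean():.1f}, "
        f"95th percentile = {np.percentile(draws, 95):.1f}, "
        f"99th percentile = {np.percentile(draws, 99):.1f}"
    )

# The two distributions overlap heavily (some A interventions beat many B
# interventions), but B's mean and identifiable positive outliers come out
# notably better - a comparison that doesn't require knowing the single best
# intervention in either area.
```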

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T06:02:20.957Z · EA · GW

I agree with your first two sentences. I feel unsure precisely what you mean by the sentence after that.

E.g., are you saying that no research organisations are spending resources trying to help people prioritise between different broad cause areas (e.g., longtermism vs animal welfare vs global health & development)? Or just that there's no research org solely/primarily focused on that?

My impression is that: 

  • There were multiple orgs that were primarily focused on between-cause prioritisation research in the past
  • But most/all have now decided on one or more cause areas as their current main focus(es) for now, and so now spend more of their effort on within-cause-area work
  • But many still do substantial amounts of work that's focused on or very relevant to between-cause prioritisation, and may do more of that again later. E.g.:
    • Open Phil do worldview investigations
    • 80,000 Hours continue to put some hours (e.g.) into non-longtermist issues, even though longtermist issues are definitely their primary focus
    • GPI are currently focused mostly on global priorities research that's relevant to longtermism. But much of that is directly about how much to prioritise longtermism in the first place (partly to make a better case for longtermism, but I think also partly just because they're genuinely unsure on that). And I imagine much of their work is also relevant to prioritising between other cause areas, and that they may diversify their focuses more in future.
      • Though I don't actually know a huge amount about GPI's work
  • And there is also a bunch of non-research effort aimed at helping individuals think about which broad causes they want to/should focus on

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T05:49:06.724Z · EA · GW

Two links with relevant prior discussion:

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T05:44:29.429Z · EA · GW

In practice, Open Philanthropy Project (which is apparently doing cause prioritization) has fixed a list of cause areas, and is prioritizing among much more specific opportunities within those cause areas. (I'm actually less sure about this as of 2021, since Open Phil seems to have made at least one recent hire specifically for cause prioritization.)

Open Phil definitely does have a list of cause areas, and definitely does spend a lot of their effort prioritising among much more specific opportunities within those cause areas.

But I think they also spend substantial effort deciding how much resources to allocate to each of those broad cause areas (and not just with the 2021 hire(s)). Specifically, I think their worldview investigations are, to a substantial extent, intended to help with between-cause prioritisation. (Though it seems like they'd each also help with within-cause decision-making, e.g. how much to prioritise AI risk relative to other longtermist focuses and precisely how best to reduce AI risk.)

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T05:35:46.845Z · EA · GW

[I think the following comment sounds like I'm disagreeing with you, but I'm not sure whether/how much we really have different views, as opposed to just framing and emphasising things differently.]

So it feels like "cause prioritization" is just a first step, and by the end it might not even matter what cause areas are. It seems like what actually matters is producing a list of individual tasks ranked by how effective they are.

I agree that cause prioritization is just a first step. But it seems to me like a really useful first step. 

It seems to me like it'd be very difficult, inefficient, and/or unsuccessful to try to produce a ranked list of individual tasks without first narrowing our search down by something like "cause area". And the concept of "cause area" also seems useful to organise our work and help people find other people who might have related knowledge, values, goals, etc.

To illustrate: I think it's a good idea for most EAs to: 

  • Early on, spend some significant amount of time (let's say 10 hours-1,000 hours) thinking about considerations relevant to which broad cause area to prioritise
    • E.g., the neglectedness of efforts to improve lives in developing vs developed countries, the astronomical waste argument, arguments about the sentience or lack thereof of nonhuman animals
  • Then gradually move to focusing more on considerations relevant to prioritising and "actually acting" within a broad cause area, as well as focusing more on "actually acting"

And I think it'd be a much less good idea for most EAs to: 

  • Start out brainstorming a list of tasks that might be impactful without having been exposed to any considerations about how the scale, tractability, and neglectedness of improving wellbeing among future beings compares to that of improving wellbeing among nonhumans etc.
    • What would guide this brainstorming?
    • I expect by default this would involve mostly thinking of the sort of tasks or problems that are commonly discussed in general society
  • Then try to evaluate and/or implement those tasks
    • I'm again not really sure how one would evaluate those things
    • I guess one could at this point think about things like how many beings the future might contain and whether nonhumans are sentient, and then, based on what one learns, adjust the promisingness of each task separately
      • But it would in many cases seem more natural to adjust the value of all/most future-focused interventions together, and of all/most animal-focused interventions together, etc.

All that said, as noted above, I don't think "cause areas" should be the only unit or angle of analysis; it would also be useful to think about things like intervention areas, as well as what fields one has or wants to develop expertise in and what specific tasks that expertise is relevant to. 

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T05:33:44.084Z · EA · GW

As explained (EA Forum link; HT Edo Arad) by Owen Cotton-Barratt back in 2014, there are at least two meanings of "cause area". My impression is that since then, effective altruists have not really distinguished between these different meanings, which suggests to me that some combination of the following things are happening: (1) the distinction isn't too important in practice; (2) people are using "cause area" as a shorthand for something like "the established cause areas in effective altruism, plus some extra hard-to-specify stuff"; (3) people are confused about what a "cause area" even is, but lack the metacognitive abilities to notice this.

As noted above, personally, I usually find it most useful to think about cause areas in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help. 

I think it'd be useful to also "revive" Owen's suggested term/concept of "An intervention area, i.e. a cluster of interventions which are related and share some characteristics", as clearly distinguished from a cause area. 

E.g., I think it'd be useful to be able to say something like "Political advocacy is an intervention area that could be useful for a range of cause areas, such as animal welfare and longtermism. It might be valuable for some EAs to specialise in political advocacy in a relatively cause-neutral way, lending their expertise to various different EA-aligned efforts." (I've said similar things before, but it will probably be easier now that I have the term "intervention area" in mind.)

Comment by michaela on Why "cause area" as the unit of analysis? · 2021-01-26T05:33:08.386Z · EA · GW

My main thoughts on this:

  • I share the view that EAs often seem unclear about precisely what they mean by "cause area", and that it seems like there are multiple somewhat different meanings floating around
    • This also therefore makes "cause prioritisation" a somewhat murky term as well
  • I think it would probably be valuable for some EAs to spend a bit more time thinking about and/or explaining what they mean by "cause area"
  • I personally think about cause areas mostly in terms of a few broad cause areas which describe what class of beneficiaries one is aiming to help
    • If future beings: Longtermism
    • If nonhuman animals (especially those in the near-term): Animal welfare
    • If people in developing countries: Global health & development
    • We can then subdivide those cause areas into narrower cause areas (e.g. human-centric longtermism vs animal-inclusive longtermism; farm animal welfare vs wild animal welfare)
    • This is somewhat similar to Owen Cotton-Barratt's "A goal, something we might devote resources towards optimising"
      • But I think "a goal" makes it much less clear how granular we're being (e.g., that could mean there's a whole cause area just for "get more academics to think about AI safety"), compared to "class of beneficiaries"
    • Caveats:
      • There are also possibilities other than those 3
        • e.g., near-term humans in the developed world
      • And there are also things I might normally call "cause areas" that aren't sufficiently distinguished just by the class of beneficiaries one aims to help
        • e.g., longevity/anti-ageing
      • I don't mean to imply that broad cause areas are just a matter of a person's views on moral patienthood; that's not the only factor influencing which class of beneficiaries one focuses on helping
        • E.g., two people might agree that it's probably good to help both future humans and chickens, but disagree about empirical questions like the current level of x-risk, or about methodological/epistemological questions like how much weight to place on chains of reasoning (e.g., the astronomical waste argument) vs empirical evidence
  • I'm very confident that it's useful to have the concept of "cause areas", to sometimes carve up the space of all possible altruistic goals into at least the above 3 cause areas, and to sometimes have the standard sorts of cause prioritisation research and discussion
  • I think the above-mentioned concept of "cause areas" should obviously not be the only unit of analysis
    • E.g., I think most EAs should spend most of their lifetime altruistic efforts prioritising and acting within broad cause areas like longtermism or animal welfare
      • E.g., deciding whether to work on reducing risks of extinction, reducing other existential risks, or improving the longterm future in other ways
        • And also much narrower decisions, like precisely how best to craft and implement some specific nuclear security policy

I'll add some further thoughts as replies to this answer. 

Comment by michaela on Possible gaps in the EA community · 2021-01-24T09:50:52.317Z · EA · GW

Misc small comments

For example, when you inherit money might be a good time to make a significant donation: if the money isn’t part of your usual revenue stream, you might not need all of it.

This does seem like a good idea to me, but I think Generation Pledge might already be doing something like that? (That said, I don't know much about them, and I don't necessarily think that one org doing ~X means no other org should do ~X.)

Also, for people thinking about this broader idea of potentially setting up pledges (or whatever) that cover things GWWC isn't designed for, it may be useful to check out A List of EA Donation Pledges (GWWC, etc).

It could be cool to have a point person for an area who does things like: chats to people considering moving into that area (to help them decide), regularly checks in with people working in the area (to support them in their journey), and connects people who could productively collaborate.

I know very little about Animal Advocacy Careers, but this sounds like the sort of thing they might do? And if they don't do it, then maybe they could start doing so for the animal space (which could be useful directly and also could provide a model others could learn from)? And if they raise strong specific reasons to be inclined against doing that (rather than just reasons why it's not currently their top priority), that could be useful to learn from as well.

But I think that pressure is ultimately counterproductive, because I think we’ll only be able to do the best we can if we consider a broad array of options and think about them carefully. 

Yeah, I think it'd be pretty terrible if people took EA's focus on prioritisation, critical thinking, etc. as a reason to not raise ideas that might turn out to be uninteresting, low-quality, low-priority, or whatever. It seems best to have a relatively low bar for raising an idea (along with appropriate caveats, expressions of uncertainty, etc.), even if we want to keep the bar for things we spend lots of resources on quite high. We'll find better priorities if we start with a broad pool of options.

(See also babble and prune [full disclosure: I don't know if I've actually read any of those posts].)

(Obviously some screening is needed before even raising an idea - we won't literally say any random sequence of syllables, and we should probably not bother writing about every idea that seemed potentially promising for a moment but not after a minute of thought. But it basically seems best to keep the bar for raising ideas fairly low.)

Comment by michaela on Possible gaps in the EA community · 2021-01-24T09:35:24.882Z · EA · GW

Same here.

The idea of "academic institutes set up by EAs in disciplines such as psychology and history" also sounds potentially exciting to me. And I wrote some semi-relevant thoughts in the post Some history topics it might be very valuable to investigate (and other posts tagged History may be relevant too).

Comment by michaela on Possible gaps in the EA community · 2021-01-24T09:30:03.860Z · EA · GW

But almost all of the impactful positions in the world are at organisations which don’t identify as EA. So it’s important for us to find ways to make sure that wherever they work, people can still have a sense of being often around people with similar values and who help them figure out their path.

I share the view that this seems potentially really valuable. Anecdotally, I know an EA who seems like they could do well in roles at EA orgs, or could potentially rise to fairly high positions in government roles in a country that's not a major EA hub. There are of course many considerations influencing their thinking about which path to pursue, but one notable one is that the latter just understandably sounds less fun, less satisfying over the long term, and more prone to value drift.

I think efforts to address this issue might ideally also try to address the issue that status, validation, etc. within the EA movement are easier to access by working at EA orgs than at other orgs, and probably especially hard to access by working at orgs outside the major EA hubs (e.g., a key department of a government agency in an Asia country rather than in the UK or US). 

We tried to brainstorm some ideas for how EA in general could support people like this EA I know to happily pursue roles where (by default) there'd be no EAs in their orgs and maybe only a few in their city/country as well. Some (not necessarily good) ideas, from memory:

  • Have more EA conferences in these not-currently-EA-hubs, so that the people living there can sometimes get "booster shots" of EA interactions
  • Provide funding for these people to occasionally travel to EA conferences / EA hubs
  • Make the EA movement more geographically distributed, e.g. by some EA orgs moving to places that aren't currently hubs

Some (also not necessarily good) ideas that come to mind now:

  • Support more EA community building in these areas
  • Support the creation of organisations like HIPE in these areas
    • This could be seen as supporting community building that's more targeted in terms of sector/career, yet not necessarily explicitly EA-branded. It could build a network of people with similar values and a desire to help each other, even if few/none explicitly identify as part of the EA movement.
    • (I don't actually know much about HIPE)
  • Some sort of virtual community building stuff?
    • Things like the EA Anywhere group?
    • Things like online coworking spaces?
    • (There's obviously a lot that could be done in this broad bucket)
  • Efforts to just make EAs less concerned about status, validation, etc. within the EA movement (or more concerned about those things from outside the EA movement)
    • (No big ideas for this immediately come to my mind)
  • Efforts to just make status, validation, etc. easier to access for people who work at non-EA orgs and outside of EA hubs
    • This could include EAs sharing info about a broader range of organisations, geographical areas, career paths, etc., so that more EAs can easily see why a wider range of things are impactful

Comment by michaela on Possible gaps in the EA community · 2021-01-24T09:13:41.206Z · EA · GW

What I’d have liked was: 

  • A succinct summary of what seemed good and bad about the change to give me an idea of whether I agreed with it.
  • A really clear action plan if I wanted to help in some way. That might include, for example: sample letters to send to your MP, some considerations on what makes letters to your MP more/less likely to succeed (are emails better than physical letters, or vice versa?), a link to where you can find out who your local MP is and what the best way to contact them is.

This seems like a good idea to me. And the second idea seems to me like a potential Task Y, meaning something which has some or all of the properties: 

  • "Task Y is something that can be performed usefully by people who are not currently able to choose their career path entirely based on EA concerns*.
  • Task Y is clearly effective, and doesn't become much less effective the more people who are doing it.
  • The positive effects of Task Y are obvious to the person doing the task."

Relatedly, that second idea also seems like something anyone could just start and provide value in right away - no need for permission, special resources, or unusual skills. (My local EA group actually discussed similar things previously in the context of climate change, and took some minor actions in this direction.)

Comment by michaela on Possible gaps in the EA community · 2021-01-24T09:00:35.912Z · EA · GW

A long quibbly tangent

I think one way we could make the world far better in decades’ time is by making it the case that all major decision makers (politicians, business leaders etc) use ‘will this most improve wellbeing over the long run?’ as their main decision criterion. 

I'd say there’s a >50% chance that this would indeed be good, and that it’s plausible it'd be very good. But it also seems to me plausible that this would be bad or very bad. This is for a few reasons:

  1. You didn't say what you meant by wellbeing. A decision maker might say "wellbeing" and mean only the wellbeing of humans, or of people in countries like theirs (e.g., predominantly English-speaking liberal democracies), or of people in their country, or of an in-group of theirs within their country (e.g., people with the same political leaning or race as them).
    • This could be because they explicitly believe that only those people are moral patients, or just because that's who they implicitly focus on.
    • If the decision makers do have a narrow subset of all moral patients in mind when they think about increasing wellbeing, that would probably at least reduce the benefits of decision makers having that as their main criterion. It might also lead to that criterion being net harmful, if it means people are consequentialist altruists for one group only, having stripped away the norms and deontological constraints that often help prevent certain bad behaviours.
    • Maybe this is just a nitpick, as you could just edit your statement to incorporate some sort of impartiality. But then you'd have to grapple with exactly how to do that - do we want the criteria decision makers use to come pre-loaded with our current best guesses about moral patienthood and weights? Or with some particular way of handling moral uncertainty? Or with some general principles for thinking about how to handle moral uncertainty? 
  2. I have an intuition that just making people more consequentialist and more altruistic-in-some-sense, without also making them more rational, reflective, cautious, etc., has a decent chance of being harmful. I think the (overlapping) drivers of this intuition are:
    • The fact that doing that would move a seemingly important variable into somewhat uncharted territory, meaning we should start out pretty uncertain about what outcomes it would have, and thus predict a nontrivial chance of fairly bad outcomes
    • The various potential ways people have suggested naive consequentialism could cause harms (even from a consequentialist perspective)
    • There seeming to have been some historical cases where people have been mobilised to do bad things by consequentialist and altruistic-in-some-sense arguments ("for the greater good")
    • A sort of Chesterton's fence / The Secret of Our Success-style argument for thinking very carefully before substantially changing anything that currently seems like a major part of how the world runs (even if it seems at first glance like the consequences of the change would be good)

[The above statements of mine are pretty vague, and I can try to elaborate if that’d be useful.]

So I'd favour thinking more about precisely what sort of changes we want to make to future decision-makers’ values, reasoning, and criteria for decision-making, and doing so before we make any major pushes on those fronts. 

And beyond that generic "more research needed" statement, I'd favour trying to package increases in consequentialism and generic altruism with more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, and probably some other things like that. 

The following posts and their comment sections contain some relevant prior discussion:

...but, I think all of this might be pretty much just a tangent. That’s because I think we could just change the sentence of yours that I quoted at the start of this comment to make it reflect a broader package of attributes we want to change in future leaders, and your other points would still stand. E.g., teaching at universities could try to inculcate not just consequentialism and generic altruism but also more reflection on moral circles, more reflectiveness in general, various rationality skills and ideas, etc.

Comment by michaela on Possible gaps in the EA community · 2021-01-24T08:59:45.740Z · EA · GW

Thanks for this post! I think I basically share the view that all of those prompts are useful and all of those "gaps" are worth seriously considering. I'll share some thoughts in separate comments.

(FWIW, I think maybe the idea I feel least confident is worth having an additional person focus ~full-time on - considering what other activities are already being done - is creating "some easy way for someone who’s about to make their yearly donation to chat to another person about it.")

Regarding influencing future decision-makers

Something which would make that most likely to happen is having EA ideas discussed in courses in all top universities. That led me to wonder whether we’re currently neglecting supporting and encouraging lecturers to do that.

Both of those claims match my independent impression.

On the first claim: This post using neoliberalism as a case study seems relevant (I highlight that mainly for readers, not as new evidence, as I imagine that article probably already influenced your thinking here). 

On the second claim: When I was a high school teacher and first learned of EA, two of the main next career steps I initially considered were:

  • Try to write a sort of EA textbook
  • Try to become a university lecturer who doesn't do much research, and basically just takes on lots of teaching duties
    • My thinking was that:
      • I'd seen various people argue that it's a shame that so many world-class researchers have to spend much of their time teaching when that wasn't their comparative advantage (and in some cases they were outright bad at it)
      • And I'd also heard various people argue that a major point of leverage over future leaders may be influencing what ideas students at top unis are exposed to
      • So it seemed like it might be worth considering trying to find a way to specialise in taking teaching load off top researchers' plates while also influencing future generations of leaders
    • I didn't actually look into whether jobs along those lines exist. I considered that maybe, even if they don't exist, one could be entrepreneurial and convince a uni to create one, or adapt another role into that.
      • Though an obstacle would probably be the rigidity of many universities.

I ultimately decided on other paths, partly due to reading more of 80k's articles. And I do think the decisions I made make more sense for me. But reading this post has reminded me of those ideas and updated me towards thinking it could be worth some people considering the second one in particular.

Supporting teaching of effective altruism at universities

I feel quite good about the ideas in this section - I'd definitely be excited for one or more things along those lines to be done by one or more people who are good fits for that.

Some of those activities sound like they might be sort-of similar to some of the roles people involved in other EA education efforts (e.g., Students for High-Impact Charity, SPARC) and Effective Thesis have played. So maybe it'd be valuable to talk to such people, learn about their experiences and their perspectives on these ideas, etc.

Comment by michaela on A list of EA-related podcasts · 2021-01-24T02:52:53.544Z · EA · GW

Something I'm surprised neither I nor anyone else has mentioned yet: the Slate Star Codex Podcast. This consists almost entirely of audio versions of SSC articles, along with a handful of recordings of SSC meetups (presentations + Q&As). 

(I think this is my second favourite EA-related podcast, with the 80k podcast being first.)

Comment by michaela on EA Forum feature suggestion thread · 2021-01-24T02:43:09.951Z · EA · GW

Oh, whoops! Yeah, I must've seen previews of comments hundreds of times, yet forgot they existed while writing the above comment. (I had these feature ideas while getting to sleep, and it seems I did not take a moment to re-evaluate them when I woke up...)

Comment by michaela on EA Forum feature suggestion thread · 2021-01-24T01:42:43.569Z · EA · GW

  1. The option to tag individual shortform posts (not just a user's whole shortform page, which may feature a large number of shortform posts on a variety of very different topics)
  2. Previews for shortform posts showing up when the shortform posts are linked to elsewhere on the Forum, in the same way previews for regular posts show up [ETA: as Habryka notes below, this is already the case]

(I find the shortform feature really valuable, and I think these two things would make it even more valuable.)

Comment by michaela on What are some high impact companies to invest in? · 2021-01-24T00:19:24.836Z · EA · GW

[The following isn't an answer to your question of which specific companies to invest in with impact in mind; instead, it’s about general pros, cons, and strategies for impact investing. Hopefully that's somewhat useful, and hopefully someone else will also jump in to more directly answer your question.]

The EA researchers John Halstead and Hauke Hillebrandt looked into the pros and cons of impact investing. Here's an EA Forum post from Hillebrandt, which also links to the report they wrote together: Impact investing is only a good idea in specific circumstances

Hillebrandt writes in that post: "We find that effective impact investing is very hard and, to maximize social impact, it is usually much more effective to donate." (Though note the "usually"; from memory, there were various caveats/nuances/uncertainties.)

And here's a talk from Halstead: Is impact investing impactful?

Others have also discussed the topic on the Forum, e.g. here. You might find other relevant posts by searching "impact investing" and/or looking at the Investing tag.

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T11:23:24.008Z · EA · GW

I also still find the concept of complex cluelessness slippery, and am under the impression that many EAs misunderstand and misuse the term compared to Greaves' intention. But if you haven't seen it already, you may find this talk from Greaves helpful.

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T06:55:16.975Z · EA · GW

One other thing on this section of the interview: Ajeya and Rob both say that the way the SSA leads to the doomsday argument seems sort-of "suspicious". Ajeya then says that, on the other hand, the way the SIA causes an opposing update also seems suspicious. 

But I think all of her illustrations of how updates based on the SIA can seem suspicious involved infinities. And we already know that loads of things involving infinities can seem counterintuitive or suspicious. So it seems to me like this isn't much reason to feel that SIA in particular can cause suspicious updates. In other words, it seems like maybe the "active ingredient" causing the suspiciousness in the examples she gives is infinity, not SIA. Whereas the way the SSA leads to the doomsday argument doesn't have to involve infinity, so there it seems like SSA is itself suspicious.

I'm not sure whether this is a valid or important point, but maybe it is? (I obviously don't think we should necessarily dismiss things just because they feel "suspicious", but it could make sense to update a bit away from them for that reason, and, to the extent that that's true, a difference in the suspiciousness of SSA vs SIA could matter.)

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T06:49:49.787Z · EA · GW

The doomsday argument, the self-sampling assumption (SSA), and the self-indication assumption (SIA)

The interview contained an interesting discussion of those ideas. I was surprised to find that, during that discussion, I felt like I actually understood what the ideas of SSA and SIA were, and why that mattered. (Whereas there've been a few previous times when I tried to learn about those things, but always ended up mostly still feeling confused. That said, it's very possible I currently just have an illusion of understanding.)

While listening, I felt like maybe that section of the interview could be summarised as follows (though note that I may be misunderstanding things, such that this summary might be misleading):

"We seem to exist 'early' in the sequence of possible humans. We're more likely to observe that if the sequence of possible humans will actually be cut off relatively early than if more of the sequence will occur. This should update us towards thinking the sequence will be cut off relatively early - i.e., towards thinking there will be relatively few future generations. This is how the SSA leads to the doomsday argument.

But, we also just seem to exist at all. And we're more likely to observe that (rather than observing nothing at all) the more people will exist in total - i.e., the more of the sequence of possible humans will occur. This should update us towards thinking the sequence won't be cut off relatively early. This is how the SIA pushes against the doomsday argument.

Those two updates might roughly cancel out [I'm not actually sure if they're meant to exactly, roughly, or only very roughly cancel out]. Thus, these very abstract considerations have relatively little bearing on how large we should estimate the future will be."

(I'd be interested in people's thoughts on whether my attempted summary seems accurate, as well as on whether it seems relatively clear and easy to follow.)
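
For what it's worth, here's a toy Bayesian sketch of that (possible) cancellation, with entirely made-up numbers: two hypotheses about how many humans will ever exist, and a birth rank of roughly 100 billion. In this simple two-hypothesis setup the SSA-style update and the SIA-style weighting cancel exactly; I'm not claiming the real anthropic arguments cancel this cleanly.

```python
def normalise(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Two made-up hypotheses about how many humans will ever exist.
n_total = {"short future": 2e11, "long future": 2e14}
prior = {h: 0.5 for h in n_total}
birth_rank = 1e11  # roughly "we're around the 100 billionth human ever born"

# SSA-style update: treat yourself as a random draw from everyone who will ever
# exist, so P(this birth rank | hypothesis) = 1/N when the rank is <= N, else 0.
likelihood = {h: (1 / n) if birth_rank <= n else 0.0 for h, n in n_total.items()}
ssa_posterior = normalise({h: prior[h] * likelihood[h] for h in n_total})

# SIA-style weighting: additionally weight each hypothesis by how many observers
# it contains, i.e. multiply the prior by N before applying the same likelihood.
sia_prior = normalise({h: prior[h] * n_total[h] for h in n_total})
combined = normalise({h: sia_prior[h] * likelihood[h] for h in n_total})

print(ssa_posterior)  # heavily favours "short future" - the doomsday-style update
print(combined)       # back to ~50/50 - the 1/N and N factors cancel here
```

(The exact cancellation here is just an artefact of the toy setup, where the same factor of N appears in both updates; it's not meant as evidence about how cleanly the real arguments cancel.)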

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T04:26:25.566Z · EA · GW

One other part of those sections that feels worth highlighting:

Rob Wiblin: Is there anything you can say to people who I guess either don’t think it’s possible they’ll get hired by Open Phil and maybe were a bit disappointed by that, or have applied and maybe didn’t manage to get a trial?

Ajeya Cotra: Yeah. I guess my first thought is that Open Phil is not people’s only opportunity to do good. Even doing generalist research of the kind that I think Open Phil does a lot of, especially for that kind of research, I think it’s a blessing and a curse, but you just need a desk and a computer to do it. I would love to see people giving it a shot more, and I think it’s a great way to get noticed. So when we write reports, all the reports we put out recently have long lists of open questions that I think people could work on. And I know of people doing work on them and that’s really exciting to me. So that’s one way to just get your foot in the door, both in terms of potentially being noticed at a place like Open Phil or a place like FHI or GPI, and also just get a sense of what does it feel like to do this? And do you like it? Or are the cons outweighing the pros for you?

I sort-of effectively followed similar advice, and have been very happy with the apparent results for my own career. And I definitely agree that there are a remarkable number of open questions (e.g., here and here) which it seems like a variety of people could just independently have a crack at, thereby testing their fit and/or directly providing useful insights.

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T04:22:32.955Z · EA · GW

Is expanding beyond our solar system necessary for achieving a long period with very low extinction risk?

As part of the discussion of "Effective size of the long-term future", Ajeya and Rob discussed the barriers to and likelihood of various forms of space colonisation. I found this quite interesting. 

During that section, I got the impression that Ajeya was implicitly thinking that a stable, low-extinction-risk future would require some kind of expansion beyond our solar system. (Though I don't think she said that explicitly, so maybe I'm making a faulty inference. Perhaps what she actually had in mind was just that such expansion could be one way to get a stable, low-extinction-risk future, such that the likelihood of such expansion was one important question in determining whether we can get such a future, and a good question to start with.)

If she does indeed think that, that seems a bit surprising to me. I haven't really thought about this before, but I think I'd guess that we could have a stable, low-extinction-risk future - for, let's say, hundreds of millions of years - without expanding beyond our solar system. Such expansion could of course help[1], both because it creates "backups" and because there are certain astronomical extinction events that would by default happen eventually to Earth/our solar system. But it seems to me plausible that the right kind of improved technologies and institutions would allow us to reduce extinction risks to negligible levels just on Earth for hundreds of millions of years. 

But I've never really directly thought about this question before, so I could definitely be wrong. If anyone happens to have thoughts on this, I'd be interested to hear them.

[1] I'm not saying it'd definitely help - there are ways it could be net negative. And I'm definitely not saying that trying to advance expansion beyond our solar system is an efficient way to reduce x-risk.

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T04:07:01.552Z · EA · GW

A separate comment I had been writing about that section of the interview:

  • Ajeya and Rob discussed "Fairness agreements". This seemed to me like a novel and interesting approach that could be used for normative/moral uncertainty (though Open Phil seem to be using it for worldview uncertainty, which is related but a bit different)
    • I currently feel more inclined towards some other approaches to moral uncertainty
      • But at this stage where the topic of moral uncertainty has received so little attention, it seems useful to come up with additional potential approaches
      • And it may be that, for a while, it remains useful to have multiple approaches one can bring to bear on the same question, to see where their results converge and diverge
    • On a meta level, I found it interesting that the staff of an organisation primarily focused on grantmaking appear to have come up with what might be a novel and interesting approach to normative/moral uncertainty
      • That seems like the sort of abstract theoretical philosophy work that one might expect to only be produced by academic philosophers, rather than people at a more "applied" org

A more direct response to your comment:

  • I haven't heard of the idea before, and had read a decent amount on moral uncertainty around the start of 2020. That, plus the way the topic was introduced in this episode, makes me think that this might be a new idea that hasn't been publicly written up yet.
    • (See also the final bullet point here)
  • I think it's understandable to have been a bit confused by that part; I don't think I fully understood the idea myself, and I got the impression that it was still at a somewhat fuzzy stage
    • (I'd guess that with an hour of effort I could re-read that part of the transcript and write an ok explainer, but unfortunately I don't have time right now. But hopefully someone else will be able to do that, ideally better and more easily than I could!)

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T03:53:22.696Z · EA · GW

Somewhat relatedly, Ajeya seems to sort-of imply that "the animal-inclusive worldview" is necessarily neartermist, and that "the longtermist worldview" is necessarily human-centric. For example, the above quote about longtermism focuses on "people", which I think would typically be interpreted as just meaning humans, and as very likely excluding at least some beings that might be moral patients (e.g., insects). And later she says:

And then within the near-termism camp, there’s a very analogous question of, are we inclusive of animals or not?

But I think the questions of neartermism vs longtermism and animal-inclusivity vs human-centrism are actually fairly distinct. Indeed, I consider myself an animal-inclusive longtermist.

I do think it's reasonable to be a human-centric longtermist. And I do tentatively think that even animal-inclusive longtermism should still prioritise existential risks, and still with extinction risks as a/the main focus within that. 

But I think animal-inclusivity makes at least some difference (e.g., pushing a bit in favour of prioritising reducing risks of unrecoverable dystopias). And it might make a larger difference. And in any case, it seems worth avoiding implying that all longtermists must be focused only or primarily on benefitting humans, since that isn't accurate.

(But as with my above comment, I expect that Ajeya knows these things, and that the fact she was speaking rather than producing edited written content is relevant here.)

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T03:05:43.369Z · EA · GW

Thanks for making this linkpost, Evelyn! I did have some thoughts on this episode, which I'll split into separate comments so it's easier to keep discussion organised. (A basic point is that the episode was really interesting, and I'd recommend others listen as well.)

A bundle of connected quibbles: 

  • Ajeya seems to use the term "existential risk" when meaning just "extinction risk"
  • She seems to imply totalism is necessary for longtermism
  • She seems to imply longtermism is only/necessarily focused on existential risk reduction
  • (And I disagree with those things.)

An illustrative quote from Ajeya:

I think I would characterise the longtermist camp as the camp that wants to go all the way with buying into the total view — which says that creating new people is good — and then take that to its logical conclusion, which says that bigger worlds are better, bigger worlds full of people living happy lives are better — and then take that to its logical conclusion, which basically says that because the potential for really huge populations is so much greater in the future — particularly with the opportunity for space colonisation — we should focus almost all of our energies on preserving the option of having that large future. So, we should be focusing on reducing existential risks.

But "existential risks" includes not just extinction risk but also includes risks of unrecoverable collapse, unrecoverable dystopia, and some (but not all) s-risks/suffering catastrophes. (See here.) 

And my understanding is that, if we condition on rejecting totalism: 

  • Risk of extinction does become way less important
  • Risk of unrecoverable collapse probably becomes way less important (though this is a bit less clear)
  • Risk of unrecoverable dystopia and s-risks still retain much of their importance

(See here for some discussion relevant to those points.)

So one can reasonably be a non-totalist yet still prioritise reducing existential risk - especially risk of unrecoverable dystopias. 

Relatedly, a fair number of longtermists are suffering-focused and/or prioritise s-risk reduction, sometimes precisely because they reject the idea that making more happy beings is good but do think making more suffering beings is bad.

Finally, one can be a longtermist without prioritising either reduction of extinction risk or reduction of other existential risks. In particular, one could prioritise work on what I'm inclined to call "non-existential trajectory changes". From a prior post of mine:

But what if some of humanity’s long-term potential is destroyed, but not the vast majority of it? Given Ord and Bostrom’s definitions, I think that the risk of that should not be called an existential risk, and that its occurrence should not be called an existential catastrophe. Instead, I’d put such possibilities alongside existential catastrophes in the broader category of things that could cause “Persistent trajectory changes”. More specifically, I’d put them in a category I’ll term in an upcoming post “non-existential trajectory changes”. (Note that “non-existential” does not mean “not important”.)

(Relatedly, my impression from a couple videos or podcasts is that Will MacAskill is currently interested in thinking more about a broad set of trajectory changes longtermists could try to cause/prevent, including but not limited to existential catastrophes.)

I expect Ajeya knows all these things. And I think it's reasonable for a person to think that extinction risks are far more important than other existential risks, that the strongest argument for longtermism rests on totalism, and that longtermists should only/almost only prioritise existential/extinction risk reduction. (My own views are probably more moderate versions of those stances.) But it seems to me that it's valuable to not imply that those things are necessarily true or true by definition.

(Though it's of course easy to state things in ways that are less than perfectly accurate or nuanced when speaking in an interview rather than producing edited, written content. And I did find a lot of the rest of that section of the interview quite interesting and useful.)

Comment by michaela on Propose and vote on potential tags · 2021-01-23T02:07:48.229Z · EA · GW

Fermi Paradox

Arguments for having this tag:

  • Seems a potentially very important macrostrategy question
  • There are at least some posts relevant to it

Arguments against:

  • Not sure if there are more than a few posts highly relevant to this
  • Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Space and Global Priorities Research tags

Comment by michaela on Propose and vote on potential tags · 2021-01-23T02:07:20.129Z · EA · GW

Simulation Argument

Arguments for having this tag:

  • Seems a potentially very important macrostrategy question
  • There are at least some posts relevant to it

Arguments against:

  • Not sure if there are more than a couple posts highly relevant to this
  • Maybe this is not a prominent enough topic to get its own tag, rather than just being subsumed under the Global Priorities Research tag

Comment by michaela on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T02:03:20.398Z · EA · GW

The sections "Biggest challenges with writing big reports" and "What it’s like working at Open Phil" were interesting and relatable

A lot of what was said in these sections aligned quite a bit with my own experiences from researching/writing about EA topics, both as part of EA orgs and independently. 

For example, Ajeya said:

One thing that’s really tough is that academic fields that have been around for a while have an intuition or an aesthetic that they pass on to new members about, what’s a unit of publishable work? It’s sometimes called a ‘publon’. What kind of result is big enough? What kind of argument is compelling enough and complete enough that you can package it into a paper and publish it? And I think with the work that we’re trying to do — partly because it’s new, and partly because of the nature of the work itself — it’s much less clear what a publishable unit is, or when you’re done. And you almost always find yourself in a situation where there’s a lot more research you could do than you assumed naively, going in. And it’s not always a bad thing.

It’s not always you’re being inefficient or you’re going down rabbit holes, if you choose to do that research and just end up doing a much bigger project than you thought you were going to do. I think this was the case with all of the timelines work that we did at Open Phil. My report and then other reports. It was always the case that we came in, we thought, I thought I would do a more simple evaluation of arguments made by our technical advisors, but then complications came up. And then it just became a much longer project. And I don’t regret most of that. So it’s not as simple as saying, just really force yourself to guess at the outset how much time you want to spend on it and just spend that time. But at the same time, there definitely are rabbit holes, and there definitely are things you can do that eat up a bunch of time without giving you much epistemic value. So standards for that seemed like a big, difficult issue with this work.

I think most of the EA-related things I've started looking into and writing up, except those that I deprioritised very early on, ended up growing and spawning spinoff tangent docs/posts. And then those spinoffs often ended up spawning their own spinoffs, and so on. And I think this was usually actually productive, and sometimes the spinoffs were more valuable than the original thing, but it definitely meant a lot of missed deadlines, changed plans, and uncertainties about when to just declare something finished and move on.

I don't have a lot of experience with research/writing on non-EA-related topics, so maybe this is just a matter of my own (perhaps flawed) approach, or maybe it's just fairly normal. (One thing that comes to mind here is that - if I recall correctly - Joe Henrich says in his newest book, The WEIRDest People in the World, that his previous book - The Secret of Our Success - was basically just meant to be the introductory chapters to WEIRDest People. And the prior book is itself quite long and quite fascinating!)

But I did do ~0.5 FTE-years of academic psychology research during my Honours year. There, I came up with the question and basic design before even starting, and the final product stuck pretty closely to that, stayed on schedule, and involved no tangents. So there's at least weak evidence that my more recent tangent-heavy approach (which I think I actually endorse) reflects the nature of this newer kind of work, rather than being an approach I'd adopt even in more established fields.

A few other things Ajeya said in those sections that resonated with me:

So a lot of the feeling of collaboration and teamyness and collegiality is partly driven by like, does each part of this super siloed organisation have its own critical mass.

[...]

And then [in terms of what I dislike about my job], it comes back to the thing I was saying about how it’s a pretty siloed organisation. So each particular team is quite small, and then within each team, people are spread thin. So there’s one person thinking about timelines and there’s one person thinking about biosecurity, and it means the collaboration you can get from your colleagues — and even the feeling of team and the encouragement you can get from your colleagues — is more limited. Because they don’t have their head in what you’re up to. And it’s very hard for them to get their head in what you’re up to. And so people often find that people don’t read their reports that they worked really hard on as much as they would like, except for their manager or a small set of decision makers who are looking to read that thing.

And so I think that can be disheartening. 

It was interesting - and sort of nice, in a weird way! - to hear that even someone with a relatively senior role at one of the most prominent and well-resourced EA orgs has those experiences and perceptions.

(To be clear, I've overall been very happy with the EA-related roles I've worked in! Ajeya also talked about a bunch of stuff about her job that's really positive and that also resonated with me.)

Comment by michaela on [link] Centre for the Governance of AI 2020 Annual Report · 2021-01-23T00:53:41.920Z · EA · GW

Thanks for that link!

Comment by michaela on EA Forum feature suggestion thread · 2021-01-22T08:36:29.079Z · EA · GW

It could be cool if the EA Forum allowed for boxes of text that start off collapsed but can be expanded, in the way that e.g. Gwern's site does (here's a random example). This could be used for long sections that the author wants to signal (a) are sort-of digressions and/or (b) may be worth skipping for some people. 
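(To make this concrete, here's a rough sketch of the kind of component I have in mind - purely illustrative, written in React/TypeScript, and not based on any knowledge of how the Forum's codebase actually works; the component and prop names are made up for the example.)

```tsx
// Illustrative sketch only: a block of text that starts collapsed and can be
// expanded, in the spirit of the expandable sections on Gwern's site.
// "CollapsibleSection" and its props are hypothetical names, not Forum APIs.
import React, { useState } from "react";

type CollapsibleSectionProps = {
  summary: string;            // short label shown while the section is collapsed
  children: React.ReactNode;  // the long, skippable content
};

export function CollapsibleSection({ summary, children }: CollapsibleSectionProps) {
  const [open, setOpen] = useState(false); // collapsed by default

  return (
    <section>
      <button onClick={() => setOpen(!open)}>
        {open ? "Hide" : "Show"}: {summary}
      </button>
      {/* Only render the body when expanded. A real implementation might
          instead keep the text in the page (just visually hidden) or index it
          server-side, so it still shows up in the Forum's search. */}
      {open && <div>{children}</div>}
    </section>
  );
}
```

(An even simpler route, if the Forum's editor allowed raw HTML, would be the native details/summary elements, which provide this collapse/expand behaviour without any custom code.)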

There are a few things authors can already do that serve a similar purpose:

  • Have a section that explicitly says at the top "I think this section will be of interest to far fewer people than the rest of this post, so feel free to skip it."
  • Move a section to the end and call it an appendix
  • Just link to a google doc that sort of serves as the expandable box/appendix
  • Move the section into a footnote

But compared to a collapsed-but-expandable box of text, the first two of those options seem to signal "We really think fewer people should read this than should read the rest of this post" less clearly.

And the third option might sometimes signal that too strongly, and also means the content won't show up when you use the Forum's search function.

And the fourth option doesn't seem to work well for fairly long sections of text; more than a few paragraphs in a single footnote would be unusual and might be a little annoying (due to the small text). It would also remove the option for the author to include footnotes within that section of text.

(I originally raised this idea here, in the context of whether it'd be best to include full transcripts from 80k podcast episodes when linkposting them to the EA Forum. I think it could make sense to include the transcripts as collapsed but expandable boxes of text, so that terms from the transcript will appear when doing searches on the Forum - which wouldn't happen if the transcript wasn't included at all - but people don't feel like they have to read the whole transcript before they comment on the post.)

Comment by michaela on Propose and vote on potential tags · 2021-01-21T11:48:54.581Z · EA · GW

EA fellowships

I think it might be useful to have a tag on EA fellowships, meaning things like the EA Virtual Programs, which "are opportunities to engage intensively with the ideas of effective altruism through weekly readings and small group discussions over the course of eight weeks. These programs are open to anyone regardless of timezone, career stage, or anything else." (And not meaning things like summer research fellowships, for which there's the Research Training Programs tag.)

I think this'd be a subset of the Event strategy tag.

But I'm not sure if there are enough posts that are highly relevant to EA fellowships for it to be worth having this tag in addition to the Event strategy tag. And maybe a somewhat different scope would be better (i.e., maybe something else should be bundled in with this).

Comment by michaela on What is going on in the world? · 2021-01-20T09:16:32.564Z · EA · GW

Another possible story, which could underpin some efforts along the lines of patient altruism / punting to the future: "There will probably be key actions that need taking in the coming decades, centuries, or millennia, which will have a huge influence over the whole rest of the future. There are some potential ways to set up future people to take those actions better in expectation, yet very few people are thinking strategically and working intensely on doing that. So that's probably the best thing we can do right now."

Those "potential ways" of punting to the future could be things like building a community of people with good values and epistemics or increasing the expected future wealth or influence of such people.

And this story could involve thinking there will be a future time that's much "higher leverage" / more "hingey" / more "influential", or thinking that there are larger returns to some ways of "punting to the future", or both. 

(See also.)

(Personally, I find this sort of story at least plausible, and it influences me somewhat.)

Comment by michaela on What is going on in the world? · 2021-01-20T09:07:54.315Z · EA · GW

Another potential story could go something like this: "Advances in artificial intelligence, and perhaps some other technologies, have begun to have major impacts on the income, wealth, and status of various people, increasing inequality and sometimes increasing unemployment. This then increases dissatisfaction with, and instability in, our political and economic systems. These trends are all likely to increase in future, and this could lead to major upheavals and harms."

I'm not sure if all those claims are accurate, and I don't personally see that as one of the most important stories to be paying attention to. But it seems plausible and somewhat commonly believed among sensible people.

Comment by michaela on What is going on in the world? · 2021-01-20T09:04:02.496Z · EA · GW

AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots: ...

I think there are plausible and plausibly important plots similar to this, and subplots similar to the subplots below it, that differ in a few ways from what's stated there. For example, I think I'm more inclined towards the following generalised version of that story:

AI systems will control the future or simply destroy our future, and how our actions influence the way that plays out is the only thing about our time that will matter in the long run. Major subplots: ...

This version of the story could capture: 

  • The possibility that the AI systems rapidly lead to human extinction but then don't really do anything else major, and have no [other] goals
    • I feel like it'd be odd to say that that's a case where the AI systems "control the future"
  • The possibility that the AI systems who cause these consequences aren't really "agents" in a standard sense
  • The possibility that what matters about our time is not simply "which [agents] we create", but also things like when and how we deploy them and what incentive structures we put them in

One thing that that "generalised story" still doesn't clearly capture is the potential significance of how humans use the AI systems. E.g., a malicious human actor or state could use an AI agent that's aligned with the actor, or a set of AI services/tools, in ways that cause major harm. (Or conversely, humans could use these things in ways that cause major benefits.)

Comment by michaela on What is going on in the world? · 2021-01-20T08:54:19.477Z · EA · GW

Personally, the simple stories that I pretty much endorse, and that are among the stories within which my choices would make sense, are basically "low-confidence", "expected value", and/or "portfolio" versions of some of these (particularly those focused on existential risks). One such story would be:

There's a non-trivial chance that there are risks to the future of humanity (‘existential risks’), and that vastly more is at stake in these than in anything else going on. Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously. So, in expectation, it'd be a really, really good idea if some people acted to reduce these risks.

("Non-trivial" probably understates my actual beliefs. When I forced myself to try to estimate total existential risk by 2120, I came up with a very tentative 13%. But I think I might behave similarly even if my estimate was quite a bit lower.)

What I mean by "portfolio" versions is basically that I think I'd endorse tentative versions of a wide range of the stories you mention, which leads me to think there should be at least some people focused on basically acting as if each of those stories are true (though ideally remembering that that's super uncertain). And then I can slot into that portfolio in the way that makes sense on the margin, given my particular skills, interests, etc.

(All that said, I think there's a good argument for stating the stories more confidently, simply, and single-mindedly for the purposes of this post.)

Comment by michaela on What is going on in the world? · 2021-01-20T08:43:45.690Z · EA · GW

The following statements from Luke Muehlhauser feel relevant:

Basically, if I help myself to the common (but certainly debatable) assumption that “the industrial revolution” is the primary cause of the dramatic trajectory change in human welfare around 1800-1870, then my one-sentence summary of recorded human history is this:

>Everything was awful for a very long time, and then the industrial revolution happened.

(The linked post provides interesting graphs and discussion to justify/flesh out this story.)

Though I guess that's less of a plot of the present moment, and more of a plot of the moment's origin story (with hints as to what the plot of the present moment might be).

Comment by michaela on What is going on in the world? · 2021-01-20T08:41:04.372Z · EA · GW

Yeah, I agree with that. 

On this, I really like this brief post from Our World in Data: The world is much better; The world is awful; The world can be much better. (Now that I have longtermist priorities, I feel like another useful slogan in a similar spirit could be something like "The world could become so much better; The world could end or become so much worse; We could help influence which of those things happens.")

Comment by michaela on Propose and vote on potential tags · 2021-01-19T13:34:39.845Z · EA · GW

This seems plausibly useful to me.

Obviously it’d overlap a lot with the Forecasting tag. But if it’s the case that several posts include Elicit forecasts but most posts tagged Forecasting don’t include Elicit forecasts, then I imagine a separate tag for Elicit forecasts could be useful. (Basically, what I’m thinking about is whether there would be cases in which it’d be useful for someone to find / be sent a collection of links to just posts with Elicit forecasts, with the Forecasting tag not covering their needs well.)

But maybe a better option would be to mirror LessWrong in having a tag for posts about forecasting and another tag for posts that include actual forecasts (see here)? (Or maybe the latter tag should only include posts that quite prominently include forecasts, rather than just including them in passing here and there.) Because maybe people would also want to see posts with Metaculus forecasts in them, or forecasts from Good Judgement Inc, or just forecasts from individual EAs but not using those platforms. And I’d guess it’d make more sense to have one tag where all of these things can be found than to try to have a separate tag for each.

(That’s just my quick thoughts in a tired state, though.)

It could also be handy to have a tag for posts relevant to "Ought / Elicit" - I think it'd probably be good to bundle them together but note Elicit explicitly - similarly to how there are now tags for posts relevant to each of a few other orgs (e.g. Rethink Priorities, FHI, GPI, QURI). So maybe the combination of a tag for posts that contain actual forecasts and a tag for Ought / Elicit would serve the role that a tag for posts containing Elicit forecasts would?

Comment by michaela on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-19T13:28:11.583Z · EA · GW

That all makes sense to me - thanks!

Comment by michaela on Training Bottlenecks in EA (professional skills) · 2021-01-19T02:28:53.361Z · EA · GW

One small point of superficial feedback on this post: I feel like it could be useful to change the title to something like "Professional Skills Training Bottlenecks in EA", or "Professional Skills Training: A Bottleneck in EA"? 

I say this because I think training bottlenecks for things other than professional skills - e.g., learning about EA concepts or academic fields or research skills - are also quite important topics that are often discussed on the Forum. (E.g., in the posts tagged Research Training Programs.) And I think it'd also be good to someday have a post collecting ideas for addressing that broader set of training bottlenecks in EA.

So even though at the start you said "I’m focusing on professional skills such as fundraising or management, rather than learning about concepts in effective altruism, for which there seems to be a number of excellent programs happening", I think I still kept sort-of forgetting that the scope was intended to be limited in that way as I read the rest of the post. And I think that might've been due to how the title set my expectations. 

(Though maybe this is just me and my sleepy brain!)

Comment by michaela on Training Bottlenecks in EA (professional skills) · 2021-01-19T02:14:31.440Z · EA · GW

On the point of finding useful courses, Edo Arad asked earlier What are some good online courses relevant to EA?, mentioned a couple, and got a few more answers. Though most weren't focused on "professional skills such as fundraising or management". 

Maybe if people have additional course ideas, they could mention them as answers to that question post? Or maybe someone should make a new post to collect such course suggestions?

(This would be very much only a partial solution, but could be helpful.)

Comment by michaela on Training Bottlenecks in EA (professional skills) · 2021-01-19T02:10:14.465Z · EA · GW

Thanks for this post.

Some thoughts on the "Systematising mentorship" stuff:

  • My impression is that WANBAM and Effective Thesis have already done some things similar to what's proposed in this section of the post. I imagine other EA or EA-aligned orgs/people might've done so as well, as presumably have many non-EA orgs/people. So there might be a lot that could be learned from their approaches, successes, failures, thinking, etc.
  • In a recent podcast interview, Rob Wiblin and Spencer Greenberg proposed a potentially easier way to capture part of the benefits that "Systematising mentorship" would also aim to capture: Just setting up weekly meetings with someone else who's at roughly the same level of seniority and who also wants more "management"/"mentorship"
    • I summarised some of what they said about that here
    • This of course wouldn't capture all the benefits of getting mentorship from a person with more expertise in an area than the mentee has, but it might capture the benefits that basically just require any "line manager" type person
  • "A paid mentorship program could harm other mentorship programs and organic mentoring in the community by setting up an expectation that people providing mentorship be paid." That does seem to me plausible and worth noting.
    • But it also seems plausible that a paid mentorship program could lead to people coming to see mentorship in general (even when provided for free) as a more valuable, desired, "substantial" way of being helpful and having impact. And that could perhaps lead to more mentorship being offered, it being offered by higher-calibre mentors, or mentors being more motivated (again, even when the mentorship is free).
    • On the other hand, the existence of some paid mentorship could also lead to people implicitly assuming that free mentorship is lower quality or something like that, which could have bad effects (e.g., leading to less supply of and demand for free mentorship).
    • (I'm therefore not sure what the net effect of this consideration would be.)
Comment by michaela on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-19T00:45:18.231Z · EA · GW

I don't especially care whether the content is written by community members, though I suppose that's slightly preferable (as community members are much more likely to respond to comments on their work).

>an article on nuclear risk from a non-EA academic

Heck yeah.

The main reason I sort-of suggest "written by community members" as a possible criterion for deciding whether to linkpost things here is that it seems like, without that criterion, it might be very hard to decide how much to linkpost here. There are huge numbers of articles on nuclear risk from non-EA academics. If someone decided to linkpost all of them here, or to linkpost all of the peer-reviewed non-EA articles on any one of many other EA-relevant topics, that batch of linkposts might suddenly become a large fraction of all posts that year. 

We could go with something like "linkpost all especially high quality articles on nuclear risk that are especially relevant to the most extreme risk scenarios (not just e.g. the detonation of 1 or a few bombs by terrorists)". But that's a murkier principle, and it seems like it could easily end up "going too far" (or at least seeming weird). And I think worrying that I'm going too far might lead me to hold back more than is warranted.

Maybe this could be phrased as "Making the decision partly based on whether the content was created by an EA could help in establishing a Schelling fence that avoids a slippery slope. And the existence of that fence could help people be more comfortable with beginning to travel down the slope, knowing they won't slip too far."

Comment by michaela on Lessons from my time in Effective Altruism · 2021-01-18T04:02:27.098Z · EA · GW

(Btw, this post and comment thread has inspired me to make a question post to hopefully collect links and views relevant to how much time EAs should spend engaging with people inside vs outside the EA community.)

Comment by michaela on How much time should EAs spend engaging with other EAs vs with people outside of EA? · 2021-01-18T03:55:56.807Z · EA · GW

Some factors that could influence answers to this cluster of questions:

(This is just a very quick, non-exhaustive list, with very little commentary. I'm hoping other people can add a bunch more factors, commentary, etc.)

  • Concerns about value drift
  • The value of bringing people who are currently outside of EA into the EA movement
  • The value of introducing people who are (and might remain) outside of EA to EA ideas
  • The value of getting a diversity of views
  • How much a person should be focused on working at EA vs non-EA orgs
    • There are various ways this could be relevant
      • One is that, if one is more focused on working at non-EA orgs, that might increase the value of building strong networks in specific communities outside of EA (e.g., in specific policy or academic communities).
  • Specific things to do with personal preferences and circumstances
Comment by michaela on How much time should EAs spend engaging with other EAs vs with people outside of EA? · 2021-01-18T03:49:37.652Z · EA · GW

Some related questions:

How much should EAs focus on working at EA orgs vs at other orgs? 

  • And how does this vary based on various aspects of the individual/situation in question? And what are the factors driving answers to these questions?
  • We could also ask about working in roles explicitly in vs not explicitly in the EA community, if we wanted to also capture independent work, founding new orgs, etc.
  • Prior discussion:
    • There's a lot, and I haven't taken the time to compile it here
    • One recent discussion occurred in this post and some of the comments on it: My mistakes on the path to impact

How much time should EAs spend consuming content (e.g., papers, articles, podcasts) created by EAs vs content not created by EAs?

  • I assume this has been discussed before in various places, though I can't immediately recall where

(Presumably many other questions could also be mentioned.)

Comment by michaela on How much time should EAs spend engaging with other EAs vs with people outside of EA? · 2021-01-18T03:42:23.998Z · EA · GW

Some prior statements/discussion of this cluster of questions:

(I'm sure there are many other relevant links that have slipped my mind or that I never saw, which is why this is a question post rather than just a post!)