My personal cruxes for focusing on existential risks / longtermism / anything other than just video games 2021-04-13T05:50:22.145Z
On the longtermist case for working on farmed animals [Uncertainties & research ideas] 2021-04-11T06:49:05.968Z
The Epistemic Challenge to Longtermism (Tarsney, 2020) 2021-04-04T03:09:10.087Z
New Top EA Causes for 2021? 2021-04-01T06:50:31.971Z
Notes on EA-related research, writing, testing fit, learning, and the Forum 2021-03-27T09:52:24.521Z
Notes on Henrich's "The WEIRDest People in the World" (2020) 2021-03-25T05:04:37.093Z
Notes on "Bioterror and Biowarfare" (2006) 2021-03-01T09:42:38.136Z
A ranked list of all EA-relevant (audio)books I've read 2021-02-17T10:18:59.900Z
Open thread: Get/give feedback on career plans 2021-02-12T07:35:03.092Z
Notes on "The Bomb: Presidents, Generals, and the Secret History of Nuclear War" (2020) 2021-02-06T11:10:08.290Z
Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? 2021-02-02T03:52:43.821Z
Notes on Schelling's "Strategy of Conflict" (1960) 2021-01-29T08:56:24.810Z
How much time should EAs spend engaging with other EAs vs with people outside of EA? 2021-01-18T03:20:47.526Z
[Podcast] Rob Wiblin on self-improvement and research ethics 2021-01-15T07:24:30.833Z
Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? 2021-01-15T06:56:20.644Z
Books / book reviews on nuclear risk, WMDs, great power war? 2020-12-15T01:40:04.549Z
Should marginal longtermist donations support fundamental or intervention research? 2020-11-30T01:10:47.603Z
Where are you donating in 2020 and why? 2020-11-23T08:47:06.681Z
Modelling the odds of recovery from civilizational collapse 2020-09-17T11:58:41.412Z
Should surveys about the quality/impact of research outputs be more common? 2020-09-08T09:10:03.215Z
Please take a survey on the quality/impact of things I've written 2020-09-01T10:34:53.661Z
What is existential security? 2020-09-01T09:40:54.048Z
Risks from Atomically Precise Manufacturing 2020-08-25T09:53:52.763Z
Crucial questions about optimal timing of work and donations 2020-08-14T08:43:28.710Z
How valuable would more academic research on forecasting be? What questions should be researched? 2020-08-12T07:19:18.243Z
Quantifying the probability of existential catastrophe: A reply to Beard et al. 2020-08-10T05:56:04.978Z
Propose and vote on potential tags 2020-08-04T23:49:47.992Z
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence 2020-08-04T11:38:48.816Z
Crucial questions for longtermists 2020-07-29T09:39:17.144Z
Moral circles: Degrees, dimensions, visuals 2020-07-24T04:04:02.017Z
Do research organisations make theory of change diagrams? Should they? 2020-07-22T04:58:41.263Z
Improving the future by influencing actors' benevolence, intelligence, and power 2020-07-20T10:00:31.424Z
Venn diagrams of existential, global, and suffering catastrophes 2020-07-15T12:28:12.651Z
Some history topics it might be very valuable to investigate 2020-07-08T02:40:17.734Z
3 suggestions about jargon in EA 2020-07-05T03:37:29.053Z
Civilization Re-Emerging After a Catastrophic Collapse 2020-06-27T03:22:43.226Z
I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. 2020-05-11T09:35:22.543Z
Existential risks are not just about humanity 2020-04-28T00:09:55.247Z
Differential progress / intellectual progress / technological development 2020-04-24T14:08:52.369Z
Clarifying existential risks and existential catastrophes 2020-04-24T13:27:43.966Z
A central directory for open research questions 2020-04-19T23:47:12.003Z
Database of existential risk estimates 2020-04-15T12:43:07.541Z
Some thoughts on Toby Ord’s existential risk estimates 2020-04-07T02:19:31.217Z
My open-for-feedback donation plans 2020-04-04T12:47:21.582Z
What questions could COVID-19 provide evidence on that would help guide future EA decisions? 2020-03-27T05:51:25.107Z
What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? 2020-03-27T03:05:46.791Z
Fundraising for the Center for Health Security: My personal plan and open questions 2020-03-26T16:53:45.549Z
Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? 2020-03-19T06:07:03.834Z
[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society 2020-03-14T09:04:10.955Z
Suggestion: EAs should post more summaries and collections 2020-03-09T10:04:01.629Z


Comment by MichaelA on [deleted post] 2021-04-22T18:23:56.159Z

My understanding is that:

So it'd be cool if someone could (eventually) edit this entry to be consistent with those points.

Comment by MichaelA on [deleted post] 2021-04-22T05:54:16.910Z

So I would say that we should adopt this as our practice, if others agree.

Yeah, that sounds good. 

Though I do think this is a case where there's a relevant difference between Wikipedia and the Forum Wiki (in a way that I'm less sure is so for the citation style, for example): Our entries are also tags. Wikipedia entry names only really need to be shown at the top of a single page, and maybe some low-traffic pages that just list lots of articles; every other link to them can use an abbreviation. But our entries will show up on many pages, right at the top. So I think that creates some reason to be a bit more inclined towards abbreviations than Wikipedia is.

(This could also be fixed by some change to the code such that the title shown on the Wiki entry doesn't have to be the tag name shown on posts, as you suggest earlier.)

I would be inclined to follow them here.

I think I weakly lean towards APPG for Future Generations for the brevity reason, but it's not a strong stance.

Comment by MichaelA on [deleted post] 2021-04-21T15:00:01.633Z

Maybe in future this entry should draw a bit on discussion (within or outside EA) of "unintended consequences" of the kinds described here.

Comment by MichaelA on [deleted post] 2021-04-21T14:59:15.646Z

The first sentence of this article had been:

Indirect long-term effects (also called flow-through effects (Karnofsky 2013; Karnofsky et al. 2013; Shulman 2013; Wiblin 2016), ripple effects (Beckstead 2013; Whittlestone 2017), knock-on effects (Gaensbauer 2016; Greaves 2016; Snowden 2017) and cascading effects) are effects on the long-run future from interventions targeted at the short-term.

But many of the terms in brackets were not necessarily limited to effects on the long-run future from interventions targeted at the short-term. E.g., I think some or all of those terms could've also been used to describe things like unintended effects in the coming decades of bednet distribution, such as (maybe) more meat consumption, more greenhouse gas emissions, more economic growth, or more innovation.

The sentence also fit a lot of info in brackets mid-way through it.

So I've now split it into two and tweaked it to be more consistent with the idea that those other terms might not be about a totally identical concept.

Comment by MichaelA on Propose and vote on potential tags · 2021-04-21T14:41:30.461Z

Yeah, I just spotted that and the fact I had a new notification at the same time, and hoped it was anything other than a reply here so I could delete my shamefully redundant suggestion before anyone spotted it :D

(I think what happened is that I used command+f on the tags portal before the page had properly loaded, or something.)

Comment by MichaelA on [deleted post] 2021-04-21T14:37:33.842Z

Some quick thoughts:

  • Brevity seems good, to avoid this one tag taking up weirdly much space compared to other tags when applied to a post
    • As we discussed here
  • I think there's no substantial reason to prefer "versus" over "vs." or "vs", so I prefer the latter options for brevity
  • Brevity also pushes in favour of "adjective1 vs adjective2 noun", rather than "adjective1 noun vs adjective2 noun", and I don't see a strong push in the other direction, so now I prefer the first approach
    • E.g., "Naive vs. sophisticated consequentialism" rather than "Naive consequentialism vs. sophisticated consequentialism"
    • I've now updated this tag's name to reflect that
  • Brevity also pushes in favour of just picking one or the other term rather than using both, but I think that can be outweighed in many cases
    • E.g., I think the primary topic of the broad vs narrow interventions entry really will be the distinction itself, not just broad interventions or narrow interventions, so the name should keep both
    • Whereas this entry might primarily be about "What is naive consequentialism, why is it bad, and how can you avoid it?", with sophisticated consequentialism only really coming into play as part of answering those questions
      • At least that's how I might see it
      • But it's not clear-cut in this case, which is why I kept both terms in the name for now
  • I think "vs." vs "and" should just be a matter of what's clearer and more appropriate for the case at hand?
    • E.g., "broad and narrow interventions" seems confusing; when I read that, I initially think we're describing one set of interventions that meets both criteria
Comment by MichaelA on Propose and vote on potential tags · 2021-04-21T14:27:42.697Z

Demandingness objection

I'd guess there are at least a few Forum posts quite relevant to this, and having a place to collect them seems nice, but I could be wrong about either of those points.

Comment by MichaelA on [deleted post] 2021-04-21T13:19:02.359Z

I think we should have an entry on something like this, so I grabbed the related EA Concepts title and text.

But maybe the entry should be called just Naive consequentialism, or maybe just Sophisticated consequentialism or something else.

Comment by MichaelA on [deleted post] 2021-04-21T13:11:36.754Z

Ah, I hadn't seen the indirect long-term effects entry - given the existence of that entry, I agree with your suggestion.

Comment by MichaelA on [deleted post] 2021-04-21T12:16:21.653Z

Is this style guide the right place for policies/norms about how to use tags? E.g., a policy about which posts should be tagged with a tag for an organisation, as discussed here?

Or is there/should there be some other place for such policies/norms?

Seems like that's more about "tagging" and less about "style for the wiki entries".

Comment by MichaelA on [deleted post] 2021-04-21T10:05:28.803Z

It seems to me that it'd be more natural to replace this entry with an entry on something like "flow-through effects"? That seems to be a more common term in EA than "future considerations", and seems to more clearly gesture at what I think is the core interesting thing in this entry?

Comment by MichaelA on [deleted post] 2021-04-21T06:51:07.803Z

Alternative name options:

  • Charitable pledges
  • Altruistic pledges
  • Giving pledges

Maybe the first two names are good in that they could capture pledges about resources other than money (e.g., time)? But I can't off the top of my head think of any non-monetary altruistic pledges. 

"Giving pledges" is probably bad because it could be confused with the Giving Pledge specifically. 

Comment by MichaelA on [deleted post] 2021-04-20T18:06:29.940Z

FWIW, I think that:

  • The new name seems to me like an improvement from the old name (which I wrote)
    • When making this tag, I'd forgotten that there was a relevant modesty vs humility distinction here; I had considered them synonymous when originally naming the tag
      • This is a bit weird, because I'd read the things Rob cites in his comment, and in fact I even cited one myself earlier today
        • I guess this is an indication that it's good I now use Anki sometimes (rather than never)!
      • It does seem good to avoid creating confusion by conflating the two terms here
  • The two articles should probably be merged
  • The new name for this tag seems better to me than "epistemology of disagreement"
    • I think the new name better fits what I originally had in mind as the scope of the tag
      • E.g., I wanted it to cover something like how people tend to update on other people's opinions and how that should affect the way we communicate (e.g., being clear about when we're reporting independent impressions vs all-things-considered beliefs)
    • One reason (maybe the main one): I think many situations involving "deference" don't really involve what we'd normally think of as "disagreement"
      • It might be going from having no opinion on a topic at all to now having the opinion Toby Ord has
Comment by MichaelA on [deleted post] 2021-04-20T16:07:24.619Z

Maybe eventually this entry should mention the idea of an ideological Turing test? And/or maybe the Epistemology entry should? And/or maybe that concept should get its own entry?

Here's one post that mentions the idea (I haven't thoroughly looked for others): 

Comment by MichaelA on [deleted post] 2021-04-20T12:16:18.083Z

Maybe this entry should also (briefly?) discuss Shapley values?

Comment by MichaelA on Propose and vote on potential tags · 2021-04-20T12:01:16.303Z

Update: I've now made this tag.

Charitable pledges or Altruistic pledges or Giving pledges (but that could be confused with the Giving Pledge specifically) or Donation pledges or similar

Maybe the first two names are good in that they could capture pledges about resources other than money (e.g., time)? But I can't off the top of my head think of any non-monetary altruistic pledges. 

This could serve as an entry on this important-seeming topic in general, and as a directory to a bunch of other entries or orgs on specific pledges (e.g., Giving Pledge, GWWC Pledge, Generation Pledge, Founders Pledge).

See also this post: 

Comment by MichaelA on Propose and vote on potential tags · 2021-04-20T11:53:52.140Z

Antimicrobial resistance

Not sure enough EAs care about this and/or have written about this on the Forum for it to warrant an entry/tag?

(I don't personally have much interest in this topic, but I'm just one person.)

Comment by MichaelA on [deleted post] 2021-04-20T11:48:17.690Z

It's possible that this entry is redundant, since we already have entries on Existential risk and on Forecasting, so e.g. someone could just filter for both of those tags at once and get something similar to filtering for this tag. That said:


  • People might not think to filter for two tags at once
  • People might also use a single tag/entry as a collection of posts on a topic, e.g. for sending to interesting people, and a combo of two tags doesn't seem to work properly for that purpose
  • That's all just about the tagging functionality, not the wiki functionality. This seems to me like an important and large enough topic to warrant its own entry.

The fact we have a specific entry for "AI forecasting" rather than just relying on the intersection of "AI alignment" (or whatever) and "Forecasting" seems in line with having a specific entry for this topic as well.

Comment by MichaelA on [deleted post] 2021-04-20T11:44:02.990Z

Some alternative name options:

  • Existential risk estimates
  • Estimation of existential risks
  • (Various permutations of these sorts of phrases)
Comment by MichaelA on [deleted post] 2021-04-20T11:43:05.818Z

I don't have time to write the text for this entry at the moment. Maybe I could in a few weeks, but I'm not sure, and other editors should definitely feel free to go on without me!

But I think the text could draw on some of the tagged posts and the stuff in Further reading. In particular, if I was writing this, I'd probably:

I'd also make sure to explicitly note that this is not necessarily just about extinction, and conversely that many of the tagged posts will also/only discuss estimates of less potentially extreme outcomes than existential catastrophes (e.g. GCRs).

Comment by MichaelA on Deference for Bayesians · 2021-04-20T08:27:05.215Z

Or, more realistically, they're optimising for publishing esteemable papers, and since they can't reference non-legible sources of evidence, they'll be less interested in attending to them.

I think this is broadly right.

The main reason academics suffer from "myopic empiricism" is that they're optimising for legibility (an information source is "legible" if it can be easily trusted by others), both in their information intake and output.

I don't think this is quite right.

It seems pretty unclear to me whether the approach academics are taking is actually more legible than the approach Halstead recommends. 

And this is true whether we use "legible" to mean:

  1. "how easily can others understand why they should trust this (even if they lack context on the speaker, lack a shared worldview, etc.)", or
  2. "how easily can others simply understand how the speaker arrived at the conclusions they've arrived at"
    1. The second sense is similar to Luke Muehlhauser's concept of "reasoning transparency"; I think that that post is great, and I'd like it if more people followed its advice.

For example, academics often base their conclusions mostly on statistical methods that almost no laypeople, policymakers, etc. would understand; often even use datasets they haven't made public; and sometimes don't report key parts of their methods/analysis (e.g., what questions were used in a survey, how they coded the results, whether they tried other statistical techniques first). Sometimes the main way people will understand how they arrived at their conclusions and why to trust them is "they're academics, so they must know what they're doing" - but then we have the replication crisis etc., so that by itself doesn't seem sufficient.

(To be clear, I'm not exactly anti-academia. I published a paper myself, and think academia does produce a lot of value.)

Meanwhile, the sort of reasoning Halstead gives in this post is mostly very easy to understand and assess the reasonableness of. This even applies to potentially assessing Halstead's reasoning as not very good - some commenters disagreed with parts of the reasoning, and it was relatively easy for them to figure out and explain where they disagreed, as the points were made in quite "legible" ways.

(Of course, Halstead probably deliberately chose relatively clear-cut cases, so this might not be a fair comparison.)

This comes back to me being a big fan of reasoning transparency.

That said, I'm not necessarily saying that those academics just aren't virtuous, or that if I were in their shoes I'd be more virtuous - I understand that the incentives they face push against full reasoning transparency, and that's just an unfortunate situation that's not their fault. Though I do suspect that it'd be good for more academics to (1) increase their reasoning transparency a bit, in ways that don't conflict too much with the incentive structures they face, and to (2) try to advocate for more reasoning transparency by others and for tweaking the incentives. (But this is a quick hot take; I haven't spent a long time thinking about this.)

Comment by MichaelA on Deference for Bayesians · 2021-04-20T08:12:23.239Z

I certainly think it's unfortunate that the default information aggregation systems we have (headlines, social media, etc) are not quite up to the task of accurately representing experts. I think this is an important and (in the abstract) nontrivial point, and I'm a bit sad that our best solution here appears to be blaming user error.

Yeah, I think this seems true and important to me too. 

There are three, somewhat overlapping solutions to small parts of this problem that I'm excited about: (1) "Research Distillation" to pay off "Research Debt", (2) more summaries, and (3) more collections.

And I think we can also broaden the idea of "research distillation" to distilling bodies of knowledge other than just "research", like sets of reasonable-seeming arguments and considerations various people have highlighted.

I think the new EA Forum wiki+tagging system is a nice example of these three types of solutions, which is part of why I'm spending some time helping with it lately.

And I think "argument mapping" type things might also be a valuable, somewhat similar solution to part of the problem. (E.g., Kialo, though I've never actually used that myself.)

There was also a relevant EAG panel discussion a few years ago: Aggregating knowledge | Panel | EA Global: San Francisco 2016.

Comment by MichaelA on Propose and vote on potential tags · 2021-04-20T07:44:26.079Z

Something like Bayesianism

Arguments against having this entry/tag:

  • Maybe the topic is sufficiently covered by the entries on Epistemology and on Decision theory?
Comment by MichaelA on MichaelA's Shortform · 2021-04-20T07:41:36.777Z

I just re-read this comment by Claire Zabel, which is also good and is probably where I originally encountered the "impressions" vs "beliefs" distinction.

(Though I still think that this shortform serves a somewhat distinct purpose, in that it jumps right to discussing that distinction, uses terms I think are a bit clearer - albeit clunkier - than just "impressions" vs "beliefs", and explicitly proposes some discussion norms that Claire doesn't quite explicitly propose.)

Comment by MichaelA on [deleted post] 2021-04-20T06:41:56.392Z

I think this should probably be merged with the Nuclear weapons tag. Probably using the text from here, using the further reading and related entries from there, and keeping the tagged posts from there.

Comment by MichaelA on What material should we cross-post for the Forum's archives? · 2021-04-20T06:31:51.232Z

Pretty much all papers and blog posts from the GCRI site (except that some blog posts just discuss a single paper, in which case there should just be one cross-post covering both).

Comment by MichaelA on [deleted post] 2021-04-20T06:07:22.062Z

That reply seems reasonable.

[I could be wrong about all of the following. Also, this response at least slightly has the vibe of the sort of annoying and counterproductive "slapdown" Eliezer writes about here, partly because I don't have the time to provide a more constructive, detailed, object-level response, so my apologies for any frustration that that causes!]

I spent a few days, back when I started working on this project, exploring the existing formats (using this tool) and I wasn't able to find a format that handled all the problematic cases in a way I found satisfactory. This was many months ago, so I don't remember the details.

As noted above, I see two main potential arguments why that would be true, and I'm a bit skeptical of both for meta reasons. One thing I'd add is that the new citation style has only really been evaluated by its creator (you), I think, so it's possible that part of why it seems better is because of your idiosyncratic views. 

But of course neither of us have highlighted specific, object-level arguments (aside from you saying "not requiring URLs to be listed explicitly"), so this is somewhat hard to evaluate. I do think it's plausible that your proposed citation style would be better, and I don't mean to be taken as "slapping down" even the mere idea of trying to generate an alternative to the existing approaches. 

One other thing I'd note is that it might be possible to identify an existing citation style that's mostly good but has 5 issues in your view, 3 of which seem especially noteworthy and clearly problematic, and then have our citation style be "that one but with these 3 tweaks". E.g., "APA but you don't need to list URLs explicitly". That might avoid most of the costs I mention from a new citation style and most of the costs you think an existing citation style would have.

Considerable time has been spent already (mostly by Leo, my excellent assistant) in making sure that all citations conform to the current format. (Most of this time had already been spent by the time you raised this objection a few weeks ago.) The costs are sunk, so that is not in itself a reason, but it provides an estimate of the costs that would have to be incurred to make the citations conform to a new format.

At first glance, I think we should probably mostly focus, when making these decisions, on scenarios where the wiki becomes fairly widely used and edited for a long time. Those scenarios seem to account for most of the expected value of work on this project. In those scenarios, there will be much more time spent on entering and editing citations in future than has been spent so far.

So I think the fact that those sunk costs were considerable mostly pushes in favour of thinking carefully about and getting feedback on the decision about citation styles within the next few days / weeks / maybe months. I think it also pushes a bit in favour of keeping the existing citation style, to avoid paying a cost to switch the existing citations to a new style, but I think that that push is probably smaller?

I would be open to the proposal if (1) I was presented with a concrete alternative that handled basic cases correctly (such as not requiring URLs to be listed explicitly) and (2) a quick, back-of-the-envelope calculation of the time that would be saved by adopting this new style. I could then ask Leo to estimate how much time he has spent fixing the citations, and by comparing the two estimates we can decide whether this is worth it.

It does sound like that'd be useful info, but I guess I feel like by default no one will think to and take the time to give you that info even if a different citation style would be better. I don't have time to do this soon myself. (Part of why this wouldn't be super quick is that I don't already have strong views on which citation style would be best; I more so have a meta-level epistemic-humility-style skepticism that any new style that's only really been evaluated by its creator will be better than all existing ones.) So I think I'd say the Forum/wiki team should take it upon themselves to try to work that stuff out or to actively solicit someone else to do so. 

In other words: Due to time constraints and lack of detailed cached thoughts on this question, I'm just going for a drive-by "this seems worth thinking about", rather than being able to compellingly argue for any particular alternative.

Comment by MichaelA on [deleted post] 2021-04-20T05:52:01.471Z

The two examples that spring to mind as places where this is relevant are this org (APPGFG) and ALLFED. I'd guess that both orgs' full names are as long as 2 or 3 regular tags, which does seem to give that tag undue attention on any post it's applied to. And in the case of ALLFED at least, the shortened name is more widely known.

If I don't update my beliefs based on now knowing what Wikipedia does, then it seems to me like it'd be best for us to either:

  1. Use the best known name even when it's an abbreviation
  2. Use whatever name the org would introduce itself to new people as (e.g. I imagine ALLFED would often say ALLFED, but the LTFF would introduce itself as the Long-Term Future Fund, even if the acronym might be more commonly used in within-community discussions)
  3. Use the best known name except when that'd be really long, in which case we use a shortened version

But it does also seem good to have some tendency to mimic Wikipedia's policies, both so that people used to those will adapt more easily to the Forum wiki and because those policies were probably mostly chosen for good reasons.

So ultimately I feel unsure, and don't have a strong stance.

(You can also feel free to rename this specific entry.)

Comment by MichaelA on On future people, looking back at 21st century longtermism · 2021-04-19T18:39:44.782Z

I've only skimmed this thread, but I think you and Jack Malde both might find the following Forum wiki entries and some of the associated tagged posts interesting:

To state my own stance very briefly and with insufficient arguments and caveats:

  • I think it makes sense to focus on humans for many specific purposes, due to us currently being the only real "actors" or "moral agents" in play
  • I think it makes sense to think quite seriously about long-term effects on non-humans (including but not limited to nonhuman animals)
  • I think it might be the case that the best way to optimise those effects is to shepherd humans towards a long reflection
  • I think Jack is a bit overconfident about (a) the idea that the lives of nonhuman animals are currently net negative and (b) the idea that, if that's the case or substantially likely to be the case, that would mean the extinction of nonhuman animals would be a good thing
    • I say more about this in comments on the post of Jack's that you linked to
    • But I'm not sure this has major implications, since I think in any case the near-term effects we should care about most probably centre on human actions, human values, etc. (partly in order to have good long-term effects on non-humans)
Comment by MichaelA on On future people, looking back at 21st century longtermism · 2021-04-19T18:31:28.560Z

Thanks, I thought this was really interesting and well-written. (Well, weirdly enough, I got nothing out of the quoted poems, but I found the rest of it quite poetic and moving!)

I especially liked this passage:

I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.

I imagine our descendants saying: “*Yes*. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.” I imagine them looking back through time at their distant ancestors, and seeing some of those ancestors, looking forward through time, at them. I imagine eyes meeting.

I immediately went and quoted that in a comment on the post What quotes do you find most inspire you to use your resources (effectively) to help others?, as I expect I'll find that passage inspiring in future and that other people might do so as well.

Comment by MichaelA on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2021-04-19T18:29:07.962Z

I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.

I imagine our descendants saying: “*Yes*. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.” I imagine them looking back through time at their distant ancestors, and seeing some of those ancestors, looking forward through time, at them. I imagine eyes meeting.

-Joseph Carlsmith, On future people, looking back at 21st century longtermism

This post was only published recently and I only read it today, so I can't yet say that this inspired me to use my resources to effectively help others. But I think the sort of idea it points to has indeed inspired me, and I found that passage excellent and expect it will inspire me in future.

(Note that the original post contains some caveats, e.g. "To be clear: this is some mix between thought experiment and fantasy. It’s not a forecast, or an argument. In particular, the empirical picture I assumed above may just be wrong in various key ways.")

Comment by MichaelA on Propose and vote on potential tags · 2021-04-19T15:15:19.433Z · EA · GW

Cognitive biases/Cognitive bias, and/or entries for various specific cognitive biases (e.g. Scope neglect)

I feel unsure whether we should aim to have just a handful of entries for large categories of biases, vs. one entry for each of the most relevant biases (even if this means having 5+ or 10+ entries of this type).

Comment by MichaelA on [deleted post] 2021-04-19T14:57:27.989Z

Thanks for the suggestion - I have now made this entry usable as a tag, rather than wiki-only, and have added it to that post.

Feel free to apply this tag to any other relevant posts you know of!

Comment by MichaelA on What material should we cross-post for the Forum's archives? · 2021-04-19T14:42:24.139Z · EA · GW

Pretty much all papers, technical reports, etc. from FHI (including GovAI), maybe starting with the Bostrom stuff.

Comment by MichaelA on [deleted post] 2021-04-19T11:52:07.118Z

Yeah, that policy definitely sounds good, and I already assumed it was the case. I guess what I should've said is that here I'm more uncertain what the right name, scope, and content would be than I am for the average entry I create.

So sort of "more starting-point-y than average" in terms of whether the current content should be there in the current way. (Though many other entries I make are more starting-point-y than this one in terms of them having almost no content.)

Comment by MichaelA on [deleted post] 2021-04-19T11:33:22.752Z

Things that could maybe be done in future:

  • Expand this entry by drawing on posts tagged Fun Theory on LessWrong, and/or crosspost some here and give them this tag
  • Expand this entry by drawing on the "AI Ideal Governance" section of the GovAI research agenda, and/or crosspost that agenda here and give it this tag
  • Expand this entry by drawing on the Bostrom and Ord sources mentioned in Further reading
  • Draw on this shortform of mine, and particularly the following paragraph

Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with better chances of securing highly positive futures (not just avoiding existential catastrophes). It could also help us avoid negative futures that may not appear negative when superficially considered in advance. Finally, such positive visions of the future could facilitate cooperation and mitigate potential risks from competition (Dafoe, 2018 [section on "AI Ideal Governance"]). Researchers have begun outlining particular possible futures, arguing for or against them, and surveying people’s preferences for them.

Comment by MichaelA on [deleted post] 2021-04-19T11:31:01.599Z

I see what I've put here as a starting point. There are various reasons one might want to change it, such as:

  • Maybe the bullet point style isn't what the EA Wiki should aim for
  • Maybe a different name would be better
  • What I've got here is like my own take, based loosely on various sources but not really modelled super directly on them

You can see my original thinking for this entry here.

Comment by MichaelA on [deleted post] 2021-04-19T06:25:30.972Z

Ok, given that that suggestion got a vote, I've now shortened the name. But I chose a middle ground - APPG on Future Generations. That seems to be what the group themselves default to in communications (based on the most relevant Forum post and a quick look at their site), and it keeps "future generations" in the name which seems handy for unfamiliar readers.

It's still possible that the shorter or longer names would be better; people could change it again later.

Comment by MichaelA on Our plans for hosting an EA wiki on the Forum · 2021-04-19T06:22:32.305Z · EA · GW


1. An equivalent of pingbacks on wiki entries for when other entries link to them

2. Pingbacks for when a wiki entry links to a post

On 1: I recently made an entry on the APPG on Future Generations, and included Institutions for future generations in its Related entries section. That latter entry covers the larger category of which the APPG on Future Generations is just one example, so it doesn't necessarily seem worth adding APPG on Future Generations to the Related entries section of Institutions for future generations. But it'd still be nice if people on that entry could see a list of the other entries that link to it.

This would also help in cases where a person doesn't think to, or doesn't have time to, add links to Related entries in both directions, even when doing so would be good.

On 2: It might also be interesting to see pingbacks on regular posts indicating whether any entries have linked to them (as distinct from the post being tagged). In theory, that'd indicate to readers that that entry is especially relevant to the post, and that the post is an especially important or clear discussion of that entry's topic.

E.g., A proposed adjustment to the astronomical waste argument is linked to from the entries Trajectory changes and Speeding up development, and it's indeed a relatively "canonical" source in relation to those two topics.

(I'm not 100% confident that these suggestions are good ideas, and they're probably low priority anyway, especially the second idea.)

Comment by MichaelA on [deleted post] 2021-04-19T06:08:44.165Z

External links should only be used in the Bibliography and External links sections of the article, and never in the lead or body sections (see the Organization of articles section).

I think external links should be allowed in the lead or body sections. And in discussions on a draft of this style guide, I seem to recall that you (Pablo) moved to agreeing with that view? Should that line of the style guide be changed?

(Let me know if I should explain my reasoning for this position again.)

Comment by MichaelA on [deleted post] 2021-04-18T16:36:23.411Z

The connection was actually that the Simon Institute post mentioned the APPGFG. As discussed in some other thread somewhere, I think org tags should also cover posts that discuss an org, even if they aren't by the org. (Aaron indicated agreeing, and maybe had a stronger version of this view.)

But now that I look closer, I see the mention is fairly brief. That might be fine if this tag was already quite populated, but it seems confusing when only 5 posts have the tag. (It was also definitely confusing that that post showed up first here - I've now strong-upvoted the tag on a more relevant APPGFG post to head off that problem in future.)

So I've now removed the tag from the Simon Institute post.

Comment by MichaelA on Making More Sequences · 2021-04-18T16:32:13.626Z · EA · GW

Thanks for doing this!

Those also sound like good suggestions to me; glad to see/hear that some have already been implemented.

Another suggestion: Maybe the top part that shows the sequence name and the left and right arrows should appear on all posts that are part of sequences, even if you found your way to the post in some way other than via the sequence?

Currently, with Lukas Gloor's anti-realism posts and Luisa Rodriguez's nuclear posts, if I click the link to the posts from within the sequence, I see that top part. But if I get to the post some other way (via tags, their user profile, a search, another link, etc.), I don't. This means there's no way for someone who gets to those posts in most ways to even know they're part of a sequence.

A downside of this suggestion is that the sequences in question weren't made by the users themselves, so it's possible that the users don't want that top part to appear. In these two cases, I'd guess the users would be happy with it, but it'd be possible for someone to make a sequence whose scope, framing, order, etc. seems weird to the authors involved (e.g., if the person strings together posts from different authors).

Another complication is that, in some cases, a post might be added to more than one sequence. Maybe if that happens, the mods should get a notification and should use their best judgement as to which sequence is used for the top part by default?

(I'm not sure if mods will look at this comment by default - maybe I should message one?)

Comment by MichaelA on Making More Sequences · 2021-04-18T15:50:51.871Z · EA · GW

Ah, I didn't think to check that page. (Edit: Also, I hadn't actually read your post, and now that I'm reading it I see that you mention this sequence in the post itself, which is a tad embarrassing, haha.)

I think it'd be nice if this showed up on Lukas's page, as long as he's happy with that, so might indeed be worth messaging the mods?

Comment by MichaelA on Making More Sequences · 2021-04-18T10:45:02.384Z · EA · GW

Ah, great, thanks!

I didn't spot it earlier since it doesn't show up on either your profile or Lukas's - any idea why that is?

Comment by MichaelA on [deleted post] 2021-04-18T10:37:11.217Z

I expect I and/or other people at RP could indeed expand this entry.

Do you have one or more examples of good entries on orgs which we could somewhat mimic (e.g., in terms of structure, length, and which type of info to provide)?

Comment by MichaelA on [deleted post] 2021-04-18T09:44:47.839Z

I think the first sentence is confusing/misleading/unclear; it seems closer to pure time discounting than to temporal discounting in general.

I suggest changing or removing that.

Comment by MichaelA on [deleted post] 2021-04-18T09:31:21.194Z

I think this and anti-ageing research should probably be merged?

But this isn't my area, so maybe there's an important distinction which I'm missing.

Comment by MichaelA on Propose and vote on potential tags · 2021-04-18T09:23:27.959Z · EA · GW

Nonlinear Fund

Maybe it's too early to make a tag for that org?

Comment by MichaelA on [deleted post] 2021-04-18T09:20:51.818Z

Maybe the tag name should just be APPGFG?

Comment by MichaelA on Making More Sequences · 2021-04-18T09:18:17.608Z · EA · GW

I think it's great that you've made these sequences!

Could you make one for this sequence:

(I ask partly to save myself time, but mainly because I don't actually subscribe to anti-realism and I haven't made any other sequences, so me having that as the only sequence on my profile could give the wrong impression about my own views. Whereas you've got lots of sequences, so any one of them is less likely to seem to represent your views.)