What would a pre-mortem for the long-termist project look like? 2020-04-11T14:51:43.331Z


Comment by Azure on What should we actually do in response to moral uncertainty? · 2022-05-30T16:54:44.721Z · EA · GW

I think the idea is to assign credences to plausible theories, where plausible is taken to mean some subset of the following:

  • Has been argued for in good faith by professional philosophers
  • Has relevant and well-reasoned arguments in favour of it
  • Accords at least partially with moral intuitions
  • Is consistent, parsimonious, precise, not metaphysically untoward, etc. (the usual desiderata for explanations/theories)
  • Concerns the usual domain of moral theories (values, agents, decisions, etc.)

An equivalent way to proceed is to consider all possible theories, but assign credence 0 (or sufficiently close to it) to the (completely) implausible ones.
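The procedure above (assign credences to plausible theories, then weigh them) can be sketched in code. This is only a toy illustration of credence-weighted "expected choiceworthiness"; all theory names, options, and scores are hypothetical, not anything from the comment:

```python
# Toy sketch of acting under moral uncertainty: assign a credence to
# each plausible theory, score each option under each theory, and rank
# options by credence-weighted (expected) choiceworthiness.
# All names and numbers below are made up for illustration.

credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# Hypothetical choiceworthiness of each option under each theory.
choiceworthiness = {
    "utilitarianism": {"donate": 10, "abstain": 0},
    "deontology":     {"donate": 4,  "abstain": 6},
    "virtue_ethics":  {"donate": 7,  "abstain": 5},
}

def expected_choiceworthiness(option):
    # Implausible theories with credence ~0 simply drop out of this sum,
    # which is the "equivalent formulation" mentioned above.
    return sum(credences[t] * choiceworthiness[t][option] for t in credences)

best = max(["donate", "abstain"], key=expected_choiceworthiness)
print(best)  # "donate": 0.5*10 + 0.3*4 + 0.2*7 = 7.6 vs. 2.8 for "abstain"
```

Maximizing expected choiceworthiness is only one proposal for handling moral uncertainty; the sketch just shows how the credence-assignment step plugs into a decision rule.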

Comment by Azure on Introducing Asterisk · 2022-05-26T10:40:21.532Z · EA · GW

Is there a potential naming collision with ? Are you going with 'asteriskmag' as the domain name perhaps?

Comment by Azure on Propose and vote on potential EA Wiki articles / tags [2022] · 2022-05-25T13:53:06.813Z · EA · GW

Human Challenge Trials as a wiki entry (less so as a tag)

It's an idea I think a sizeable portion of people here are sympathetic to, and the entry could act as a good companion to entries like 1Day Sooner and COVID-19 pandemic.

Comment by Azure on Propose and vote on potential EA Wiki articles / tags [2022] · 2022-05-24T11:22:54.483Z · EA · GW

Thanks Pablo! Looks great!  I really appreciate your work on the wiki.

Comment by Azure on Propose and vote on potential EA Wiki articles / tags [2022] · 2022-05-23T11:02:59.945Z · EA · GW

Windfall Clause (under Global Catastrophic Risk (AI))


  1. Important as a wiki topic, to give a short description of this policy proposal and relevant links/papers/discussion, as it seems like an important output of the AI governance literature.
  2. Useful as a tag, since potential future posts may discuss/critique the idea (e.g. the second post below).

Posts that it could apply to:

Comment by Azure on [deleted post] 2022-05-15T20:11:49.408Z

Thanks for your help and guidance. I agree that for now it's not worth it!

Comment by Azure on [deleted post] 2022-05-15T17:15:02.109Z

I didn't really have a preference, to be honest! I was just curious and a little confused by the fact that some posts and one of the "further reading" links used the "anti-aging" terminology.

Thank you for the point about the general format being "[Area] research" - that makes sense and will be useful to me for potential future wiki edits. Also thank you to the other commenter for the "cancer research" analogy - that makes sense too.

Is it worth updating the style guide with the "[Area] research" convention, or is that too niche and liable to add unnecessary bloat?

Comment by Azure on Against “longtermist” as an identity · 2022-05-14T08:28:28.970Z · EA · GW

Thank you for this piece Lizka!

To what extent do you agree with the following?

Strong identities are epistemically sub-optimal (i.e. if you are an agent whose exclusive concern is to hold true beliefs, then holding strong identities is likely to harm that project), but they may be socially expedient (perhaps necessary) in that they facilitate cooperation (by encapsulating views and motives).

Comment by Azure on [deleted post] 2022-05-13T13:43:47.123Z

May I ask for the reasoning for the title being "Aging research" as opposed to "Anti-aging research"? 

I assume it's because the former is the name established in the academic literature? Or is it to maintain some kind of fact/value distinction?

Thanks in advance!

Comment by Azure on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:17:17.378Z · EA · GW

Patient philanthropy?

Comment by Azure on [deleted post] 2020-08-11T09:43:23.647Z

Hi Peter.

Thank you very much for this! It's much appreciated and I'm glad my comments were somewhat helpful.

Perhaps you may wish to submit the new version as a new, separate post?

I think I would also contact Aaron Gertler, the forum moderator, to get some feedback if you chose to post the above as a separate post. All the best.

Comment by Azure on [deleted post] 2020-08-10T16:50:43.603Z

Hi Peter!

Thank you for the write up!

You're currently getting downvoted (unfortunately, I think!), but I thought I would try to flesh out some reasons why that might be, partly to spur discussion:

1. Whether unintentional or not, the 'flat earth' images do not seem to be a favourable presentation of your ideas and do not seem necessary to make the claims you are making.

2. There is not much structure to the post. I think readers would appreciate an introduction and a conclusion stating what you are trying to address and how you've done so.

3. Some of the explanations are quite confusing (at least to me), e.g. it's not clear what you mean exactly by

'It can brighten - improving the enterpretation[sic] of a given sentience from a darker to a brighter sentience'

Does this mean 'higher utility/welfare'?

4. I don't think the post is sufficiently self-contained and free-standing to make a credible case.

Also keen to hear whether people agree/disagree with the above!

Comment by Azure on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-14T09:07:11.863Z · EA · GW

Adding one more (hopefully relevant) link:

Dylan Matthews on “Global poverty has fallen, but what should we conclude from that?”

which is more or less a podcast version of the Vox article by Dylan Matthews; the link (and Hickel's response) can be found in Max_Daniel's very helpful list of links.

Comment by Azure on X-risks to all life v. to humans · 2020-06-03T20:21:11.687Z · EA · GW

Hey! Your link sends us to this very post. Is this intentional?

Comment by Azure on X-risks to all life v. to humans · 2020-06-03T20:18:28.518Z · EA · GW

Thank you for this post! Very interesting.

(1) Is this a fair/unfair summary of the argument?

P1: We should be indifferent, on anti-speciesist grounds, as to whether humans or some other intelligent life form enjoys a grand future.

P2: The risk of extinction of only humans is strictly lower than the risk of extinction of humans plus all possible future (non-human) intelligent life forms.

C: Therefore we should revise downwards the value of avoiding the former / raise the value of avoiding the latter.

(2) Is knowledge about current evolutionary trajectories of non-human animals today likely to completely inform us about 're-evolution'? What are the relevant considerations?

Comment by Azure on What would a pre-mortem for the long-termist project look like? · 2020-04-12T09:10:53.234Z · EA · GW

Additionally, is it not likely that those scenarios are correlated?