What (other) posts are you planning on writing?

post by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-04-04T06:18:46.618Z · EA · GW · 1 comment

This is a question post.



This is the second round [EA · GW] of a question I asked last year about what posts you are planning on writing, so that people can share progress and get community feedback and support.


Posting Resources


answer by MichaelDickens · 2020-08-19T22:30:11.255Z · EA(p) · GW(p)

I have about 60 EA-related ideas right now. This list includes some of the most promising ones, broken down by category. I am interested in feedback on which ideas people like the best.

Plus signs indicate how well thought-out an idea is:

  • + = idea seems interesting, but I have no idea what to say about it
  • ++ = partially formed concept, but still a bit fuzzy
  • +++ = fully-formed concept, just need to figure out the details/actually do it

Fundamental problems

  • "Pascal's Bayesian Prior Mugging": Under "longtermist-friendly" priors, if a mugger asks for $5 in exchange for an unspecified reward, you should give the $5 ++
  • If causes differ astronomically in EV, then personal fit in career choice is unimportant ++
  • EAs should focus on fundamental problems that are only relevant to altruists (e.g., infinity ethics yes, explore/exploit no) +++
  • The case for prioritizing "philosophy of priors" ++
  • How quickly do forecasting estimates converge on reality? (use Metaculus API) +++
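As a sketch of how the forecasting-convergence question above might be operationalised: bucket community predictions by how far they were made before resolution and compute a mean Brier score per bucket. The data below is synthetic for illustration; real records would come from the Metaculus API (question histories plus resolutions), whose exact endpoints and field names aren't assumed here.

```python
from collections import defaultdict

def brier_by_horizon(records, bucket_days=30):
    """Mean Brier score of predictions, bucketed by days before resolution.

    records: iterable of (days_before_resolution, predicted_prob, outcome),
    where outcome is 1 if the question resolved "yes", else 0.
    """
    buckets = defaultdict(list)
    for days, p, outcome in records:
        buckets[days // bucket_days].append((p - outcome) ** 2)
    return {b * bucket_days: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Illustrative synthetic data, not real forecasts:
records = [
    (5, 0.9, 1), (10, 0.8, 1), (40, 0.7, 1), (70, 0.6, 1),
    (5, 0.1, 0), (10, 0.3, 0), (40, 0.4, 0), (70, 0.5, 0),
]
print(brier_by_horizon(records))
```

If forecasts converge on reality, the Brier score should shrink as time-to-resolution falls, which is exactly what this bucketed view would show.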

Investing for altruists

  • Alternate version of How Much Leverage Should Altruists Use? that assumes EMH +++
  • How risk-averse should altruists be (and how does it vary by cause)? +
  • Can patient philanthropists take advantage of investors' impatience? +

Giving now vs. later

  • Reverse-engineering the philanthropic discount rate from observed market rates +++
  • Optimal behavior in extended Ramsey model that allows spending on cash transfers or x-risk reduction +++
  • If giving later > now, what does that imply for talent vs. funding constraints? +
  • Is movement-building an expenditure or an investment? +
  • Fermi estimate of the cost-effectiveness of improving the EA spending rate +++
  • Prioritization research might need to happen now, not later ++

Long-term future

  • If technological growth linearly increases x-risk but logarithmically increases well-being, then we should stop growing at some point ++
  • Estimating P(existential catastrophe) from a list of near-catastrophes +++
  • Thoughts on doomsday argument +
  • Value of the future is dominated by worlds where we are wrong about the laws of physics ++
  • If x-risk reduction is permanent and people aren't longtermist, we should give later +++
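A toy version of the "stop growing at some point" claim above: if survival probability falls linearly in the technology level while well-being grows logarithmically, expected value peaks at a finite technology level. The functional forms and the constant k are assumptions purely for illustration, not estimates.

```python
import math

def expected_value(x, k=0.01):
    # Toy model: well-being grows like log(1 + x) with technology level x,
    # while survival probability falls linearly as 1 - k*x.
    return math.log(1 + x) * max(0.0, 1 - k * x)

# Grid search for the technology level that maximises expected value.
grid = [i / 10 for i in range(1, 1000)]
best = max(grid, key=expected_value)
print(best)  # an interior optimum: growing further lowers expected value
```

The point of the sketch is just that the optimum is interior: under these assumptions, neither zero growth nor unbounded growth maximises expected value.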


  • How should we expect future EA funding to look? +
  • Can we use prediction markets to enfranchise future generations? (Predict what future people will want, and then the government has to follow the predictions) +
  • Altruistic research might have increasing marginal utility ++
  • "Suspicious convergence" is not that suspicious because people seek out actions that look good across multiple assumptions +++

comment by Alex HT · 2020-08-20T06:52:02.823Z · EA(p) · GW(p)

I'd really like to see "If causes differ astronomically in EV, then personal fit in career choice is unimportant"

comment by Ozzie Gooen (oagr) · 2020-08-20T12:47:04.036Z · EA(p) · GW(p)

I like that these generally seem quite clear and focused.

In terms of decision relevance and benefit, I get the impression that several funders and meta EA orgs feel a crunch from not having great prioritization research, and if better work emerges, they may change funding fairly quickly. I'm less optimistic about career-change-type work, mainly because it seems like it would take several more years to pay off (there's a lag between convincing someone and having them start producing research).

I'm skeptical of how much research into investments will change investing behaviour in the next 2-10 years. I don't get the impression OpenPhil or other big donors are closely following these topics.

Therefore I'm more excited about the Giving Now/Later and Long-Term Future work.

Another way of phrasing this is that I think we have a decent discount rate (maybe 10% a year), plus I think that high-level research prioritization is a particularly useful field if done well. 

A few years back, a relatively small amount of investigation into AI safety (maybe 20 person-years?) led to a huge shift in OpenPhil funding and EA talent.

I would be curious to hear directly from them. I think that work that influences the big donors is the highest leverage at this point, and I also get the impression that there is a lot of work that could change their minds. But I could be wrong.

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-08-20T09:15:00.724Z · EA(p) · GW(p)

I'd be interested in basically all of the Giving Now vs. Later posts, but especially:

  • Reverse-engineering the philanthropic discount rate from observed market rates +++
  • If giving later > now, what does that imply for talent vs. funding constraints? +
  • Is movement-building an expenditure or an investment? +
  • Prioritization research might need to happen now, not later ++

answer by MichaelA · 2020-04-07T02:51:36.800Z · EA(p) · GW(p)

A bunch of posts related to The Precipice

I recently finished Toby Ord's The Precipice, and thought it was an excellent and very important book. I plan to write a bunch of posts that summarise, comment on, or take inspiration from various parts of it. Most are currently very early-stage, but the working titles are below.

Key uncertainties/questions:

  • Is there anyone who's already planning to write similar things? I probably won't have time to write all the things I've planned. So if someone else is already likely to pursue ideas similar to some of these, we could potentially collaborate, or I could share my notes and thoughts, let you take that particular topic from there, and allocate my time to other things.

Working titles:

  • Defining existential risks and existential catastrophes
  • My thoughts on Toby Ord's policy & research recommendations
  • Existential security
  • Civilizational collapse and recovery: Toby Ord's views and my doubts
  • The Terrible Funnel: Estimating odds of each step on the x-risk causal path (this title is especially "working")
    • The idea here would be to adapt something like the "Great Filter" or "Drake Equation" reasoning to estimating the probability of existential catastrophe, using how humanity has fared in prior events that passed or could've passed certain "steps" on certain causal chains to catastrophe [EA · GW].
    • E.g., even though we've never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each "step" to the next one can inform what would likely happen if we did face a bioengineered pathogen, or if it did get to a pandemic level.
    • This idea seems sort of implicit in The Precipice, but isn't really spelled out there. Also, as is probably obvious, I need to do more to organise my own thoughts on it.
    • This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don’t apply to natural pandemics. Or that might be a separate post.
  • Developing - but not deploying - drastic backup plans
  • “Macrostrategy”: Attempted definitions and related concepts
    • This would relate in part to Ord’s concept of “grand strategy for humanity”
  • Collection of notes
  • A post summarising the ideas of existential risk factors and existential security factors?
    • I suspect I won’t end up writing this, but I think someone should. For one thing, it’d be good to have something people can reference/link to that explains that idea (sort of like the role EA Concepts serves).
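The "Terrible Funnel" estimate described above amounts to chaining conditional probabilities along a causal path, Drake-equation style. A minimal sketch of the structure of that calculation, where the step labels and all numbers are made-up placeholders rather than estimates:

```python
def p_catastrophe(steps):
    """Chain conditional step probabilities along one causal path.

    steps: list of (label, p_next), where p_next is the probability of
    advancing from this step to the next, which would be estimated from
    how often historical events made that transition.
    """
    p = 1.0
    for _label, p_next in steps:
        p *= p_next
    return p

# Placeholder numbers only, to show the shape of the estimate:
bio_path = [
    ("pathogen emerges", 0.5),
    ("reaches outbreak", 0.2),
    ("becomes pandemic", 0.1),
    ("causes collapse", 0.01),
]
print(p_catastrophe(bio_path))  # ~ 0.5 * 0.2 * 0.1 * 0.01 = 1e-4
```

The interesting work is, of course, in estimating each conditional probability from near-misses; the multiplication itself is the easy part.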

answer by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-04-05T04:18:12.720Z · EA(p) · GW(p)

Local Career Advice Bottlenecks

Status: First draft, series of 3 posts.

The Local Career Advice Network ran a group organisers' survey [EA · GW] to evaluate overall career advice bottlenecks in the community. There will likely be 3 write-ups on the following topics:

  • the main bottlenecks group organisers observe their members facing
  • the main bottlenecks group organisers face when trying to give high-quality careers advice
  • evaluation of career advice events and activities run by groups

Key uncertainties/questions

  • Nothing as of now. I'll add to this comment as thoughts arise.

answer by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-04-05T04:12:37.509Z · EA(p) · GW(p)

Running self-directed projects

Status: early stage

I've run a number of projects over the last few months and thought it might be useful to share my experiences, successes/failures, and lessons learnt. I may also present the insights from these projects at some point.

Key uncertainties/questions

  • How valuable would people find this?

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-08-20T09:11:15.219Z · EA(p) · GW(p)

Update: I ended up writing & publishing this post [EA · GW].

answer by Vaidehi Agarwalla (vaidehi_agarwalla) · 2020-04-05T02:48:52.335Z · EA(p) · GW(p)

Career Change Interviews

Status: First Draft

This is a writeup of qualitative research Benjamin Skubi and I did in summer 2019, interviewing 20 EAs at various stages of a career change process. It'll cover:

  • stages of our interviewees' EA journeys (an alternative perspective to the funnel model, focusing on an individual's journey)
  • what inspires a career change, what the change process looks like, commonly mentioned bottlenecks and useful resources
  • recommendations/useful tips for career changers and group organisers

Key uncertainties/questions

  • I'm not sure whether to keep the recommendations in a separate section of the same writeup or to create a new post for them.

You can see the updates from my previous posts here [EA(p) · GW(p)]

comment by MichaelA · 2020-04-06T08:53:24.231Z · EA(p) · GW(p)

This sounds interesting!

Do you mean career changes from "non-EA-influenced" paths to "EA-influenced" paths, career changes between "EA-influenced" paths, or career changes in general?

Re having one post vs splitting recommendations out: I often use something like the following heuristic: "If the post contains multiple sets of ideas/points, which are relatively easy to understand without each other, which may offer value by themselves (i.e., without the other set), and which may be valuable/interesting to slightly different sets of people, it's probably worth splitting the post into multiple, more bitesized chunks."

So I'd guess that it may be best to split the recommendations out, if they can be understood out of context and if they're decently long (something like "at least 500 words, pretty confidently if over 1000 words").

One counterpoint is that this may be a bad idea when a set of ideas could be "understood" out of context, but maybe in a distorted form, or with too little emphasis on other considerations. (Like how it might be possible to get people to perfectly understand the earning-to-give concept without other context, but this could lead to it being emphasised too strongly such that 80k/EA more broadly is misunderstood, as discussed in the fidelity model).

1 comment

Comments sorted by top scores.

comment by MichaelA · 2020-04-07T02:42:41.626Z · EA(p) · GW(p)

Thanks for making this question, and last year's version! I think this is a great idea.

In last year's one, you wrote:

We think this will accomplish a few things:
1. Encourage people to publish the posts
2. Help them prioritize between post ideas based on community feedback
3. Get directed to useful readings/resources
4. (For everyone) Get a sense of what the community is working on

I think there's another benefit, related to (or perhaps implicit in) the 4th item you list: having this public record of what people are planning on writing can help people discover whether others are planning to write similar things. This could lead to:

  • them collaborating
  • one person just passing their notes/drafts/thoughts to the other and then moving on to other things
  • one person just dropping that plan and moving on to other things (this seems less good, but still maybe better than doubled-up effort)

It seems like it's relatively rare for two people to have very similar planned posts, but I've seen it happen at least occasionally. Of course, there's no real harm in two posts on a similar idea, and each person probably has a somewhat different angle. But it still seems better for the people to be made aware of each other, so they can collaborate and/or make informed decisions about whether pursuing their plan should still be their top priority.

This also suggests a sixth, somewhat more speculative benefit: some people may have good ideas worth writing up, but worry that someone else has already written them or will do so soon, so they don't want to waste the effort. If publicising one's planned posts here (or somewhere like it) becomes common practice, these people could check here, see that no one has mentioned a similar idea, and feel more confident investing time in the post. And if they post here, others could let them know if similar things have already been written in the past (related to your 3rd item).

I'm planning to share many of my own planned posts here, primarily for those two extra benefits, as well as the 3rd item you mention.