Posts

How do you feel about the main EA facebook group? 2020-02-12T21:20:27.006Z · score: 19 (9 votes)
Only a few people decide about funding for community builders world-wide 2019-10-22T20:52:38.098Z · score: 57 (49 votes)
The evolutionary argument against cognitive enhancement research is weak 2019-10-16T20:46:09.253Z · score: 18 (8 votes)
The expected value of extinction risk reduction is positive 2018-12-09T08:00:00.000Z · score: 34 (23 votes)
An algorithm/flowchart for prioritizing which content to read 2017-11-11T19:17:37.615Z · score: 12 (14 votes)

Comments

Comment by janbrauner on Is anyone working on a comparative COVID-19 policy response dataset? · 2020-04-11T18:47:10.128Z · score: 1 (1 votes) · EA · GW

I'm not completely sure if I understand what you are looking for, but:

http://epidemicforecasting.org/containment

https://www.bsg.ox.ac.uk/research/research-projects/oxford-covid-19-government-response-tracker

https://www.acaps.org/projects/covid19/data

Comment by janbrauner on Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? · 2020-03-28T19:41:44.598Z · score: 2 (2 votes) · EA · GW

I wrote down some musings about this (including a few relevant links) in appendix 2 here.

Comment by janbrauner on Toby Ord’s ‘The Precipice’ is published! · 2020-03-11T20:43:47.610Z · score: 1 (1 votes) · EA · GW

I think I overheard Toby saying that the footnotes and appendices were dropped in the audiobook, and that, yes, those sections (which make up 50% of the book) should be the most interesting part for people already familiar with the X-risk literature.

Comment by janbrauner on How do you feel about the main EA facebook group? · 2020-02-14T22:00:11.464Z · score: 3 (2 votes) · EA · GW

No

Comment by janbrauner on How do you feel about the main EA facebook group? · 2020-02-12T21:39:02.420Z · score: 47 (24 votes) · EA · GW

So this is my very personal impression. I might be super wrong about this; that's why I asked this question. Also, I remember liking the main EA facebook group quite a bit in the past, so maybe I just can't properly relate to how useful the group is for people who are newer to EA thinking.

Currently, I avoid reading the EA facebook group the same way I avoid reading comments under youtube videos. Reading the group makes me angry and sad because of the ignorance and aggression displayed in the posts and especially in the comments. I think many comments do not meet the bar for intellectual quality or epistemic standards that we should want associated with EA. That's really no surprise; online discourse is not particularly known for high quality.

Overall, I feel like the main EA facebook group doesn't show the EA movement in a great light. I haven't thought much about this, but I think I would prefer stronger moderation for quality.

Comment by janbrauner on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T20:05:14.464Z · score: 3 (2 votes) · EA · GW

I first thought that "counterproposal passed" meant that a proposal very different from the one you suggested passed the ballot. But skimming the links, it seems that the counterproposals were actually similar to your original proposals?

Comment by janbrauner on Only a few people decide about funding for community builders world-wide · 2019-12-07T21:11:51.590Z · score: 20 (9 votes) · EA · GW

Thanks for bringing this to my attention; I modified the title and the corresponding part of the post.


I didn't have the time to check in with CEA before writing the post, so I had to choose between writing the post as is or not writing it at all. That's why the first line says (in italics): "I’m not entirely sure that there is really no other official source for local group funding. Please correct me in the comments."

I think I could have predicted that this would not be enough to keep people from walking away with a false impression, so I should have chosen a different headline.

Comment by janbrauner on The evolutionary argument against cognitive enhancement research is weak · 2019-10-24T23:14:18.027Z · score: 1 (1 votes) · EA · GW

That mostly seems to be semantics to me. There could be other things that we are currently "deficient" in, and we could figure that out by doing cognitive enhancement research.

As far as I know, the term "cognitive enhancement" is often used in the sense that I used it here, e.g. relating to exercise (we are currently deficient in exercise compared to our ancestors), taking melatonin (we are deficient in melatonin compared to our ancestors), and so on...

Comment by janbrauner on Only a few people decide about funding for community builders world-wide · 2019-10-24T23:04:59.885Z · score: 7 (5 votes) · EA · GW

Great to hear that several people are involved with making the grant decisions. I also want to stress that my post is not at all intended as a critique of the CBG programme.

Comment by janbrauner on Only a few people decide about funding for community builders world-wide · 2019-10-24T22:55:42.575Z · score: 4 (5 votes) · EA · GW

I agree that there is more to movement building than local groups and that the comparison to AI safety was not on the right level.

I still stand by my main point and think that it deserves consideration:

My main point is that there is a certain set of movement building efforts for which the CEA community building grant programme seems to be the only option. This set includes local groups and national EA networks but also other things. Some common characteristics might be that these efforts are oriented towards the earlier stages of the movement building funnel (compared to say, EAG) and can be conducted by independent movement builders.

Ideally, there should be more diverse "official" funding for this set of movement building efforts. As things currently are, private funders should at least be aware that only one major official funding source exists.

(If students running student groups can get funded by the university, that is another funding source that I wasn't aware of before).

Comment by janbrauner on Latest EA Updates for September 2019 · 2019-09-30T19:17:51.712Z · score: 3 (2 votes) · EA · GW

Love the "Grants" section

Comment by janbrauner on What should Founders Pledge research? · 2019-09-22T23:40:33.904Z · score: 9 (3 votes) · EA · GW

cognitive enhancement research

Comment by janbrauner on Alien colonization of Earth's impact the the relative importance of reducing different existential risks · 2019-09-06T22:18:31.854Z · score: 4 (3 votes) · EA · GW

We wrote a bit about a related topic in part 2.1 here: https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive

In there, we also cite a few posts by people who have thought about similar issues before. Most notably, as so often, this post by Brian Tomasik:

https://foundational-research.org/risks-of-astronomical-future-suffering/#What_if_human_colonization_is_more_humane_than_ET_colonization

Comment by janbrauner on Are we living at the most influential time in history? · 2019-09-06T21:13:48.559Z · score: 13 (4 votes) · EA · GW

How I see it:

Extinction risk reduction (and other types of "direct work") affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a "punting to future generations that live in hingey times" component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential centuries.

(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the "most influential times" has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, in contradiction to the assumption that the present is unhingey.)

Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.

Comment by janbrauner on Risk factors for s-risks · 2019-02-16T20:55:30.440Z · score: 2 (2 votes) · EA · GW

I don't have much to add, but I still wanted to say that I really liked this:

  • Great perspective; risk factors seem to be a really useful concept here
  • Very clearly written

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-30T19:13:07.974Z · score: 2 (2 votes) · EA · GW

These are all very good points. I agree that this part of the article is speculative, and you could easily come to a different conclusion.

Overall, I still think that this argument alone (part 1.2 of the article) points in the direction of extinction risk reduction being positive. Although the conclusion does depend on the "default level of welfare of sentient tools" that we are discussing in this thread, it more critically depends on whether future agents' preferences will be aligned with ours.

But I never gave this argument (part 1.2) that much weight anyway. I think that the arguments later in that article (part 2 onwards; I listed them in my answer to Jacy's comment) are more robust and thus more relevant. So maybe I somewhat disagree with your statement:

The expected value of the future could be extremely sensitive to beliefs about these sets (their sizes and average welfares). (And this could be a reason to prioritize moral circle expansion instead.)

To some degree this statement is, of course, true. The uncertainty gives some reason to deprioritize extinction risk reduction. But: The expected value of the future (with (post-) humanity) might be quite sensitive to these beliefs, but the expected value of extinction risk reduction efforts is not the same as the expected value of the future. You also need to consider what would happen if humanity goes extinct (non-human animals, S-risks by omission), non-extinction long-term effects of global catastrophes, option value,... (see my comments to Jacy). So the question of whether to prioritize moral circle expansion is maybe not extremely sensitive to "beliefs about these sets [of sentient tools]".

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-30T18:45:18.156Z · score: 1 (1 votes) · EA · GW

Hey Jacy,

I have written up my thoughts on all these points in the article. Here are the links.

  • "The universe might already be filled with suffering and post-humans might do something against it."

Part 2.2

  • "Global catastrophes, that don't lead to extinction, might have negative long-term effects"

Part 3

  • "Other non-human animal civilizations might be worse

Part 2.1

The final paragraphs of each section usually contain a discussion of how relevant I think each argument is. All these sections also have some quantitative EV estimates (linked or in the footnotes).

But you probably saw that, since it is also explained in the abstract. So I am not sure what you mean when you say:

It'd be great if at some point you could write up discussion of those other arguments,

Are we talking about the same arguments?

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-22T18:23:16.622Z · score: 2 (2 votes) · EA · GW

Regarding your second point, just a few thoughts:

First of all, an important point is how you think values and morality work. If two-thirds of humanity, after thorough reflection, disagree with your values, does this give you a reason to become less certain about your values as well? Maybe adopt their values to a degree? ...

Secondly, I am also uncertain how coherent/convergent human values will be. There seem to be good arguments for both sides, see e.g. this blog post by Paul Christiano (and the discussion with Brian Tomasik in the comments of that post): https://rationalaltruist.com/2013/06/13/against-moral-advocacy/

Third: In a situation like the one you described above, at least there would be huge room for compromise/gains from trade/... So if future humanity were split into the three factions you suggested, they would not necessarily fight a war until only one faction remains that can then tile the universe with its preferred version. Indeed, they probably would not, as cooperation seems better for everyone in expectation.

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-22T17:44:11.402Z · score: 2 (2 votes) · EA · GW

Hi Michael,

By "in expectation random", do you mean 0 in expectation?

Yes, that's what we meant.

I am not sure I understand your argument. You seem to say the following:

  • Post-humans will put "sentient tools" into harsher conditions than the ones the tools were optimized for.
  • If "sentient tools" are put into these conditions, their welfare decreases (compared with the situations they were optimized for).

My answer: The complete "side-effects" (in the meaning of the article) on sentient tools comprise bringing them into existence and using them. The relevant question seems to be whether this package is positive or negative, compared to the counterfactual (no sentient tools). Humanity might bring sentient tools into conditions that are worse for the tools than the conditions they were optimized for. Even these conditions might still be positive overall.

Apart from that, I am not sure if the two assumptions listed as bullet points above will actually hold for the majority of "sentient tools". I think that we know very little about the way tools will be created and used in the far future, which was one reason for assuming "zero in expectation" side-effects.

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-22T17:28:42.140Z · score: 2 (3 votes) · EA · GW

Hey Jacy,

I have seen and read your post. It was published after my internal "Oh my god, I really, really need to stop reading and integrating even more sources, the article is already way too long"-deadline, so I don't refer to it in the article.

In general, I am more confident about the expected value of extinction risk reduction being positive, than about extinction risk reduction actually being the best thing to work on. It might well be that e.g. moral circle expansion is more promising, even if we have good reasons to believe that extinction risk reduction is positive.

I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.

I personally don't think that this argument is very strong on its own. But I think there are additional strong arguments (in descending order of relevance):

  • "The universe might already be filled with suffering and post-humans might do something against it."
  • "Global catastrophes, that don't lead to extinction, might have negative long-term effects"
  • "Other non-human animal civilizations might be worse"
  • ...
Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-22T17:07:42.767Z · score: 3 (3 votes) · EA · GW

Curious how you're thinking about efforts that are intended to reduce x-risk but instead end up increasing it.

Uhm... Seems bad? :-)

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-22T17:05:47.224Z · score: 1 (1 votes) · EA · GW

Thanks for the comment. We added a navigable table of contents.

Comment by janbrauner on The expected value of extinction risk reduction is positive · 2018-12-16T22:21:56.809Z · score: 9 (6 votes) · EA · GW

Hi David, thanks for your comments.

1) This seems not to engage with the questions about short-term versus long-term prioritization and discount rates. I'd think that the implicit assumptions should be made clearer.

Yes, the article does not deal with considerations for and against caring about the long term. This is discussed elsewhere. Instead, the article assumes that we care about the long term (e.g. that we don't discount the value of future lives strongly), and analyses what implications follow from that view.

We tried to make that explicit. E.g., the first point under "Moral assumptions" reads:

Throughout this article, we base our considerations on two assumptions:
1. That it morally matters what happens in the billions of years to come. From this very long-term view, making sure the future plays out well is a primary moral concern.

2) It doesn't seem obvious to me that, given the universalist assumptions about the value of animal or other non-human species, the long term future is affected nearly as much by the presence or absence of humans. Depending on uncertainties about the Fermi hypothesis and the viability of non-human animals developing sentience over long time frames, this might greatly matter.

I think this point matters. Part 2.1 of the article deals with the implications of potential future non-human animal civilizations and extraterrestrials. I think the implications are somewhat complicated and depend quite a bit on your values, so I won't try to summarize them here.

4) S-risks are plausibly more likely if moral development is outstripped by growth in technological power over relatively short time frames, and existential catastrophe has a comparatively limited downside.

We don't try to argue for increasing the speed of technological progress.

Apart from that, it is not clear to me that extinction has a "comparatively limited downside" (compared to S-risks, you probably mean). It, of course, depends on your moral values. But even from a suffering-focused perspective, it may well be that we would - with more moral and empirical insight - come to realize that the universe is already filled with suffering. I personally would not be surprised if "S-risks by omission" (*) weighed pretty heavily in the overall calculus. This topic is discussed in part 2.2.

I don't have anything useful to say regarding your point 3).

(*) term coined by Lukas Gloor, I think.

Comment by janbrauner on Takeaways from EAF's Hiring Round · 2018-11-22T11:23:51.270Z · score: 4 (3 votes) · EA · GW

I think that also depends on the country. In my experience, references don't play as important a role in Germany as they do in the UK/US. In particular, the practice of referees submitting their reference directly to the university is uncommon in Germany. Usually, referees write a letter of reference for you, and the applicant hands it in themselves. Also, having references tailored to the specific application (which seems to be expected in the UK/US) is not common in Germany.

So, yes, I am also hesitant to ask my academic referees too often. If I knew that they would be contacted early in application processes, I would certainly apply for fewer positions. For example, I might not apply for positions that I probably won't get but that would be great if they worked out.

Comment by janbrauner on Comparative advantage in the talent market · 2018-04-14T12:19:40.985Z · score: 0 (0 votes) · EA · GW

I really like that idea. It might also be useful to check whether this model would have predicted past changes in career recommendations.

Comment by janbrauner on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T17:07:37.425Z · score: 2 (2 votes) · EA · GW

Hey Holden, thanks for doing this. Suppose I applied for the research analyst position and didn't get it. Which of the following would then be more likely to eventually land me a job at OPP, and how much more likely (assuming I would perform well in both)?

a) becoming research analyst at GiveWell

b) doing research in one of OPP's focus areas (biosecurity/AI safety).

Comment by janbrauner on Is Effective Altruism fundamentally flawed? · 2018-03-13T09:02:02.927Z · score: 5 (5 votes) · EA · GW

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 person or for each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that one should not choose between them by cost-effectiveness but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help. Everything is probabilities and expected value:

Say, for the sake of the argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and they will in expectation prevent 100 people (made-up number) from getting severe depression.

However, if you give $10,000 to GiveDirectly, certainly that will affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000 and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those.
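
To spell out the arithmetic behind those three options, here is a minimal sketch; the figures are the made-up, purely illustrative numbers from above, not real estimates:

```python
# Illustrative only: the made-up numbers from above, expressed as
# expected cases of severe depression prevented per $10,000 spent.
options = {
    "mental health charity": 100.0,
    "GiveDirectly": 0.1,
    "buying a Rolex": 0.0001,
}

for name, cases_prevented in options.items():
    cost_per_case = 10_000 / cases_prevented
    print(f"{name}: {cases_prevented} expected cases prevented "
          f"(~${cost_per_case:,.0f} per expected case)")
```

In expectation, the options differ by several orders of magnitude, even though each of them only probabilistically prevents the most intense suffering.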

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 patients they prevent from developing depression, will maybe cause depression in 1 patient (because interventions sometimes have adverse effects, ...). So if you decide between burning the money and giving it to the mental health charity, you are deciding between doing nothing and preventing 100 episodes of depression while causing 1. A decision that you are, given your stated values, indifferent about.

Further arguments for why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781092

Comment by JanBrauner on [deleted post] 2018-01-03T16:50:24.669Z

You write: "In this discussion, there are two considerations that might at first have appeared to be crucial, but turn out to look less important. The first such consideration is whether existence is in general good or bad, à la Benatar (2008). If existence really should turn out to be a harm, sufficiently unbiased descendants would plausibly be able to end it. This is the option value argument. In turn, option value itself might appear to be a decisive argument against doing something so irreversible as ending humanity: we should temporise, and delegate this decision to our descendants. But not everyone enjoys option value, and those who suffer are relatively less likely to do so. If our descendants are selfish, and find it advantageous to allow the suffering of powerless beings, we may not wish to give them option value. If our descendants are altruistic, we do want civilisation to continue, but for reasons that are more general than option value."

Since the option value argument is not very strong, it seems to be a very important consideration "whether existence in general is good or bad" - or, less dichotomously, where the threshold for a life worth living lies. Space colonization means more (sentient) beings. If our descendants are altruistic (or have values that we, upon reflection, would endorse), everything is fine anyway. If our descendants are selfish, and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don't actively value causing harm, which seems unlikely). If they are selfish and the threshold is fairly high - i.e. a lot of things in a life have to go right in order to make it worth living - then most powerless beings will probably have bad lives, possibly rendering overall utility negative.

Comment by janbrauner on An algorithm/flowchart for prioritizing which content to read · 2017-11-14T16:34:25.309Z · score: 1 (1 votes) · EA · GW

Sorry, I use plain old google docs as well :|

Comment by janbrauner on Multiverse-wide cooperation in a nutshell · 2017-11-03T11:51:00.460Z · score: 4 (4 votes) · EA · GW

This was really interesting and probably presented as clearly as such a topic can possibly be.

Disclaimer: I don't know how to deal with infinities mathematically. What I am about to say is probably very wrong.

For every conceivable value system, there is an exactly opposing value system, so that there is no room for gains from trade between the systems (e.g. suffering maximizers vs suffering minimizers).

In an infinite multiverse, there are infinitely many agents with decision algorithms sufficiently similar to mine to allow for MSR. Among them, there are infinitely many agents that hold any given value system. So whenever I cooperate with one value system, I defect on infinitely many agents that hold the exactly opposing values. So infinity seems to make cooperation impossible??

Sidenote: If you assume decision algorithms and values to be orthogonal, why do you suggest that one should "adjust [the values to cooperate with] by the degree their proponents are receptive to MSR ideas"?

Best, Jan

Comment by janbrauner on Using a Spreadsheet to Make Good Decisions: Five Examples · 2017-10-13T09:34:38.782Z · score: 7 (7 votes) · EA · GW

Just wanted to say that I found this article really helpful and have already sent it to many people who asked me how they should make a decision. Please never take it down :D

Comment by JanBrauner on [deleted post] 2017-08-09T09:09:13.473Z

Seems interesting; how can one stay updated?

Comment by janbrauner on Reading recommendations for the problem of consequentialist scope? · 2017-08-02T09:02:48.795Z · score: 1 (1 votes) · EA · GW

http://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

This could be construed as arguing for an approach that takes all perspectives one can think of into account, and then discounts them by uncertainty.

Comment by janbrauner on An Argument for Why the Future May Be Good · 2017-07-21T16:54:38.679Z · score: 3 (3 votes) · EA · GW

Here is another argument why the future with humanity is likely better than the future without it. Possibly, there are many things of moral weight that are independent of humanity's survival. And if you think that humanity would care about moral outcomes more than zero, then it might be better to have humanity around.

For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably have even more moral weight. There are 10x more wild birds than farmed birds and 100-1000x more wild mammals than farmed animals (and of course many, many more fish or even invertebrates). I am not convinced that wild animals' lives are on average not worth living (= that they contain more suffering than happiness), but even without that, there surely is a huge amount of suffering. If you believe that humanity will have the potential to prevent/alleviate that suffering some time in the future, that seems pretty important.

The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and maybe our views will fundamentally change in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.

Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around, as compared to no intelligent species.

Comment by janbrauner on How can we best coordinate as a community? · 2017-07-13T18:20:44.842Z · score: 6 (6 votes) · EA · GW

With regards to "following your comparative advantage":

Key statement: While "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, and maybe some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective area of expertise. (Let's assume they could be motivated to do so.)
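
To make the prisoner's-dilemma structure concrete, here is a minimal sketch with made-up payoffs: the 1000x urgency ratio from the example above, plus an assumed 50% productivity penalty for retraining (that penalty is my assumption, not part of the example):

```python
# Illustrative only. Assumptions: staying in one's own field produces 1 unit
# of work on that cause; retraining produces only 0.5 units on the other
# cause. Ann weights cause Y 1000x as heavily as cause X, Ben the reverse.

def work_done(ann_switches: bool, ben_switches: bool):
    """Units of work produced on causes X and Y under each choice."""
    x = (0.0 if ann_switches else 1.0) + (0.5 if ben_switches else 0.0)
    y = (0.5 if ann_switches else 0.0) + (0.0 if ben_switches else 1.0)
    return x, y

for ann_switches in (False, True):
    for ben_switches in (False, True):
        x, y = work_done(ann_switches, ben_switches)
        ann_utility = 1 * x + 1000 * y   # Ann: Y is 1000x as urgent as X
        ben_utility = 1000 * x + 1 * y   # Ben: X is 1000x as urgent as Y
        print(f"Ann {'switches' if ann_switches else 'stays':8} | "
              f"Ben {'switches' if ben_switches else 'stays':8} | "
              f"Ann: {ann_utility:7.1f} | Ben: {ben_utility:7.1f}")
```

With these numbers, switching is better for each of them whatever the other does, yet both staying beats both switching by both of their lights; that is the dilemma the norm is meant to resolve.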

However, from a personal perspective, let's look at Ann's situation: In reality, of course, there will rarely be a Ben to mirror Ann, who would also be considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade that she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y."

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm"

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.

I see one consideration for why Ann should continue working towards cause X: If Ann believes that EA is going to grow a lot, EA will reach many people with a better comparative advantage for cause Y. And if EA successfully promotes said norm, those people will all work on cause Y, until Y is no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it was best for her to continue contributing to X. However, I think this consideration is fairly uncertain and I would not give it high weight in my decision process.

So it seems that

  • it clearly makes sense (for CEA / 80,000 Hours / ...) to promote such a norm
  • it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency.

All in all, the situation seems pretty weird, and there does not seem to be a consensus amongst EAs on how to deal with this. A real-world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research; others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

PPS: I am new to this forum and need 5 karma to be able to post threads. So feel free to upvote.