GiveWell estimates that it directed or influenced about $161 million in 2018, of which $64 million came from Good Ventures grants. Good Ventures is the philanthropic foundation founded and funded by Dustin Moskovitz and Cari Tuna. The $161 million directed by GiveWell seems to represent a comfortable majority of total 'EA' donations.
If you want to count OpenPhil's donations as EA donations, that majority isn't so comfortable. In 2018, OpenPhil recommended a bit less than $120 million (excluding Good Ventures' donations to GiveWell charities), almost all of which came from Good Ventures, and it recommended more in both 2017 and 2019. This is a great source on OpenPhil's funding.
The fewer competitive organisations there are in the space where you're aiming to build career capital and the narrower the career capital you want to build (e.g. because you're unsure about cause prior or because the roles you're aiming at require wide skillsets), the less frequently changing roles makes sense.
Is this a typo? I'd expect uncertainty about cause prioritization and roles requiring wide skillsets to favor less narrow career capital (and to increase the benefits of changing roles), not narrower career capital.
I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.
I would've expected you to cite the threshold for specialisation as longer than a year; as stated, I think most EAs would agree with the last sentence. Do you think that the gains from specialisation keep accumulating after a year, or do you think that someone switching roles every three years will achieve at least half as much as someone who keeps working in the same role? (This might also depend on how narrowly you define a "role".)
Why is that? I don't know much about the area, but my impression is that we currently don't know what good space governance would look like from an EA perspective, so we can't advocate for any specific improvement. Advocating for more generic research into space governance would probably be net positive, but it seems a lot less leveraged than having EAs look into the area, since I expect longtermists to have different priorities and pay attention to different things (e.g. that laws should be robust to vastly improved technology, and that colonization of other solar systems matters more than asteroid mining despite being further away in time).
If you have images in your posts, you have to upload them somewhere on the internet (e.g. https://imgur.com/)
If you've put the images in a Google Doc and made the doc public, then you've already uploaded the images to the internet, and can link to them there. If you use the WYSIWYG editor, you can even copy-paste the images along with the text.
I'm not sure whether I should expect Google or Imgur to preserve their image links for longer.
(Nearly) every insomniac I’ve spoken to knows multiple others
Just want to highlight a potential selection effect: If these people spontaneously tell you that they're insomniacs, they're the type of people who will tell other people about their insomnia, and thus get to know multiple others. There might also be silent insomniacs, who don't tell people they're insomniacs and don't know any others. You're less likely to speak with those, so it would be hard to tell how common they are.
Climate change by itself should not be considered a global catastrophic risk (>10% chance of causing >10% of human mortality)
I'm not sure if any natural class of events could be considered global catastrophic risks under this definition, except possibly all kinds of wars and AI. It seems pretty weird to not classify e.g. asteroids or nuclear war as global catastrophic risks, just because they're relatively unlikely. Or is the 10% supposed to mean that there's a 10% probability of >10% of humans dying conditioned on some event in the event class happening? If so, this seems unfair to climate change, since it's so much more likely than the other risks (indeed, it's already happening). Under this definition, I think we could call extreme climate change a global catastrophic risk, for some non-ridiculous definition of extreme.
It’s very difficult to communicate to someone that you think their life’s work is misguided
Just to emphasize the value of prudence and nuance: I think that this^ is a bad and possibly false way to formulate things. Being the "marginal best thing to work on for most EA people with flexible career capital" is a high bar to clear, one that most people are not aiming for, and work to prevent climate change still seems like a good thing to do if the counterfactual is doing nothing. I'd only be tempted to call work on climate change "misguided" if the person in question believes that the risks from climate change are significantly bigger than they in fact are, and wouldn't be working on climate change if they knew better. While this is true for a lot of people, I (perhaps naively) think that people who've spent their lives fighting climate change know a bit more. And indeed, someone who has spent their life fighting climate change probably has career capital that's pretty specialized towards that, so it might be correct for them to keep working on it.
I'm still happy to inform people (with extreme prudence, as noted) that other causes might be better, but I think that "X is super important, possibly even more important than Y" is a better way to do this than "work on Y is misguided, so maybe you want to check out X instead".
It seems to say that the same level of applicant pool growth produces fewer mentors in mentorship-bottlenecked fields than in less mentorship-bottlenecked fields, but I don't understand why.
If a field is bottlenecked on mentors, it has too few mentors per applicant, or put differently, more applicants than the mentors can accept. Assuming that each applicant needs some fixed amount of time with a mentor before becoming senior themselves, increasing the size of the applicant pool doesn't increase the number of future senior people, because the present mentors won't be able to accept more people just because the applicant pool is bigger.
More people in the applicant pool may lead to future senior people being better (because the best people in a larger pool are probably better).
It's not actually true that a fixed amount of mentor input makes someone senior. With a larger applicant pool, you might be able to select for people who require less mentor input, or who have a larger probability of staying in the field, which will translate into more future senior people (but still significantly fewer than in applicant-bottlenecked fields).
My third point above: some people might be able to circumvent applying to the mentor-constrained positions altogether, and still become senior.
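The mentor-bottleneck point can be put in a toy model (the function, numbers, and mentor capacity here are my own assumptions, purely for illustration):

```python
def future_seniors(applicants, mentors, slots_per_mentor=2):
    """Toy model: each junior needs a fixed amount of mentoring before
    becoming senior, so mentor capacity caps the number of future seniors."""
    return min(applicants, mentors * slots_per_mentor)

# Mentor-bottlenecked field: growing the applicant pool adds nothing.
print(future_seniors(100, 10))  # 20
print(future_seniors(200, 10))  # 20
# Applicant-bottlenecked field: growing the pool helps directly.
print(future_seniors(15, 10))   # 15
```

Selecting for applicants who need less mentoring (point three above) amounts to raising `slots_per_mentor`, which is why a bigger pool still helps a little even in the bottlenecked case.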
Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them is that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.
people seem to put credence in it even before Will’s argument.
This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure not to update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution is that nobody ever had any reason to simulate it.) Why would simulations of early humans be particularly interesting? I'd guess that this bottoms out in them having disproportionately much influence over the universe relative to how cheap they are to simulate, which is very close to the argument that Will is making.
P(simulation | seems like HOH) = P(seems like HOH | simulation)*P(simulation) / (P(seems like HOH | simulation)*P(simulation) + P(seems like HOH | not simulation)*P(not simulation))
Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HOH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HOH should make us assign a lot more than 50% to being in a simulation, which is a stronger claim than HOH just being strong evidence for us being in a simulation.
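To make the prior-sensitivity point concrete, here's a quick calculation using the formula above (the likelihood and prior values are made up for illustration):

```python
def posterior_simulation(p_sim, p_hoh_given_sim, p_hoh_given_not_sim):
    """Bayes' rule: P(simulation | seems like HoH)."""
    numerator = p_hoh_given_sim * p_sim
    denominator = numerator + p_hoh_given_not_sim * (1 - p_sim)
    return numerator / denominator

# A strong 100:1 likelihood ratio, but a low 0.1% prior:
print(posterior_simulation(0.001, 0.1, 0.001))  # ~0.091, still far below 50%
```

So even strong evidence for the simulation hypothesis need not push the posterior past 50% unless the prior is already substantial.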
It's certainly true that fields bottlenecked on mentors could make use of more mentors, right now. If you're already skilled in the area, you can therefore have very high impact by joining/staying in the field.
However, when young people are considering whether to join in order to become mentors, as you suggest, they should consider whether the field will still be bottlenecked on mentors at the time when they would become one, in 10 years' time or so. Since there are lots of junior applicants right now, the seniority bottleneck will presumably be smaller by then.
Moreover, insofar as the present lack of mentors is the main bottleneck preventing junior applicants from eventually becoming senior, adding an extra person to the pool of applicants (yourself) will create fewer counterfactual future mentors than if you were in a field that was less mentorship-constrained. (This doesn't mean it isn't worth doing, though. You adding yourself to the pool will still increase its value.)
It also implies that it can be extra valuable to move into the field if you're able to learn relevant skills without making use of present mentors (e.g. by being in a good and relevant PhD-program, or by doing focused studying that few others are doing).
Images can't be added to comments; is that what you were trying to find a workaround for?
It's possible to add images to comments by selecting and copying them from anywhere public (note that it doesn't work if you right click and choose 'copy image'). In this thread, I do it in this comment.
I see that I can't do it manually, though, by selecting text. I wouldn't expect it to be too difficult to add that possibility, given that it's already possible in another way?
With regards to images, I get flawless behaviour when I copy-paste from Google Docs. Somehow, the images automatically get converted and link to the images hosted with Google (in the editor they're only visible as small cameras). Maybe you can get the same behaviour by making your docs public?
Actually, I'll test copying an image from a google doc into this comment: (edit: seems to be working!)
Copying all relevant information from the lesswrong faq to an EA forum faq would be a good start. The problem of how to make its existence public knowledge remains, but that's partly solved automatically by people mentioning/linking to it, and it showing up in google.
I'm by no means schooled in academic philosophy, so I could also be wrong about this.
I tend to think about e.g. consequentialism, hedonistic utilitarianism, preference utilitarianism, lesswrongian 'we should keep all the complexities of human value around'-ism, deontology, and virtue ethics as ethical theories. (This is backed up somewhat by the fact that these theories' Wikipedia pages name them ethical theories.) When I think about metaethics, I mainly think about moral realism vs moral anti-realism and their varieties, though the field contains quite a few other things, as cole_haus mentions.
My impression is that HLI endorses (roughly) hedonistic utilitarianism, and you said that you don't, which would be an ethical disagreement. The borderlines aren't very sharp, though. If HLI had asserted that hedonistic utilitarianism was objectively correct, then you could certainly have made a metaethical argument that no ethical theory is objectively correct. Alternatively, you might be able to bring metaethics into it if you think that there is an ethical truth that isn't hedonistic utilitarianism.
(I saw you quoting Nate's post in another thread. I think you could say that it makes a meta-ethical argument that it's possible to care about things outside yourself, but that it doesn't make the ethical argument that you ought to do so. Of course, HLI does care about things outside themselves, since they care about other people's experiences.)
For whatever it's worth, my metaethical intuitions suggest that optimizing for happiness is not a particularly sensible goal.
Might just be a nitpick, but isn't this an ethical intuition, rather than a metaethical one?
(I remember hearing other people use "metaethics" in cases where I thought they were talking about object level ethics, as well, so I'm trying to understand whether there's a reason behind this or not.)
Has Kahneman actually stated that he thinks life satisfaction is more important than happiness? In the article that Habryka quotes, all he says is that most people care more about their life satisfaction than their happiness. As you say, this doesn't necessarily imply that he agrees. In fact, he does state that he personally thinks happiness is important.
(I don't trust the article's preamble to accurately report his beliefs when the topic is as open to misunderstandings as this one is.)
We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.
If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one. Also, if the only effect of disruption was to re-randomize the world order, then the only thing you'd need for disruption to be positive is for the current state to be worse than the average civilisation from the distribution. Maybe this is what you mean by "particularly bad state", but intuitively, I interpret that more like the bottom 15%.
There are certainly arguments to make for our world being better than average. But I do think that you actually have to make those arguments, and that without them, this abstract model won't tell you if disruption is good or bad.
If you go to "Edit account", there's a check box that says "Activate markdown editor". If you un-check that one (I would've expected it to be unchecked by default, but maybe it isn't) you get formatting options just by selecting your text.
Although psychedelics are plausibly good from a short-termist view, I think the argument from the long-termist view is quite weak. Insofar as I understand it, psychedelics would improve the long term by
1. Making EAs or other well-intentioned people more capable.
2. Making people more well-intentioned. I interpret this as either causing them to join/stay in the EA community, or causing capable people to become altruistically motivated (in a consequentialist fashion) without the EA community.
Regarding (1), I could see a case for privately encouraging well-intentioned people to use psychedelics, if you believe that psychedelics generally make people more capable. However, pushing for new legislation seems like an exceedingly inefficient way to go about this. Rationality interventions are unique in that they are quite targeted: they identify well-intentioned people and give them the techniques that they need. Pushing for new psychedelic legislation, however, could only help by making the entire population more capable, including the much smaller population of well-intentioned people. I don't know exactly how hard it is to change legislation, but I'd be surprised if it was worth doing solely for the effect on EAs and other aligned people. New research suffers from a similar problem: good medical research is expensive, so you probably want to have a pretty specific idea about how it benefits EAs before you invest a lot in it.
Regarding (2), I'd be similarly surprised if campaigning for new legislation -> more people use psychedelics -> more people become altruistically motivated -> more people join the EA community was a better way to get people into EA than just directly investing in community building.
For both (1) and (2), these conclusions might change if you cared less about EAs in particular, and thought that the future would be significantly better if the average person were somewhat more altruistic or somewhat more capable. I could be interested in hearing such a case. This doesn't seem very robust to cluelessness, though, given the uncertainty about how psychedelics affect people, and the uncertainty about how increasing general capabilities affects the long term.
Meta note: that you got downvotes (I can surmise this from the number of votes and the total score) seems to suggest this is advice people don't want to hear, but maybe they need.
I don't think this position is unpopular in the EA community. "You have more than one goal, and that's fine" got lots of upvotes, and my impression is that there's a general consensus that breaks are important and that burnout is a real risk (even though people might not always act according to that consensus).
I'd guess that it's getting downvotes because it doesn't really explain why we should be less productive: it just stakes out the position. In my opinion, it would have been more useful if it, for example, presented evidence showing that unproductive time is useful for living a fulfilled life, or presented an argument for why living a fulfilled life is important even for your altruistic values (which Jakob does more of in the comments).
Meta meta note: In general, it seems kind of uncooperative to assume that people need more of things they downvote.
If I remember correctly, 80,000 Hours has stated that they think 15% of people in the EA Community should be pursuing earning to give.
I think this is the article you're thinking about, where they're talking about the paths of marginal graduates. Note that it's from 2015 (though at least Will said he still thought it seemed right in 2016) and explicitly labeled with "Please note that this is just a straw poll used as a way of addressing the misconception stated; it doesn’t represent a definitive answer to this question".
2. For every reader, such a list would include many paths that they can’t take.
But it seems like there's another problem, closely related to this one: for every reader, the paths on such a list could have different orderings. If someone has a comparative advantage for a role, it doesn't necessarily mean that they can't aim for other roles: but it might mean that they should prefer the role that they have a comparative advantage for. This is especially true once we consider that most people don't know exactly what they could do and what they'd be good at; instead, their personal lists contain a bunch of things they could aim for, ordered according to different probabilities of having different amounts of impact.
In particular, I think it's a bad idea to take a 'big list', winnow away all the jobs that look impossible, and then aim for whatever is on top of the list. Instead, your personal list might overlap with others', but have a completely different ordering (yet hopefully contain a few items that other people haven't even considered, given that 80k can't evaluate all opportunities, like you say).
This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.
Hm, I thought one of the main worries was that major global powers wouldn't have to agree, since any country would be able to launch a geoengineering program on their own, changing the climate for the whole planet.
Do you think that global governance is good enough to disincentivize lone states from launching a program, purely from fear of punishment? Or would it be possible to somehow reverse the effects?
Actually, would you even need to be a state to launch a program like this? I'm not sure how cheap it could become, or if it'd be possible to launch in secret.
I am not so sure about the specific numerical estimates you give, as opposed to the ballpark being within a few orders of magnitude for SIA and ADT+total views (plus auxiliary assumptions)
I definitely agree about some numbers. Maybe I should have been more explicit about this in the post, but I have low credence in the exact distribution of f (as well as f_l, f_i, and f_s): it depends far too much on the absolute rate of planet formation and the speed at which civilisations travel.
However, I'm much more willing to believe that the average fraction of space that would be occupied by alien civilisations in our absence is somewhere between 30% and 95%, or so. A lot of the arbitrary assumptions that affect f cancel out when running the simulation, and the remaining parameters affect the result surprisingly little. My main (known) uncertainties are
Whether it's safe to assume that intergalactic colonisation is possible. From the perspective of total consequentialism, this is largely a pragmatic question about where we can have the most impact (which is affected by a lot of messy empirical questions).
How much the results would change if we allowed for a late increase in life more sudden than the one in Appendix C (either because of a sudden shift in planet formation or because of something like gamma ray bursts). Anthropics should affect our credence in this, as you point out, and the anthropic update would be quite large in favor. However, the prior probability of a very sudden increase seems small. That prior is very hard to quantify, and I think my simulation would be less reliable in the more extreme cases, so this possibility is quite hard to analyse.
Do you agree, or do you have other reasons to doubt the 30%-95% number?
This seems overall too pessimistic to me as a pre-anthropic prior for colonization
I agree that the mean is too pessimistic. The distribution is too optimistic about the impossibility of lower numbers, though, which is what matters after the anthropic update. I mostly just wanted a distribution that illustrated the idea about the late filter without having it ruin the rest of the analysis. f has almost exactly the same distribution after updating anyway, as long as f_s assigns negligible probability to numbers below 10^-10.
Given that the risk of nuclear war conditional on climate change seems considerably lower than the unconditional risk of nuclear war
Do you really mean that P(nuclear war | climate change) is less than P(nuclear war)? Or is this supposed to say that the risk of nuclear war and climate change is less than the unconditional probability of nuclear war? Or something else?
Wealth almost entirely belongs to the old. The median 60-year-old has 45 times (yes, forty-five times) the net worth of the median 30-year-old.
Hm, I think income might be a better measure than wealth. I'm not sure what they count as wealth, since the link is broken, but a pretty large fraction of that gap may be due to the fact that 60-year-olds need to own their house and their retirement savings. If the real reason that 30-year-olds lack wealth is that they don't need wealth, someone determined to give to charity might be able to gather money comparable to most 60-year-olds'.
Carl's comment renders this irrelevant for CEA lotteries, but I think this reasoning is wrong even for the type of lotteries you imagine.
In either one the returns are good in expectation purely based on you getting a 20% chance to 5x your donation (which is good if you think there's increasing marginal returns to money at this level), but also in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.
What you're forgetting is that in the 20% of worlds where you win, you'd rather have been in the pool without the thoughtful people. If you were, you would get to regrant 50k smartly, and a thoughtful person would still get to regrant the thoughtful pool's 40k. However, if you were in the pool with the thoughtful people, the thoughtful people won't get to regrant any money, and the 40k in the thoughtless group will go to some thoughtless cause.
When joining a group (under your assumptions, which aren't true for CEA), you increase the winnings of everyone while decreasing the probability that they win. In expectation, they all get to regrant the same amount of money. So the only situation where the decision between groups matters is if you have some very specific ideas about marginal utility, e.g. if you want to ensure that there exists at least one thoughtful lottery winner, and don't care much about a second.
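The claim that everyone's expected regranted amount is unchanged can be checked with a toy calculation (the function name and the pool sizes are mine, chosen for illustration; I use exact fractions to avoid floating-point noise):

```python
from fractions import Fraction

def expected_regrant(my_donation, others_donations):
    """In a donor lottery, the winner is drawn in proportion to contributions
    and regrants the whole pot, so each participant's expected regranted
    amount equals their own contribution."""
    pot = Fraction(my_donation) + sum(Fraction(d) for d in others_donations)
    p_win = Fraction(my_donation) / pot
    return p_win * pot  # algebraically this is just my_donation

# Joining a 40k pool of thoughtful donors vs a 40k pool of thoughtless ones:
print(expected_regrant(10_000, [40_000]))          # 10000
print(expected_regrant(10_000, [15_000, 25_000]))  # 10000
```

Since the expectation is identical regardless of the pool's composition, only distributional preferences (like wanting at least one thoughtful winner to exist) can break the tie.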
Since the post is very long, and since a lot of readers are likely to be familiar with some arguments already, I think a table of contents in the beginning would be very valuable. I sure would like one.
Reports we’ve heard indicate that extrusion capacity is currently the limiting factor driving up costs for plant-based alternatives in the United States. As a result, we’d only want to pursue this path if we have strong reason to believe that our plant-based alternative was not displacing a better plant-based alternative in the market.
What's the connection between extrusion capacity and not displacing better alternatives?
To see how these two arguments rest on different conceptions of intelligence, note that considering Intelligence(1), it is not at all clear that there is any general, single way to increase this form of intelligence, as Intelligence(1) incorporates a wide range of disparate skills and abilities that may be quite independent of each other. As such, even a superintelligence that was better than humans at improving AIs would not necessarily be able to engage in rapidly recursive self-improvement of Intelligence(1), because there may well be no such thing as a single variable or quantity called ‘intelligence’ that is directly associated with AI-improving ability.
While I'm not entirely convinced of a fast take-off, this particular argument isn't obvious to me. If the AI is better than humans at every cognitive task, then for every ability X that we care about, it will be better at the cognitive task of improving X. Additionally, it will be better at the cognitive task of improving its ability to improve X, etc. It will be better than humans at constructing an AI that is good at every cognitive task, and will thus be able to create one better than itself.
This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips.
This doesn't seem very unlikely to me. As a proof of concept, consider a paperclip maximiser able to simulate several clever humans at high speed. If it were posed a moral dilemma (and was motivated to answer it), it could perform above human level by simulating those humans at high speed (in a suitable situation where they are likely to produce an honest answer to the question), and directly report their output. However, it wouldn't have to be motivated by it.
I definitely expect that there are people who will lose out on happiness from donating.
Making it a bit more complicated, though, and moving out of the area where it's easy to do research: there are probably happiness benefits from things like 'being in a community' and 'living with purpose'. Giving 10% per year and adopting the role of 'earning to give', for example, might enable you to associate life-saving with every hour you spend at your job, which could be pretty positive (I think feeling that your job is meaningful is associated with happiness). My intuition is that the difference between 10% and 1% could be important for being able to adopt this identity, but I might be wrong. And a lot of the gains from high incomes probably come from increased status, which donating money is a way to get.
I'd be surprised if donating lots of money was the optimal thing to do if you wanted to maximise your own happiness. But I don't think there's a clear case that it's worse than the average person's spending.
Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, between ‘one which ends all human life’ and ‘one which ends all life’. They’d still have to work against the latter, which covers the artificial x-risks.
I agree that it covers AI, but I'm not sure about the other artificial x-risks. Nuclear winter severe enough to eventually kill all humans would definitely kill all large animals, but some smaller forms of life would survive. And while bio-risk could vary a lot in how many species were susceptible to it, I don't think anyone could construct a pathogen that affects everything.
Seems like there's still self-selection going on, depending on how much you think 'a lot' is, and how good you are at finding everyone who have thought about it that much. You might be missing out on people who thought about it for, say, 20 hours, decided it wasn't important, and moved on to other cause areas without writing up their thoughts.
On the other hand, it seems like people are worried about and interested in talking about AGI happening in 20 or 30 or 50 years time, so it doesn't seem likely that everyone who thinks 10-year timelines are <10% stops talking about it.
I remain unconvinced, probably because I mostly care about observer-moments, and don't really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can't quite see how it would look, yet. You might want to make those ethical intuitions as concrete as you can, and put them under 'Assumptions'.
However, this trick will increase the total suffering in the multiverse, from the purely utilitarian perspective, by 1000 times, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage)—simply because it will pass very quickly.
I'd say pain experienced for 0.1 seconds is about 10 times less bad than pain experienced for 1 second. I don't see why we should discount it any further than that. Our particular human psychology might be better at dealing with injury if we expect it to end soon, but we can't change what the observer-moment S(t) expects to happen without changing the state of its mind. If we change the state of its mind, it's not a copy of S(t) anymore, and the argument fails.
In general, I can't see how this plan would work. As you say, you can't decrease the absolute number of suffering observer-moments, so it won't do any good from the perspective of total utilitarianism. The closest thing I can imagine is to "dilute" pain by creating similar but somewhat happier copies, if you believe in some sort of average utilitarianism that cares about identity. That seems like a strange moral theory, though.