Posts

Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Project Ideas in Biosecurity for EAs 2021-02-16T21:01:44.588Z
The Upper Limit of Value 2021-01-27T14:15:03.200Z
The Folly of "EAs Should" 2021-01-06T07:04:54.214Z
A (Very) Short History of the Collapse of Civilizations, and Why it Matters 2020-08-30T07:49:42.397Z
New Top EA Cause: Politics 2020-04-01T07:53:27.737Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z
Is Suffering Convex? 2018-10-21T11:44:48.259Z

Comments

Comment by Davidmanheim on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T12:53:43.412Z · EA · GW

I don't think that legalization will solve racial bias in policing - I think the relevant question is whether Black and Latino people are arrested at higher or lower rates now for drug-related offenses than they were before legalization.

Comment by Davidmanheim on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T12:44:20.850Z · EA · GW

This is an important question, and I certainly agree about the dangers of gain-of-function research. However, I think it's critical to evaluate whether we really have sufficient evidence for the claim that SARS-CoV-2 was not of natural origin. Given that, I'm disappointed that only one side of the picture was presented here. Yes, there are good points made, but the evidence you're presenting was collected by someone telling one side. Actively looking for counterarguments - like this one about why the article is wrong about furin cleavage sites - seems like a bare minimum for trying to honestly update your opinion.

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T11:00:25.859Z · EA · GW

As I mentioned in my other reply, I don't see as much value in responding to weak-man claims here on the forum, but agree that they can be useful more generally.

Regarding "secondary uncertainty, value of information, and similar issues," I'd be happy to point to sources that are relevant on these topics generally, especially Morgan and Henrion's "Uncertainty," which is a general introduction to some of these ideas, and my RAND dissertation chairs work on policy making under uncertainty, focused on US DOD decisions, but applicable more widely. Unfortunately, I haven't put together my ideas on this, and don't know that anyone at GPI has done so either - but I do know that they have engaged with several people at RAND who do this type of work, so it's on their agenda.

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T10:52:56.129Z · EA · GW

Agree that this is important, and it's something I've been thinking about for a while. But the last paragraph was just trying to explain (more clearly) what the paper called evaluative practical predictions. I just think about that in more decision-theoretic terms, and if I were writing about this more, I would want to formulate it that way.

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T10:49:11.615Z · EA · GW

I think we basically agree. 

And while I agree that it's sometimes useful to respond to what was actually said, rather than the best possible claims, that type of post is useful as a public response rather than for discussion of the ideas. Given that the forum is for discussion about EA and EA ideas, I'd prefer to use steelman arguments where possible to better understand the questions at hand.

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T06:55:45.270Z · EA · GW

Yes, this seems to be a problem, but it's also a problem with naive expected value thinking that prioritizes predictions without looking at adaptive planning or value of information. And I think Greaves and MacAskill don't really address these issues sufficiently in their paper - though I agree that they have considered them and are open to further refinement of their ideas.

But I don't believe it's clear that we predict things about the long term "with above chance accuracy." If we do, it's not obvious how to construct the baseline probability we would expect to outperform.

Critically, for this criticism to be correct, our predictions must not be good enough to point to interventions that have higher expected benefit than more-certain ones - and this seems very plausible. Constructing the case for whether or not it is true seems valuable, but mostly unexplored.

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T06:49:17.882Z · EA · GW

This seems to agree with his criticism - that we care about the near term only as it affects the long term, and can therefore justify ignoring even negative short-term consequences of our actions if they lead to future benefits. It argues even more strongly for abandoning otherwise-beneficial short-term interventions that have only small longer-term impacts.

Obvious examples of how this goes wrong include many economic planning projects of the 20th century, where the short term damage to communities, cities, and livelihoods was justified by incorrect claims about long term growth. 

Comment by Davidmanheim on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-06T06:40:54.919Z · EA · GW

This is true, but seems to be responding to tone rather than the substance of the argument. And given that (I think) we're interested in the substantive question rather than the social legitimacy of the criticism, I think that it is more useful to engage with the strongest version of the argument.

The actual issue that is relevant here, which isn't well identified, is that naive expected value fails in a number of ways. Some of these are legitimate criticisms, albeit not well formulated in the paper. Specifically I think that there are valuable points about secondary uncertainty, value of information, and similar issues that are ignored by Greaves and MacAskill in their sketch of the ideal decision-theoretic reasoning.

Comment by Davidmanheim on Is the current definition of EA not representative of hits-based giving? · 2021-04-28T11:16:26.770Z · EA · GW

I actually disagree with your definition. Will's definition allows for debate about what counts as evidence and careful reasoning, and about whether hits-based giving or focusing on RCTs is a better path. That ambiguity seems critical for capturing what EA is - a project still somewhat in flux, and one that allows for refinement - rather than claiming there are two specific, different things.

A concrete example* of why we should be OK with leaving things ambiguous is considering ideas like the mathematical universe hypothesis (MUH). Someone can ask: "Should the MUH be considered as a potential path towards non-causal trade with other universes?" Is that question part of EA? I think there's a case to make that the answer is yes (in my view, correctly), because it is relevant to the question of revisiting the "tentatively understanding" part of Will's definition.

*In the strangest sense of "concrete" I think I've ever used.

Comment by Davidmanheim on Defining Effective Altruism · 2021-04-27T10:24:37.003Z · EA · GW

Related to this post, and relying on it, I wrote a (more informal) post detailing why to avoid normative claims in EA, which I claim explains at least an implication of the above post, if not something Will was suggesting.

Comment by Davidmanheim on Is the current definition of EA not representative of hits-based giving? · 2021-04-27T10:19:30.942Z · EA · GW

Agreed - see my answer which notes that Will suggested a phrasing that omits "high-quality."

Comment by Davidmanheim on Is the current definition of EA not representative of hits-based giving? · 2021-04-27T10:17:41.499Z · EA · GW

First, I don't think that's the best "current" definition. More recently (two years ago), Will proposed the following:

Effective altruism is:

(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and

(ii) the use of the findings from (i) to try to improve the world.


But Will said he's "making CEA's definition a little more rigorous," rather than replacing it. I think the key reason to allow hits-based giving in both cases is the word "and" in the phrase "...evidence and careful reasoning." (Note that Will omits "high-quality" before "evidence" - I'd suspect for the reason you suggested. I would argue that for a Bayesian, high-quality evidence doesn't require an RCT, but that's not the colloquial usage, so I agree Will's phrasing is less likely to mislead.)

And to be fair to the original definition, careful reasoning is exactly the justification for expected value thinking. Specifically, careful reasoning leads to favoring 20 "hits-based" donations to high-risk-of-failure potential causes - where in expectation 10% of them end up with a cost per QALY of $5 and the others end up useless - over a single donation 20x as large to an organization we are nearly certain has a cost per QALY of $200.
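
To make the arithmetic explicit, here is a minimal sketch of that comparison (the $1,000 donation size is a hypothetical placeholder - only the ratios between the numbers matter):

    # Expected-value comparison from the example above; the donation
    # size is illustrative, since only the ratios matter.
    donation = 1_000          # dollars per hits-based donation
    n_bets = 20               # number of hits-based donations
    hit_rate = 0.10           # 10% of bets pay off in expectation
    hit_cost_per_qaly = 5     # dollars per QALY for a successful bet
    safe_cost_per_qaly = 200  # dollars per QALY for the near-certain option

    # Hits-based portfolio: only the successful bets produce any QALYs.
    qalys_hits = n_bets * hit_rate * donation / hit_cost_per_qaly

    # One "safe" donation of the same total size (20x a single bet).
    qalys_safe = n_bets * donation / safe_cost_per_qaly

    print(qalys_hits, qalys_safe)  # 400.0 vs. 100.0 expected QALYs

So even though 90% of the hits-based donations are wasted in expectation, the portfolio buys four times as many expected QALYs for the same total spend.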

Comment by Davidmanheim on Announcing "Naming What We Can"! · 2021-04-11T19:39:36.775Z · EA · GW

Seems tractable to me; how much money would you need to add an initial?

Comment by Davidmanheim on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-11T07:14:02.783Z · EA · GW

Key thought on vector control: vector control is tricky.
Mostly, we care about mosquitoes, where there is tons of work, and about mammalian carriers - where people like farming or hunting the animals and then eating them, so vector control looks quite different. There's lots of work on this, and a literature review for the forum might be a good thing for someone to write.

Comment by Davidmanheim on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-11T07:09:43.015Z · EA · GW

Strongly agree about many of these points. I think it's worth looking at our earlier post, and the paper we wrote on almost exactly this topic in 2019 - which obviously aren't focused on post-COVID-19 ideas.

On objections to trials, there is a large literature about the difficulty of assessing the impact of interventions, from Pearl's fundamental argument, here (pdf), to the entire corpus of work on generalizability and transferability in practice.

I think that further development of the suggested potential projects would be valuable - if you agree, I'd be happy to discuss how to turn them into more concrete proposals. Though in fact, many of these have already been done - a literature review (post is strongly recommended reading!) would probably find many pieces like this one that address many of your points.

Comment by Davidmanheim on The TUILS Framework for Improving Pro-Con Analysis · 2021-04-08T07:42:02.016Z · EA · GW

Strongly agree that the focus on Implementation is critical, and can easily be missed by those only superficially acquainted with I/N/T analyses. It's also good to focus on linkage - see Pearl's amusing (and correct) paper on why applying scientific knowledge to actual decisions is useless. Overall, 9/10 on content.

At the same time, I think this post would be greatly improved by editing and simplifying the arguments. (I tend to need help with the same things: structure, leaving things out, making a clear case in the introduction, etc. So I very, very often ask for editing help.) I would give the post itself an unfortunate 3/10 on clarity of presentation, especially given what I think is the usefulness of the argument.

All that said, I upvoted this, but am unsurprised, and nonetheless disappointed, to see that others have downvoted it without saying why.

Comment by Davidmanheim on Announcing "Naming What We Can"! · 2021-04-08T07:24:29.125Z · EA · GW

Alternatively, repurposing the IIDM working group to focus on improving Ian David Moss.

Comment by Davidmanheim on Announcing "Naming What We Can"! · 2021-04-08T07:18:10.500Z · EA · GW

"Ian David Moss" -> "Ian I. David Moss", to reduce the incidence of accidentally not confusing him with IIDM.

Comment by Davidmanheim on Why do so few EAs and Rationalists have children? · 2021-03-18T19:28:12.303Z · EA · GW

Mostly agree - based on the post, I was thinking of "so few" as "basically none," but I wouldn't be at all surprised if it were less than average for the comparable group.

And I don't think SSC is anything like a perfect proxy - I assume it somewhat over-represents people who are less involved in rationality/EA and more likely to have kids - but it's the closest proxy I could easily find.

Comment by Davidmanheim on Why do EAs have children? · 2021-03-15T22:39:30.432Z · EA · GW

As a parent with older kids, I'll point out that the demands differ, but (when there isn't COVID) you get back to having "work time" when kids are away, without the sleep deprivation that happens in the first year. (Mostly. There are still occasional night wakings, but these are sporadic and get fairly rare, instead of being chronic and making you horribly sleep deprived overall.) And yes, kids will dominate your free time while they are young, but in an enjoyable way. (Mostly enjoyable. How much depends on the kids, and the age.)

And as they get older, the problems become much more like ones that you'd talk to a (younger, less mature) friend about, rather than being physical issues. Also, around the same time, they start to get more interesting to talk to, and you can teach them cool things, which is awesome. (And yes, I'm sure this changes again once they get to be teenagers. But I'm taking things a year at a time.)

Comment by Davidmanheim on Why do so few EAs and Rationalists have children? · 2021-03-15T13:12:32.339Z · EA · GW

Maybe you're wrong - a two-part answer.

First, rich westerners have kids late, and EA is young - people having a child in their mid-to-late 30s isn't uncommon.

Second, the EAs I know with kids don't necessarily talk about them, especially in EA circles. So your sample, which implies there are very few EAs with kids, is probably skewed. (Edit to add: 6% of SSC survey respondents have more than 2 kids, another 10% have 2, and a further 7.5% have 1. And the average age of respondents is 33 - per the survey, more than half of people over 40 have a kid, as do 10% of those in their 30s.)

Comment by Davidmanheim on Why do EAs have children? · 2021-03-15T09:29:24.652Z · EA · GW

Group Rationality and Long-term Investment

Children are important for the future. But who should have them?

First, I imagine a world filled with people like myself. If they have children, these children will be raised by people like myself, who are mostly good and do a decent job. Alternatively, if most or all decided not to have kids, humanity would be far poorer (or extinct) in the future. In game-theoretic terms, the cooperate action is to have children. 

Moreover, imagine a world with two classes of people, which I'll call rational altruists and ineffective egoists. In this world, the ineffective egoists have children, due to different values, neglect of the long term, or even carelessness. Those children are unlikely to embrace rational-altruist values. Because of this, the rational altruists face a choice about whether to have children to raise with their values - and in the long term, investing in children leads to a better world.

Of course, given the second argument, the reasonable alternative is to propagate values via education and similar means. This is perhaps more limited, since education has a limited scope to influence children, but also plausibly more scalable and effective. However, if the world begins to resemble the first proposed world, this counterargument no longer applies. (This is one of several answers.)

Comment by Davidmanheim on Why do EAs have children? · 2021-03-15T09:19:34.691Z · EA · GW

Partial Selfishness / Sustainability

I love my kids, they bring me joy, and I am partially selfish. Even though I'm happy to give money and work on EA causes, it's not my entire life. Kids are part of my "being selfish" budget.

More than that, I think it's a reasonable long-term decision. Just as I feel it would be short-sighted and irresponsible to spend 100% of my disposable income on charitable giving, since it would be unsustainable, I feel that abstaining from things that make my life joyful would be a bad decision. At the very least, I am nearly certain that I would have regretted not having kids, and I think that I would have been upset with my past self's values for making that decision.
(This is one of several answers.)

Comment by Davidmanheim on Why do EAs have children? · 2021-03-15T09:14:26.918Z · EA · GW

Precommitment to having (more) Children.

Before I was involved in EA, I made decisions. These decisions include commitments, as well as emotional investments and building personal relationships with my wife and family. Given that my emotional and personal commitments cannot be changed without consequences to myself and to others, future decisions must be made in that context. Further, despite my somewhat changed personal values, from either a rule-utilitarian or contractualist perspective, as well as from most non-consequentialist deontological perspectives, it would be morally inexcusable to change certain of the plans to which I am committed. (This is one of several answers.)

Comment by Davidmanheim on Project Ideas in Biosecurity for EAs · 2021-02-28T17:28:20.717Z · EA · GW

Thanks. Having looked through the paper briefly, it seems like it pays no attention to any hazards or misuse, and doesn't actually address the challenges of developing or supporting infrastructure for analysis, especially the things they suggest which relate to PII.

Comment by Davidmanheim on Project Ideas in Biosecurity for EAs · 2021-02-28T17:23:29.209Z · EA · GW

I'd be interested in more specific private feedback on which projects you think would not be useful, or ideas for other things you think people with those skill sets could do that would be more useful. Cross-checking your intuitions with others would be good - for each of these projects, someone else working actively in biosecurity thought the project would be useful. And I think that it's easy to have a narrow view of what is useful - I wouldn't have thought people would want many of these answers until they explained that they did, and often why.

That said, if someone is interested in working directly on things that are substantially important, and has a track record for doing so, there are people who want to hire them already, and they have plenty of opportunity for collaboration with biosecurity EAs. I didn't, and don't, think that we need to provide that set of people lists of things to work on - and I would agree that this isn't the set of highest priority tasks, many of which require funding and support, or have other reasons that people cannot pick them up as side projects. This list is geared towards things that people with a diverse skill set can do as an initial step, which active biosecurity researchers have said would show them someone is capable of doing useful research, while avoiding information hazards.

Comment by Davidmanheim on Be Specific About Your Career · 2021-02-24T12:26:04.497Z · EA · GW

Strongly endorsed, and I would go even further; a huge amount of job satisfaction is about what you do every hour, not what you do every week. If you like or dislike writing, or like meeting people, or like reading technical papers, pay attention to that - because spending lots of time doing something you dislike is painful even if you like the career in general. And this will guide a lot of more specific decisions - not just "should I work in Biorisk," but "should I take this specific class" or "will I enjoy this specific job."

Comment by Davidmanheim on The Outside Critics of Effective Altruism · 2021-02-22T16:54:31.084Z · EA · GW

But you can still have a partial ordering for incommensurable goods - if the natural world is incommensurable with money, you can have states that are strictly worse/better, as well as incommensurable ones. And that still isn't neutral - it's better and worse on different dimensions.
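
For concreteness, here is a minimal sketch of such a partial ordering - a product ("Pareto") order over two dimensions, with purely illustrative values:

    # A product partial order over two incommensurable goods,
    # e.g. (money, environmental quality); values are illustrative.
    def weakly_dominates(a, b):
        # a is at least as good as b on every dimension.
        return all(x >= y for x, y in zip(a, b))

    s1, s2 = (10, 5), (3, 2)   # s1 is better on both dimensions
    s3, s4 = (10, 2), (3, 5)   # each is better on a different dimension

    print(weakly_dominates(s1, s2))   # True: s1 is strictly better
    print(weakly_dominates(s3, s4))   # False...
    print(weakly_dominates(s4, s3))   # ...and False: incommensurable

This yields strictly better/worse pairs alongside incomparable ones, without collapsing everything onto a single scale - so incommensurability doesn't imply neutrality.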

Comment by Davidmanheim on The Outside Critics of Effective Altruism · 2021-02-22T16:45:19.030Z · EA · GW

Running across this post quite a few years later, but our paper on the upper limit of value addresses this incommensurability a bit, and cites Chang's "Incomparability and practical reason," which (we feel) addresses this fairly completely.

Secondarily, Monbiot's claim isn't really incommensurability; it's that those with power (mostly economic power) value things he cares about too little, that the environment, which is a public good, is underprotected by markets, and that humanity isn't cautious enough about environmental risks. All reasonable points, but not really incommensurability.

Comment by Davidmanheim on Project Ideas in Biosecurity for EAs · 2021-02-17T10:39:29.503Z · EA · GW

Thanks for this - I have added the EA Concepts links to the post, and linked to this comment for more information.

Comment by Davidmanheim on Project Ideas in Biosecurity for EAs · 2021-02-17T10:36:01.705Z · EA · GW

For earlier stage discussions, I agree that some people are interested in providing general career guidance, and perhaps that means suggesting specific projects from this list - but that's different than requesting help getting started on a specific project. 

A key part of doing useful research is thinking about a question and figuring out how to investigate it, and while I and others could flesh these out into project outlines ourselves, a large part of the goal in posting this is to let others show their capability for doing so themselves. Moreover, people who aren't able to at least start such a project are unlikely to be successful as researchers - and doing the initial steps is intended to be a gauge of both commitment and ability.

Providing guidance on how to start and work on these projects requires a significant investment of time - and I don't think it is fair to bother others with volunteers who haven't shown they are interested and at least somewhat capable. I and others are happy to provide guidance if researchers interested in these problems are stuck, but some work beyond "I think I'd like to do this" is a prerequisite for getting feedback.

With that said, I am happy to be a contact point for coordinating any work people are interested in doing, and I can put you in touch with others who are interested in the specific projects.

Comment by Davidmanheim on Introduction to Longtermism · 2021-02-04T14:09:21.233Z · EA · GW

This is great. I'd note, regarding your discussion of risk aversion and recklessness, that our (very) recent paper disputes (and I would claim refutes) the claim that you can justify putting infinities in the calculation.

See the forum post on the topic here: https://forum.effectivealtruism.org/posts/m65rH3D3pfJzsBMfW/the-upper-limit-of-value

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-28T08:26:05.119Z · EA · GW

As someone in their late 30s with kids often identified as one of the "older" EAs, I strongly agree with this.

And to quote Monty Python: "I'm thirty seven - I'm not old!"

Comment by Davidmanheim on The Upper Limit of Value · 2021-01-28T07:54:05.785Z · EA · GW

Thanks. I agree that we should have non-infinitesimal credence that physics is wrong, but to change the conclusion, we would need to "insist that modern physics is incorrect in very specific ways." Given the strength of evidence about the existence of many of the limits, regardless of their actual form or value, that is a higher bar. I also advise looking closely at the discussion of the "Pessimistic Meta-induction," and why we think that it's reasonable to be at least incredibly confident that these limits exist.

That doesn't guarantee their existence. But after accepting a non-zero credence in those specific types of incorrect theory, we need to pin our hopes for infinite value on those specific occurrences; we would need to maximize expected value conditional on that very small probability in order to find infinite value, and neglect the very large but finite value we are nearly certain exists in the physical universe. That seems difficult to me.

Comment by Davidmanheim on The Upper Limit of Value · 2021-01-28T07:46:09.340Z · EA · GW

Thanks, and thanks for posting this both places. I've responded on the lesswrong post, and I'm going to try to keep only one thread going, given my finite capacity to track things :)

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-20T18:14:57.009Z · EA · GW

Strongly endorsed.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-17T11:07:43.213Z · EA · GW

I want to point out that there's something unfair that you did here. You pointed out that AI safety is more important, and that there were two doctors that left medical practice. Ryan does AI safety now, but Greg does Biosecurity, and frankly, the fact he has an MD is fairly important for his ability to interact with policymakers in the UK.  So one of your examples is at least very weak, if not evidence for the opposite of what you claimed.

"A reliable way to actually do a lot of good as a doctor" doesn't just mean not practicing; many doctors are in research, or policy, making a far greater difference - and their background in clinical medicine can be anywhere from a useful credential to being critical to their work.

Comment by Davidmanheim on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-11T15:52:14.191Z · EA · GW

I agree that we agree ;)

I particularly endorse the claim about tractability and effectiveness of technical changes to internal nuclear weapon security and contingency planning, both with moderate confidence.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-10T11:08:19.632Z · EA · GW

It's not contradictory, but it seems like your comment goes against his post's insistence on nuance. Will was being careful about this sort of absolutism, and I think at least part of the reason for doing so - not alienating those who differ on specifics, and treating our conclusions as tentative - is the point I am highlighting. Perhaps I'm reading his words too closely, but that's the reason I wrote the introduction the way I did; I was making the point that his nuance is instructive.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-10T11:01:31.747Z · EA · GW

I think it would be good to be clearer in our communication and say that we don't consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but there might be other good reasons for you to donate to them.


I made a similar claim here, regarding carbon offsets:
https://forum.effectivealtruism.org/posts/brTXG5pS3JgTatP7i/carbon-offsets-as-an-non-altruistic-expense

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-10T10:58:08.579Z · EA · GW

At least for people I know it seems to have been really good advice, at least the doctor part.


It seems like this is almost certain to be true given post-hoc selection bias, regardless of whether or not the advice is good - it doesn't differentiate between worlds where it is alienating or bad advice and some people leave the community, and worlds where it is good.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:53:51.871Z · EA · GW

Strongly agree substantively about the adjacency of your point, and about the desire for a well-rounded world. I think it's a different thread of thought than mine, but it is worth being clear about as well. And see my reply to Jacob_J elsewhere in the comments, here, for how I think that can work even for individuals.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:50:33.026Z · EA · GW

I think that negative claims are often more polarizing than positive ones, but I agree that there is a reason to advocate for a large movement that applies science and reasoning to do some good. I just think it already exists, albeit in a more dispersed form than a single "EA-lite." (It's what almost every large foundation already does, for example.) 

I do think that there is a clear need for an "EA-Heavy," i.e. core EA, in which we emphasize the "most" in the phrase "do the most good." My point here is that I think that this core group should be more willing to allow for diversity of action and approach. And in fact, I think the core of EA, the central thinkers and planners, people at CEA, Givewell, Oxford, etc. already advocate this. I just don't think the message has been given as clearly as possible to everyone else.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:45:17.260Z · EA · GW

If you're pledging 10% of your income to EA causes, none of that money should go to the local opera house or your kid's private school. (And if you instead pledge 50%, or 5%, the same is true of the other 50%, or 95%.)

What you do with the remainder of your money is a separate question - and it has moral implications, but that's a different discussion. I've said this elsewhere, but think it's worth repeating:
Most supporters of EA don't tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go to the opera, or not to support local organizations they are involved with or wish to support, including the arts. The consensus simply seems to be that people shouldn't confuse supporting a local museum with effective altruism - that is, with attempting to effectively maximize global good.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-08T10:38:58.708Z · EA · GW

I think I agree with you on the substantive points, and didn't think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.

Comment by Davidmanheim on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-08T09:15:18.092Z · EA · GW

I certainly agree that this is worth thinking about, but I also think it's worth suggesting that the analysis here is a bit myopic. Of course, it seems particularly relevant because many EAs are in the US. And it seems inconceivable that the world will change drastically in just this one particular way, when far larger plausible changes are on the horizon. (Though as I've noted in various conversations for a while, Americans might want to personally consider their options for where else they might want to live if the US decline continues.)

And if the worst case happens, we're still likely looking at a decades-long process, during which most of the worst effects are mitigated by other countries taking up the slack, and pushing for the US's decline to be minimally disruptive to the world. Nations and empires have collapsed before, and in many cases it was bad, even very bad. (Though in other cases, like the dissolution of the British empire, there were compensating changes, like the rise of the US and the far more egalitarian and peaceful post-WWII order.) So preventing a bad collapse is plausibly as important a cause as preventing another pandemic like COVID-19 - albeit far less certain to occur, and far less certain to be bad for the world. And it's not of the same order of magnitude as many other longtermist causes, since it's highly likely that, conditional on the unlikely case of severe collapse in the US, humanity will be fine.

All that said, again, I don't disagree with the analysis overall - this is worth taking seriously.

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-06T16:42:01.475Z · EA · GW

Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall that there was a second link I was thinking of, which I can no longer find, where Peter made a similar point. If not, additional apologies!)

Comment by Davidmanheim on The Folly of "EAs Should" · 2021-01-06T16:39:46.075Z · EA · GW

I am not suggesting avoiding the word "should" generally, as I said in the post. I thought it was clear that I am criticizing something I keep seeing: overly narrowing the ideal of what is and is not EA, and unreasonably narrowing what is normatively acceptable within the movement. That is what is harmful. I think it's clear that this can be done without claiming that everything is EA, or refraining from making normative statements altogether.

Regarding criticising Givewell's reliance on RCTs, I think there is room for a diversity of opinion. It's certainly reasonable to claim that as a matter of decision analysis, non-RCT evidence should be considered, and that risk-neutrality and unbiased decision making require treating less convincing evidence as valid, if weaker. (I'm certainly of that opinion.)

On the other hand, there is room for some effective altruists who prefer to be somewhat risk-averse to correctly view RCTs as more certain evidence than most other forms, and prefer interventions with clear evidence of that sort. So instead of saying that GiveWell should not rely as heavily on RCTs, or that EA organizations should do other things, I think we can, and should, make the case that there is an alternative approach which treats RCTs as only a single type of evidence, and that the views of GiveWell and similar EA orgs are not the only valid way to approach effective giving. (And I think that this view is at least understood, and partly shared by many EA organizations and individuals, including many at GiveWell.)

Comment by Davidmanheim on The Fermi Paradox has not been dissolved · 2020-12-20T07:22:15.111Z · EA · GW

To respond to your substantive point, intergalactic travel is possible, but slow - on the order of tens of millions of years at the very fastest.  And the distribution of probable civilizations is tilted towards late in galactic evolution because of the need for heavier elements, so it's unclear that early civilizations are possible, or at least as likely.

And somewhat similar to your point, see my tweet from a couple years back:

"We don't see time travelers. This means either time travel is impossible, or humanity doesn't survive. 

Evidence of the theoretical plausibility of time travel is therefore strong evidence that we will be extinct in the nearer term future."

Comment by Davidmanheim on The Fermi Paradox has not been dissolved · 2020-12-13T08:50:33.001Z · EA · GW

I think the post is well reasoned and useful in pointing out a few shortcomings in the paper, but it fails to make the point you're hoping for.

First and most importantly, with your preferred parameter choices, the 6% chance of no life in the Milky Way still almost certainly implies that the lack of alien signals is due to the fact that they are simply too far away to have been seen; the density of intelligence implied by that model is still very low. That means even your conclusion dissolves the initial "paradox." At most, it leaves the case for a future great filter, based on the evidence of not seeing alien signals, far weaker than was previously argued.
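
For intuition, here is a minimal Monte Carlo sketch of the paper's approach; the log-uniform ranges below are crude illustrative stand-ins, not the distributions Sandberg, Drexler and Ord actually used:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def loguniform(lo, hi):
        # Sample uniformly in log space, reflecting order-of-magnitude uncertainty.
        return np.exp(rng.uniform(np.log(lo), np.log(hi), n))

    # Drake-equation factors under (illustrative) wide log-uniform priors.
    N = (loguniform(1, 100)        # star formation rate (stars/year)
         * loguniform(0.1, 1)      # fraction of stars with planets
         * loguniform(0.1, 10)     # habitable planets per star with planets
         * loguniform(1e-30, 1)    # fraction where life emerges (huge uncertainty)
         * loguniform(1e-3, 1)     # fraction developing intelligence
         * loguniform(1e-2, 1)     # fraction becoming detectable
         * loguniform(1e2, 1e10))  # years a civilization stays detectable

    print("P(alone in the galaxy):", (N < 1).mean())
    print("Median civilizations per galaxy:", np.median(N))

Under priors like these, much of the probability mass sits at N far below 1, and even samples with N at or above 1 typically imply a density too sparse for signals to have reached us - which is the sense in which the "paradox" dissolves rather than pointing to a filter.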

Second, a number of your arguments seem to say that we could have counterfactual evidence, and should use that as evidence. For example, "as far as we know it is equally possible that we could have found ourselves on a 9 billion year old Earth..." (we could not, given the habitability window for life on Earth), or "Presumably life could evolve multiple times on the same planet..." (true, but not relevant for the model, since once life has emerged a planet passes this step - and we see no evidence of it happening on Earth). Even if these were correct, they should be reflected in the prior or model structure, as Robin Hanson suggests ("try-once steps").

In any case, I think that a closer review of some of the data points is worthwhile, and the post was useful.