Posts

Noticing the skulls, longtermism edition 2021-10-05T07:08:28.304Z
Time-average versus Individual-average Total Utilitarianism 2021-09-26T07:05:42.403Z
Towards a Weaker Longtermism 2021-08-08T08:31:03.727Z
Maybe Antivirals aren’t a Useful Priority for Pandemics? 2021-06-20T10:04:25.592Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Project Ideas in Biosecurity for EAs 2021-02-16T21:01:44.588Z
The Upper Limit of Value 2021-01-27T14:15:03.200Z
The Folly of "EAs Should" 2021-01-06T07:04:54.214Z
A (Very) Short History of the Collapse of Civilizations, and Why it Matters 2020-08-30T07:49:42.397Z
New Top EA Cause: Politics 2020-04-01T07:53:27.737Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z
Carbon Offsets as an Non-Altruistic Expense 2019-12-03T11:38:21.223Z
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z
Is Suffering Convex? 2018-10-21T11:44:48.259Z

Comments

Comment by Davidmanheim on Feedback on Meta Policy Research Questions to Help Improve the X/GCR Field’s Policy Engagement · 2021-10-21T10:32:18.150Z · EA · GW

Sounds great - and my guess is that lots of the most valuable work will be in "how can we use technique X for EA" for a variety of specific tools, rather than developing new methods, and will require deep dives into specifics.

Comment by Davidmanheim on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T06:44:33.029Z · EA · GW

(Strongly endorse you giving further critical feedback - this is a new area, and the more the other side is steelmanned, the better the decision that can be reached about whether and how to prioritize it. That said, I don't think this criticism is particularly good, per Ozzie's response.)

Comment by Davidmanheim on Lessons learned running the Survey on AI existential risk scenarios · 2021-10-18T13:37:32.144Z · EA · GW

I'd also suggest the Sage series on research methods as a good resource for non-experts who want at least a basic level of understanding of what to do. In this case, Fowler's "Survey Research Methods" would have provided most of these insights without trial and error - it's around 150 pages, but it's not heavy reading.

Comment by Davidmanheim on Effective Altruism, Before the Memes Started · 2021-10-17T09:58:32.155Z · EA · GW

I mostly agree, but the revision of the Longtermism white paper from the original "work in progress" version seems like exactly the type of response to some of the early claims you're requesting - see the discussion on fanaticism. And given how recent all of this is, further responses could still be forthcoming, as these types of conversations take time.

Comment by Davidmanheim on Effective Altruism, Before the Memes Started · 2021-10-16T18:13:58.129Z · EA · GW

See my response about the specific reason I think Will and others have not responded - and why I think they are right not to do so directly.

(And I'm still very much on speaking terms with Phil, and understand why he feels aggrieved, even though I don't agree with him either about his current approach, or the substantive criticisms, as I noted in the piece you linked.)

Comment by Davidmanheim on Effective Altruism, Before the Memes Started · 2021-10-16T18:11:17.095Z · EA · GW

This seems like an important criticism and warning - but I think that the response to the Torres piece has been dismissive for reasons largely unrelated to the discussion here. I've spoken to Phil recently, and he feels like he's been reasonable in personally attacking several people in EA, both because of how they treated him (1), and because of their supposedly dangerous / "genocidal" ideologies - and he isn't likely to change his mind. That seems to be why most of the people whose positions are being attacked aren't responding themselves - not only were they personally attacked, but it seems clear that substantive engagement with the specific criticisms is no longer a way to effectively respond or discuss this with Phil.

Otherwise, I think EA still does have a record of being very willing to engage in discussion, and I agree that we need to be zealous in protecting our willingness to do so - so thanks for this post!

1) I won't comment on what happened, other than to say that most of what is being complained about seems like typical drama where it's easy to blame anyone you'd like depending on the narrative you construct. 

Comment by Davidmanheim on Decomposing Biological Risks: Harm, Potential, and Strategies · 2021-10-14T12:14:51.909Z · EA · GW

Thanks Simon - this is great. I do want to add a few caveats for how and why the "One Country" idea might not be the best approach.

The first reason not to pursue the one-country approach from a policy perspective is that non-existential catastrophes seem likely, and investments in disease detection and prevention are worthwhile from an immediate policy perspective. Given that, it seems ideal to invest everywhere and have existential threat detection be a benefit that is provided as a consequence of more general safety from biological threats. There are also returns to scale for investments, and capitalizing on them may require a global approach.

Second, a key question for whether the proposed "one country" approach is more effective than other approaches is whether we think early detection is more important than post-detection response, and what the dynamics of the spread are. As we saw with COVID-19, once a disease is spreading widely, stopping it is very, very difficult. The earlier the response starts, the more likely it is that a disease can be stopped before spreading nearly universally. The post-detection response, however, can vary significantly between countries, and those most able to detect the threat weren't the same as those best able to suppress cases - and for this and related reasons, putting all our eggs in one basket, so to speak, seems like a very dangerous approach.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-11T17:49:38.066Z · EA · GW

Your definition of problematic injustice seems far too narrow, and I explicitly didn't refer to race in the previous post. The example I gave was that the most disadvantaged people are in the present, and are further injured - not that non-white people (which under current definitions will describe approximately all of humanity in another half dozen generations) will be worse off.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-11T17:45:31.206Z · EA · GW

Yes. The ways that various movements have gone wrong certainly differ, and despite the criticism related to race, which I do think is worth addressing, I'm not primarily worried that longtermists will end up repeating specific failure modes - different movements fail differently.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-07T08:20:44.743Z · EA · GW

I think Scott's post was pointing out exactly that we aren't ignoring historical precedent, and I think the vast majority of EAs think it would be a mistake to start doing so now.

My point was that we're aware of the skulls, and cautious. Your response seems to be "who cares about the skulls, that was the past. I'm sure we can do better now." And coming from someone who is involved in EA, hearing that view from people interested in changing the world really, really worries me - because we have lots of evidence from studies of organizational decision making and policy that ignoring what went wrong in the past is a way to fail now and in the future.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-07T08:15:29.383Z · EA · GW

Mostly endorsed. 

Or perhaps more simply, if a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-07T08:13:07.340Z · EA · GW

I'm pointing out that you're privileging your views over those of others - not "some philosophers," but "most people."

And unless you're assuming a fairly strong version of moral realism, this isn't a factual question, it's a values question - so it's strange to me to think that we should get to assume we're correct despite being a small minority, without at least a far stronger argument that most people would agree with longtermism if properly presented - and I think Stefan Schubert's recent work implies that is not at all clear.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-07T08:08:23.156Z · EA · GW

Is that true?

Many current individuals will be worse off when resources go toward saving future lives rather than toward near-term utilitarian goals like poverty reduction. And if, as most of us expect, the world's wealth will continue to grow, effectively all future people who are helped by existential risk reduction are not what we'd now consider poor. You can defend this via the utilitarian calculus across all people, but that doesn't change the distributive impact between groups.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-07T08:00:13.197Z · EA · GW

Right - fixed. Whoops!

Comment by Davidmanheim on Feedback on Meta Policy Research Questions to Help Improve the X/GCR Field’s Policy Engagement · 2021-10-06T10:04:32.758Z · EA · GW

I think this is a good idea, but it would benefit greatly from a much narrower scope, and from finding which answers are already known before brainstorming what to investigate. Given that, I think you'd benefit from some of the basic works on policy analysis, rather than policy engagement, to see what is already understood. I'll specifically point to Bardach's A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving as a good place to start, followed by Weimer and Vining's book.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-06T09:47:14.373Z · EA · GW

I might have been unclear. As I said initially, I claim it's good to publicly address concerns about "the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents", and this is what the LARB piece actually discussed. And I think that slightly more investigation into the issue should have convinced the author that any concerns about continued embrace of the eugenic ideas, or ignorance of the issues, were misplaced. I initially pointed out that specific claims about longtermism being similar to eugenics are "farcical." More generally, I tried to point out in this post that many of the attacks are unserious or uninformed - as Scott pointed out in his essay, which this one quoted and applied to this situation, the criticisms aren't new.

More serious attempts at dialog, like some of the criticisms in the LARB piece, are not bad-faith or unreasonable claims, even if they fail to be original. And I agree that "we cannot claim to take existential risk seriously — and meaningfully confront the grave threats to the future of human and nonhuman life on this planet — if we do not also confront the fact that our ideas about human extinction, including how human extinction might be prevented, have a dark history." But I also think it's obvious that others working on longtermism agree, so the criticism seems to be at best a weak-man argument. Unfortunately, I think we'll need to wait another year or so for Will's new book, which I understand has a far more complete discussion of this, much of which was written before either of these pieces were published.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T16:38:28.900Z · EA · GW

My point is that many people who disagree with the longtermist ethical viewpoint have also spent years thinking about the issues. Dismissing the majority of philosophers, and the vast, vast majority of people's views, as not plausible is itself one of the problems I tried to highlight in the original post when I said that a small group talking about how to fix everything should raise flags.

And my point about racism is that criticism of choices and priorities which have a potential to perpetuate existing structural disadvantages and inequity is not the same as calling someone racist.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T16:29:20.390Z · EA · GW

I agree that there are some things in Bio and AI that are applied - though the vast majority of the work in both areas is still fairly far from application. But my point, which granted your initial point, was responding to "I don't think it counterfactually harms the global poor."

Comment by Davidmanheim on Robin Hanson on the Long Reflection · 2021-10-05T16:25:27.013Z · EA · GW

Yes, you've mentioned your skepticism of the efficacy of a long reflection, but conditional on it successfully reducing bad outcomes, you agree with the ordering?

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T16:19:47.447Z · EA · GW

Thanks - I largely agree, and am similarly concerned about the potential for such impacts, as was discussed in the thread with John Halstead.

As an aside, I think Harper's LARB article was being generous in calling Phil's Current Affairs article "rather hyperbolic," and think its tone and substance are an unfortunate distraction from various more reasonable criticisms Phil himself has suggested in the past.

Comment by Davidmanheim on EA Forum Creative Writing Contest: Submission thread for work first published elsewhere · 2021-10-05T11:35:16.098Z · EA · GW

Found it - the quote was slightly off: https://twitter.com/ASmallFiction/status/901252178588778498

"It was a dirty job, he thought, but somebody had to do it. 
As he walked away, he wondered who that somebody might be."

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T10:54:21.102Z · EA · GW

As I said above in a different comment thread, it seems clear we're talking past one another.

Yes, being racist would be racist, and no, that's not the criticism. You said that "there are some popular views on which we would discount or ignore future people. I just don't think that they are plausible." And I think part of the issue is exactly this dismissiveness. As a close analogy, imagine someone said "there are some popular views where AI could be a risk to humans. I just don't think that these are plausible," and went on to spend money building ASI instead of engaging with the potential that they are wrong, or taking any action to investigate or hedge that possibility.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T10:41:57.806Z · EA · GW

Note: I did not call anyone racist, other than to note that there are groups which hold some views that themselves embrace that label - but on review, you keep saying that this is about calling someone racist, whereas I'm talking about unequal impacts and systemic impacts of choices - and I think this is a serious confusion which is hampering our conversation.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T10:37:47.418Z · EA · GW

I disagree about at least some Biorisk, as the allocation of scarce resources in public health has distributive effects, and some work on pandemic preparedness has reduced focus on near-term vaccination campaigns. I suspect the same is true, to a lesser extent, in pushing people who might otherwise work on near-term ML bias to work on longer-term concerns. But as this relates to your second point, and the point itself, I agree completely, and don't think it's reasonable to say it's blameworthy or morally unacceptable, though as I argued, I think we should worry about the impacts.
 
But the last point confuses me. Even setting aside person-affecting views, shifting efforts to help John can (by omission, at the very least) injure Sam. "The global poor" isn't a uniform pool, and helping those who are part of "the global poor" in a century by, say, taxing someone now is a counterfactual harm to the person now. If you aggregate the way you prefer, this problem goes away, but there are certainly ethical views, even within utilitarianism, where this isn't acceptable - for example, if the future benefit is discounted so heavily that it's outweighed by the present harm.

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T10:05:07.591Z · EA · GW

It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.

But there is at least one concrete thing that has happened - many people in effective altruism who previously worked on and donated to near-term causes in global health and third world poverty have shifted focus away from those issues. And I don't disagree with that choice, but if that isn't an impact of longtermism which counterfactually harms the global poor, what do you think would qualify?

Comment by Davidmanheim on Noticing the skulls, longtermism edition · 2021-10-05T09:46:44.362Z · EA · GW

First, I agree that racism isn't the most worrying criticism of longtermism - though it is the one that has been highlighted recently. But it is a valid criticism of at least some longtermist ideas, and I think we should take this seriously. Sean's argument is one sketch of a real problem, though I think there is a broader point about racism in existential risk reduction, which I make below. But there is also more to longtermism than preventing extinction risks, which is what you defended. As the LARB article notes, transhumanism borders on some very worrying ideas, and there is non-trivial overlap with the ideas of communities which emphatically embrace racism. (And for that reason the transhumanist community has worked hard to distance itself from those ideas.)

And even within X-risk reduction, it's not the case that attempts to reduce existential risks are obviously on their own a valid excuse for behavior that disadvantages others. For example, a certainty of faster western growth that disadvantages the world's poor in exchange for a small reduction in risk of human extinction a century from now is a tradeoff that disadvantages others, albeit probably one I would make, were it up to me. But essential to the criticism is that I shouldn't decide for them. And if utilitarian views about saving the future are contrary to the views of most of humanity, longtermists should be very wary of unilateralism, or at least think very, very carefully before deciding to ignore others' preferences to "help" them.

Comment by Davidmanheim on Robin Hanson on the Long Reflection · 2021-10-05T07:22:42.465Z · EA · GW

As you laid out in the post, your biggest concern about the long reflection is the likely outcome of a pause - is that roughly correct?

In other words, I understand your preferences are roughly: 
Extinction < Eternal Long Reflection < Unconstrained Age of Em < Century-long reflection followed by Constrained Age of Em < No reflection + Constrained Age of Em

(As an aside, I would assume that without changing the preference order, we could replace unconstrained versus constrained Age of Em with, say,  indefinite robust totalitarianism versus "traditional" transhumanist future.)

Comment by Davidmanheim on Issues with Giving Multiplier · 2021-10-03T07:51:42.706Z · EA · GW

I don't think that most donors who are looking at getting matching donations are particularly interested in thinking about / worried about counterfactual donations - but if they are, and bother to do minimal reading, the situation is very clear. 
(Note that they are doing counterfactual donation direction, since otherwise the money will not necessarily go to the organization they picked, which is what, in my experience, most non-EA people think they are doing when getting matched donations.)

Comment by Davidmanheim on Issues with Giving Multiplier · 2021-09-30T12:28:47.334Z · EA · GW

I think these criticisms are all correct, but also very weak - weak enough to convince me that we should make sure the organization is sufficiently funded to meet demand from new donors. The three key reasons are:

1) It doesn't seem that the organization is materially misleading people, and at least is clear about what is happening, and isn't allocating funds in ways that they would find upsetting.

2) It does create counterfactual donations to EA charities, albeit from the donors, not from the match.

3) This is a marginally effective use of EA dollars, especially if you are relatively cause neutral between the effective charities they support. That is, the donor still has given the money effectively, so (absent the below point about costs) this can't be any less effective than just giving the money to the organizations which end up receiving it. This is even more clear if the users who start with Giving Multiplier end up getting more involved in EA giving, which seems very likely, but unless overhead costs are high, it remains true without that. I'm unsure how much they have raised, how much has been donated, and what their costs have been, since they have not yet reported this. (I'd guesstimate the 1,000 donors are giving an average of over $50, and on average probably give around half to the EA org; as long as operating costs are a relatively small fraction of $25,000, it seems likely that it's on net effective.)
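As a back-of-envelope sketch of that guesstimate (every number here is my assumption, not a reported figure):

```python
# Rough guesstimate only - none of these figures have been reported by Giving Multiplier.
n_donors = 1_000        # assumed number of donors
avg_donation = 50       # assumed average donation per donor, in USD
ea_share = 0.5          # assumed fraction of each donation directed to the EA charity

ea_dollars = n_donors * avg_donation * ea_share  # ~$25,000 to EA charities

# If operating costs are a small fraction of that, the net effect looks positive.
for costs in (2_500, 5_000, 10_000):
    print(f"operating costs ${costs:,}: net EA dollars ~${ea_dollars - costs:,.0f}")
```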

Comment by Davidmanheim on Cultured meat: A comparison of techno-economic analyses · 2021-09-27T06:24:55.270Z · EA · GW

Minor note: "...a factor of 4 increase in cell density means that the percentage of bioreactor volume that is now cells would increase from 17.5% to 70%, very close to sphere packing density limits."

Sphere packing density limits shouldn't be relevant for cells, which are not rigid - they can deform to squish together, unlike in the sphere packing problem. (The problem of getting nutrients distributed is a different issue, of course.)
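For reference, a quick numeric comparison against the standard rigid-sphere packing figures (which is exactly the assumption being disputed here):

```python
import math

# Reference densities for packing rigid, equal-sized spheres:
fcc_limit = math.pi / (3 * math.sqrt(2))  # ~74.05%, the densest possible (FCC/HCP) packing
random_close_packing = 0.64               # ~64%, typical for randomly packed rigid spheres

claimed_fraction = 0.175 * 4              # 17.5% * 4 = 70%, the figure quoted above

print(f"claimed cell volume fraction: {claimed_fraction:.0%}")
print(f"densest sphere packing:       {fcc_limit:.1%}")
print(f"random close packing:         {random_close_packing:.0%}")
# 70% sits above random close packing and just below the FCC limit - but these
# limits only bind if cells behave like rigid spheres, which they don't.
```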

Comment by Davidmanheim on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T19:31:16.135Z · EA · GW

Thanks - I meant to point out that it wasn't definitively single-shot, unlike actual, you know, destruction.

Comment by Davidmanheim on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T10:40:52.739Z · EA · GW

The question is whether precommitment would actually change behavior. In this case, anyone shutting down either site is effectively playing nihilist, and doesn't care, so it shouldn't.

In fact, if it does anything, it would be destabilizing - if "they" commit to pushing the button if "we" do, they are saying they aren't committed to minimizing damage overall, which should make us question whether we're actually on the same side. (And this is a large part of why MAD only works if you are both selfish and scared of losing.)

Comment by Davidmanheim on What should "counterfactual donation" mean? · 2021-09-26T08:22:38.399Z · EA · GW

I'm probably a bit unusual in this regard, but I have different budgets for different things, so a counterfactual donation means spending $50 from my personal luxuries budget on a donation to that charity, which is in addition to the 10% of my net income that I donate otherwise. That keeps everything simple.

Comment by Davidmanheim on Why I am probably not a longtermist · 2021-09-26T06:56:17.757Z · EA · GW

"If there have been any intentional impacts for more than a few hundred years out"

There have been a number of stabilizing religious institutions which were built for exactly this purpose, both Jewish and Christian. They intended to maintain the faiths of members and peace between them, and have been somewhere between very and incredibly successful in doing so, albeit imperfectly. Similarly, Temple-era Judaism seems to have managed a fairly stable system for several hundred years, including rebuilding the Temple after its destruction. We also have the example of Chinese dynasties and at least several European monarchies which intended to plan for centuries, and were successful in doing so.

But given the timeline of "more than a few hundred years out," I'm not sure there are many other things which could possibly qualify. On a slightly shorter timescale, there are many, many more examples. The US government seems like one example - an intentionally built system which lasted for centuries and spawned imitators which were also largely successful. But on larger and smaller scales, we've seen 200+ year planning be useful in many, many cases where it occurred.

The question of what portion of such plans worked out is a different one, and a harder one to answer, but it's obviously a minority. I'm also unsure whether there are meaningful differentiators between cases where it did and didn't work, but it's a really good question, and one that I'd love to see work on.

Comment by Davidmanheim on Guarding Against Pandemics · 2021-09-21T20:21:20.634Z · EA · GW

I'm not responding on behalf of GAP, but since I've been working a bit with them, I'll try to answer.
 

  1. The efforts to find and work with Republican champions are ongoing, and there are at least some (non-public, in the works) efforts which are definitely on the Republican side. I don't know all the details, but I'm assuming the issue for now is that 1) they haven't set up infrastructure for donations independent of ActBlue, and 2) the Democrats are in power, and lots of things are happening immediately, so they are the primary target for much current lobbying.
  2. This is a definite topic of discussion, and I'm not sure there's a way to answer briefly, but I think that well-run and careful lobbying by a group which aligns itself with the EA movement, but in no way claims to reflect the movement, has limited risks. That said, of course it's very difficult to predict how political lobbying plays out, but companies and other movements certainly negotiate this with a decent ability to avoid trouble. More than that, the alternative which has been embraced so far is to not have any outlet to engage in lobbying directly, and it seems like an important tool, so continuing not to use it seems ill-advised - but I'd be happy to have a more in-depth discussion of this with you.
  3. I can't name who has been involved in discussions, but I'll vouch for the fact that several of the people I would want in the loop on this are, in fact, in the loop. I can't promise that they will have sufficient veto-power, but I think Gabe is sufficiently aware of the issues and the risks of unilateralism that it's fine.

If anyone has a contrary impression on any of these points, feel free to say so, and/or reach out to me privately.

Comment by Davidmanheim on Guarding Against Pandemics · 2021-09-21T17:37:38.915Z · EA · GW

As I said in another comment, I'm working with GAP, but am not speaking on their behalf. And feel free to wait until the presentation before deciding about donating, but yes, there is already effort to push on both sides of the aisle. That said, it's a waste of time and money for a narrowly focused lobbying group to aim to support equal numbers of people on both sides of the aisle, rather than opportunistically finding champions for individual issues on both sides, and building relationships that allow us to get specific items passed. 

That means that when there is a bill which is getting written by the party currently in power in the House, GAP is going to focus on key members of the relevant committees - which is largely, but certainly not exclusively, the party in power. And given US political dynamics, it is likely that GAP will be talking even more to Republicans during the next year, to ensure they have champions for their work during the next Congress.

Comment by Davidmanheim on What we learned from a year incubating longtermist entrepreneurship · 2021-09-01T08:23:54.535Z · EA · GW

Are the Slack or other community resources still being used / are they still available for additional people to join?

Comment by Davidmanheim on "Epistemaps" for AI Debates? (or for other issues) · 2021-09-01T07:48:46.083Z · EA · GW

(I'm also working on the project.)

We definitely like the idea of doing semantically richer representation, but there are several components of the debate that seem much less related to arguments, and more related to prediction - but they are interrelated.

For example, 
Argument 1: Analogies to the brain predict that we have sufficient computation to run an AI already
Argument 2: Training AI systems (or at least hyperparameter search) is more akin to evolving the brain than to running it. (contra 1)
Argument 2a: The compute needed to do this is 30 years away.
Argument 2b (contra 2a): Optimizing directly for our goal will be more efficient.
Argument 2c (contra 2b): We don't know what we are optimizing for, exactly.
Argument 2d (supporting 2b): We still manage to do things like computer vision.

Each of these has implications about timelines until AI - we don't just want to look at strength of the arguments, we also want to look at the actual implication for timelines.

Semantica Pro doesn't do quantitative relationships which allow for simulation of outcomes and uncertainty, like "argument X predicts progress will be normal(50%, 5%) faster." On the other hand, Analytica doesn't really do the other half of representing conflicting models - but we're not wedded to it as the only way to do anything, and something like what you suggest is definitely valuable. (But if we didn't pick something, we could spend the entire time until ASI debating preliminaries or building something perfect for what we want.)
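As a minimal illustration of the kind of quantitative relationship we have in mind - all of the numbers below are hypothetical placeholders, not anything from an actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical baseline distribution over years until some AI milestone.
baseline_years = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)

# "Argument X predicts progress will be normal(50%, 5%) faster":
# sample a speed-up factor and apply it to the baseline timeline.
speedup = rng.normal(loc=0.50, scale=0.05, size=n)
adjusted_years = baseline_years / (1 + speedup)

print(f"baseline median: {np.median(baseline_years):.1f} years")
print(f"adjusted median: {np.median(adjusted_years):.1f} years")
print(f"P(adjusted < 20 years): {np.mean(adjusted_years < 20):.0%}")
```

The point is that each argument carries not just a strength of support, but a quantitative implication that we'd like to propagate into the timeline distribution.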

It seems like what we should do is have different parts of the issue represented in different / multiple ways, and given that we've been working on cataloging the questions, we'd potentially be interested in collaborating.

Comment by Davidmanheim on Is it really that a good idea to increase tax deductibility ? · 2021-08-31T17:13:42.211Z · EA · GW

Per my answer, I think it's likely that eliminating tax deductibility would be net negative without other simplifications of the tax code to eliminate the alternative tax shelters.

Comment by Davidmanheim on Is it really that a good idea to increase tax deductibility ? · 2021-08-31T17:11:39.079Z · EA · GW

Agreeing with others that this is a good question - and it's not simple. (Because, of course, policy debates should not appear one-sided!)

Two of the key reasons I'm a fan of tax deductibility are that it's a clear signal about whether something is a charity, and that it's a behavioral incentive to donate - people feel like they are getting something from donating. (Never mind the fact that they are spending - it's the same cognitive effect when people feel like they "saved money" by buying something they don't need on sale.)

On the other hand, I think Rob Reich is right about this, and we'd be better off switching to a system that doesn't undermine our taxation system generally - though tax deductibility is far from the only culprit, and if this were the single change, the remaining loopholes have less publicly beneficial side effects, so I would guess it's a net negative unless coupled with broader reform. Note that I haven't read Rob's latest book (he is an incredibly fast writer!), and maybe he talks about this. If not, I'd be interested in asking him for his take.

Given all of this, I don't have a strong take on this - and short of general reform, I'd at least be in favor of expanding tax credits for EA charities, so that they aren't relatively disadvantaged as places to give.

Comment by Davidmanheim on Towards a Weaker Longtermism · 2021-08-21T19:05:45.682Z · EA · GW

Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify.

I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.

Comment by Davidmanheim on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-16T17:32:44.334Z · EA · GW

Who should buy them?

I'm concerned that it would look really shady for OpenPhil to do so, but maybe Sam Bankman-Fried or another very big EA donor could do it - but then the purchaser needs to figure out who to pick to actually manage things, since they aren't experts themselves. (And they need to ensure that their control doesn't undermine the publication's credibility - which seems quite tricky!)

Comment by Davidmanheim on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-16T17:29:59.938Z · EA · GW

Yeah, cloning humans is effectively illegal almost everywhere. (I specifically know it's banned in the US and Israel, I assume the EU's rules would be similar.)

Comment by Davidmanheim on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-16T17:16:55.847Z · EA · GW

I think the budget to do this is easily tens of millions a year, for perhaps a decade, plus the ability to hire the top talent, and it likely only works as a usefully secure system if you open-source it. Are there large firms who are willing to invest $25m/year for 4-5 years on a long-term cybersecurity effort like this, even if it seems somewhat likely to pay off? I suspect not - especially if they worry (plausibly) that governments will actively attempt to interfere in some parts of this.

Comment by Davidmanheim on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-16T17:13:31.778Z · EA · GW

I will go further than that. Everyone I know in infosec, including those who work for either the US or the Israeli government, seems to strongly agree with the following claim:
"No amount of feasible security spending will protect your network against a determined attempt by an advanced national government (at the very least, US, Russia, China, and Israel) to get access. If you need that level of infosec, you can't put anything on a computer."

If AI safety is a critical enabler for national security, and/or AI system security is important for their alignment, that means we're in deep trouble.

Comment by Davidmanheim on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-16T17:07:46.582Z · EA · GW

To clarify, this is $100m over around 5 years, or $20m/year - which is a good start, but far less than $100m/year.

Comment by Davidmanheim on Towards a Weaker Longtermism · 2021-08-16T17:05:41.438Z · EA · GW

I think I can restate your view: there is no objective moral truth, but individual future lives are equally valuable to individual present lives (I assume we will ignore the epistemic and economic arguments for now), and your life in particular has no larger claim on your values than anyone else's.

That certainly isn't incoherent, but I think it's a view that few are willing to embrace - at least in part because even though you do admit that personal happiness, or caring for those close to you, is instrumentally useful, you also claim that it's entirely contingent, and that if new evidence were to emerge, you would endorse requiring personal pain to pursue greater future or global benefits.

Comment by Davidmanheim on Get 100s of EA books for your student group · 2021-08-16T16:51:14.518Z · EA · GW

I think the ROI for existing EAs is far lower, since they are already engaged, and less likely to increase engagement because they got a book - though I'd also love free books.

Comment by Davidmanheim on Towards a Weaker Longtermism · 2021-08-15T06:57:07.950Z · EA · GW

This isn't really relevant to the point I was making, but the idea that longtermism has objective long-term value, while ice cream now is a moral failing, seems to presuppose moral objectivism. And that seems to be your claim - the only reason to value ice cream now is to make us better at improving the long term in practice. And I'm wondering why "humans are intrinsically unable to get rid of value X" is a criticism / shortcoming, rather than a statement about our values that should be considered in maximization. (To some extent, the argument for why to change our values is about coherency / stable time preferences, but that doesn't seem to be the claim here.)

Comment by Davidmanheim on Towards a Weaker Longtermism · 2021-08-15T06:47:51.567Z · EA · GW

Yeah, it should read "long-term *risk*" - fixing now, thanks!