Posts

EA is a global community - but should it be? 2022-11-18T08:23:14.592Z
"Doing Good Best" isn't the EA ideal 2022-09-16T12:06:24.305Z
Interesting vs. Important Work - A Place EA is Prioritizing Poorly 2022-07-28T10:08:24.385Z
Making Effective Altruism Enormous 2022-07-24T13:29:46.070Z
ALTER Israel - Mid-year 2022 Update 2022-06-12T09:22:14.083Z
Baby Cause Areas, Existential Risk, and Longtermism 2022-05-25T13:13:57.724Z
Contest - A New Term For "Eucatastrophe" 2022-02-17T20:48:16.004Z
Reflecting on Steering, Pitfalls, and History 2021-11-21T08:05:28.725Z
Noticing the skulls, longtermism edition 2021-10-05T07:08:28.304Z
Time-average versus Individual-average Total Utilitarianism 2021-09-26T07:05:42.403Z
Towards a Weaker Longtermism 2021-08-08T08:31:03.727Z
Maybe Antivirals aren’t a Useful Priority for Pandemics? 2021-06-20T10:04:25.592Z
Announcing "Naming What We Can"! 2021-04-01T10:17:28.990Z
Project Ideas in Biosecurity for EAs 2021-02-16T21:01:44.588Z
The Upper Limit of Value 2021-01-27T14:15:03.200Z
The Folly of "EAs Should" 2021-01-06T07:04:54.214Z
A (Very) Short History of the Collapse of Civilizations, and Why it Matters 2020-08-30T07:49:42.397Z
New Top EA Cause: Politics 2020-04-01T07:53:27.737Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-04T17:06:42.972Z
International Relations; States, Rational Actors, and Other Approaches (Policy and International Relations Primer Part 4) 2020-01-22T08:29:39.023Z
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) 2020-01-05T12:54:34.826Z
Policy and International Relations - What Are They? (Primer for EAs, Part 2) 2020-01-02T12:01:21.222Z
Introduction: A Primer for Politics, Policy and International Relations 2019-12-31T19:27:46.293Z
When To Find More Information: A Short Explanation 2019-12-28T18:00:56.172Z
Carbon Offsets as a Non-Altruistic Expense 2019-12-03T11:38:21.223Z
Davidmanheim's Shortform 2019-11-27T12:34:36.732Z
Steelmanning the Case Against Unquantifiable Interventions 2019-11-13T08:34:07.820Z
Updating towards the effectiveness of Economic Policy? 2019-05-29T11:33:17.366Z
Challenges in Scaling EA Organizations 2018-12-21T10:53:27.639Z
Is Suffering Convex? 2018-10-21T11:44:48.259Z

Comments

Comment by Davidmanheim on Announcing Nonlinear Emergency Funding · 2022-12-01T19:53:05.893Z · EA · GW

No, that's not how any of that works.

Comment by Davidmanheim on Announcing Nonlinear Emergency Funding · 2022-12-01T19:50:44.076Z · EA · GW

Also, the naming was completely on me, not them, as I explained in another comment.

Comment by Davidmanheim on Announcing Nonlinear Emergency Funding · 2022-12-01T19:50:00.443Z · EA · GW

I know nothing about most of the discussion here, but...

they named an eponymous prize without consent from the named person or their estate

That wasn't them, it was me - I came up with the prize and suggested the name. Ryan reasonably reached out to them to get clarity, and I wasn't initially in the loop, but that's a communication problem, not an ethics issue. And if you look at the comments, I really don't think it was mishandled.

Comment by Davidmanheim on Where are you donating this year, and why? (Open thread) · 2022-11-28T16:13:30.951Z · EA · GW

While my work is focused mostly on longtermist interventions, most of my donations for the first half of the year were to GiveWell as unrestricted donations. I also did a smaller amount of political giving to EA-aligned candidates, which was partially from my 10% of income dedicated to EA giving. (I split those political donations 50-50 between my bucketed EA spending and my personal spending.) I also gave a small amount to MIRI via every.org.

I have not yet made all of my second-half-of-2022 donations.

Comment by Davidmanheim on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-22T18:22:32.092Z · EA · GW

I was inexact - by "post-foom" I simply meant after a capabilities takeoff occurs, regardless of whether that takes months, years, or even decades - as long as humanity doesn't manage to notice and successfully stop ASI from being deployed.

Comment by Davidmanheim on The FTX crisis highlights a deeper cultural problem within EA - we don't sufficiently value good governance · 2022-11-22T08:19:43.440Z · EA · GW

I agree with this substantively, and agree that it's a bit unreasonable to think EA orgs should be similar to large corporations - but we should aspire to do this well. The status quo for small nonprofits is pretty dismal, and we should at the very least be better than peer organizations, and ideally even better than that. I also think that this is a great area in which to work with Ian David Moss, who has a lot of experience in the area and is well connected with other experts.

Comment by Davidmanheim on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-22T07:30:46.182Z · EA · GW

I worry that a naïve approach to complexity and pluralism is detrimental, but agree that this is important. As you said, "the complex web of impacts of research also need to be untangled. This is tricky, and needs to be done very carefully."

I also think that you're preaching to the choir, in an important sense. The people in EA working on existential risk reduction are aware of the complexity of the debates and discussions, while the average EA posting on the forum seems not to be. This is equivalent to the difference between climate experts' views and those of the lay public.

To explain the example more, I think that most people's view of climate risk isn't that it destabilizes complex systems and may contribute to risk understood broadly in unpredictable ways. Their view is that it's bad, and we need to stop it, and that worrying about other things isn't productive because we need to do something about the bad thing now. But this leads to approaches that could easily contribute to risks rather than mitigate them - a more fragile electrical grid, or, as you cited from Tang and Kemp, more reliance on mitigations like geoengineering that are poorly understood and build in new systemic risks of failure.

Of course, popular science books don't necessarily go into the details, or when read casually leave the lay public with an at least somewhat misleading view - but one that pushes in the direction of supporting actions that the experts recommend. (Note that as a general rule, people working in the climate space are not pushing for geoengineering; they are pushing for emissions reductions, work increasing resilience to impacts, and similar.) The equivalent in EA is skimming The Precipice and ignoring Toby's footnotes, citations, and cautions. Those first starting to work on risk and policy, or writing EA Forum posts, often have this view, but I think it's usually tempered fairly quickly via discussion. Unfortunately, many who see the discussions simply claim longtermism is getting everything wrong, while agreeing with us on both priorities and approaches.

So I agree that we need to appreciate the more sophisticated approaches to risk, and blend them with cause prioritization and actual consideration of what might work. I also strongly applaud your efforts to inject nuance and push in the right direction, appropriately, without ignoring the complexity. And yes, squaring the circle with effectiveness is a difficult question - but I think it's one that is appreciated.

Comment by Davidmanheim on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-22T07:09:12.747Z · EA · GW

I think you're confused about what different parts of the AI risk community are concerned about. Your explanation addresses the risks of human-caused, AGI-assisted catastrophe. What Eliezer and others are warning about is a post-foom misaligned AGI. And no, a united, peaceful, adaptable world that managed to address the specific risks of pandemics and nuclear war would not be in a materially better position to "stave off" a highly-superhuman agent that controls its communications systems. This is akin to the paradigm of computer security by patching individual components - it will keep out the script-kiddies, but not the NSA.

So as far as I understand it, the key question that splits the different parts of the AI risk community is the timeline for AGI takeoff, and that has little to do with cultural approaches to risk, and everything to do with the risk analysis itself. (And we already had the rest of this discussion in the comments on the link to your views on non-infallible AGI.)

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-20T08:25:40.633Z · EA · GW

I think that social ties are useful, yet having a sprawling global community is not. I think that you're attacking a bit of a straw man, one which claims that we should have no relationships or community whatsoever.

I also think that there is an unfair binary you're assuming, where on one side you have "unpaid, ad-hoc community organising" and on the other you have the current abundance of funding for community building. Especially in EA hubs like London, the Bay Area, and DC, the local community can certainly afford to pay for events and event managers without needing central funding, and I'd be happy for CEA to continue to do community building - albeit with the expectation that communities do their own thing and pay for their own events, which would be a very significant change from the current environment. Oh, and I also don't live in an EA hub, and have never attended an EAG - but I do travel occasionally, and have significant social interaction, remotely, with both EAs and non-EAs working in pandemic preparedness. The central support might be useful, but it's far from the only way to have EA continue.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-20T07:17:52.984Z · EA · GW

Does this make the idea of an EA 'community' self-defeating?


I think that it's possible to have a community that minimizes the distortion of these social dynamics, or at least is capable of doing significant good despite the distortions - but as I argued in the post, at scale this is far harder, and I think it might be net negative to try to build a single global community the way EA seems to have decided to do.

My own view is that things that work in the world are rare, so when you find one you need to do what you can to replicate or widen it. 

Agreed - and that was one of the key things early EA emphasized - and it's been accepted, in small part due to EA, to the point that it is now conventional wisdom.

I want to fully acknowledge the massive pathologies of the formal aid sector, but I work to mitigate those in the course of my job. I haven't, to be honest, seen anything from the EA community that would help me with that other than an articulation of fairly obvious general principles.

I don't think that EA as a movement is well placed to provide ways to reform traditional aid. As you point out, it has many pathologies, and I don't think there is a simple answer to fix a complex system deeply embedded in geopolitics and social reality. I do think that EA-promoted ideas, including giving directly, have the potential to displace some of the broken systems, and we should work towards figuring out where simpler systems can replace current complex but broken ones. I also think that an EA-like focus on measuring outcomes helps push for the parts of traditional aid that do work - that is, it identifies specific programs which are effective, and evaluates and scales them. This isn't to say that traditional aid doesn't have related efforts which are also successful, but I think overall it's helpful to have external pushes for this work from EA and people who embrace related approaches.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T17:01:54.539Z · EA · GW

I hope they can, but I don't know that it's easy to direct groups that way. My biggest concern is with college EA groups, where well-intentioned 21-year-olds with very limited life experience are running groups, often without much external monitoring.

And regarding material on theories of change, I'm skeptical that the topic can be taught well without somewhat deep engagement. In grad school, it took thinking, feedback, and practice to get to the point where we could coherently lay out a useful theory of change.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T16:13:23.205Z · EA · GW

First, I didn't say there are no benefits. 

Second, I don't have a clear enough vision to lay out the alternative.

Third, I'm skeptical of doing a cost-benefit analysis without considering options for a specific actor. As someone not controlling any of the decisions, I can't usefully compare alternative specific actions I would take. If CEA wants to evaluate 3 different potential strategies, they could do a useful CBA.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T16:10:03.424Z · EA · GW

It seems like you're ignoring that he said EA has an actively bad reputation, and viewing this as a generic claim about not wanting to share a view others don't embrace.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T16:07:00.524Z · EA · GW

This seems very plausibly a better direction. I think we agree there is something wrong, and the direction you're pointing may be a better one - but I'm concerned, because I don't see a way to shift an extant and large community, and think that we need a more concrete theory of change...

Speaking of which, "I still don’t know where to find a good, simple article or video that describes how to create a theory of change" - you should have asked! I'd recommend here and here. (I also have a couple more PDFs of relevant articles from classes in grad school, if you want.)

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T15:58:46.898Z · EA · GW

Completely fair - I'm not endorsing him, just pointing to a source for the allegation. (And more recent allegations, about a complete lack of control and self-dealing, are far more damning.)

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-19T15:56:45.165Z · EA · GW

"What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation."

1. The community is unhealthy in various ways. 
2. You're suggesting centralizing around high trust, without a mechanism to build that trust.

I don't think that the EA community could have stopped SBF, but they absolutely could have been independent of him, in ways that would mean EA as a community didn't expect a random person most of us had never heard of before this to automatically be a trusted member of the community. Calling people out is far harder when they are a member of your trusted community, and the people who said they had concerns didn't say so loudly because they feared community censure. That's a big problem.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T13:36:32.022Z · EA · GW

I certainly think that having an academic discipline devoted to AI safety is an option, but I think it's a bad idea for other reasons; if safety is viewed as separate from ML in general, you end up in a situation similar to cybersecurity, where everyone builds dangerous shit, and then the cyber people recoil in horror and, at best, barely patch the most obvious problems.

That said, yes, I'm completely fine with having informal networks of people working on a goal - that exists regardless of our efforts. But a centralized effort at EA community building in general is a different thing, and as I argued here, I tentatively think it is bad, at least at the margin.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T11:50:46.442Z · EA · GW

I don't mean either (1) or (2), but I'm not sure it's a single argument. 

First, I think it's epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more, it's good to view your community as a valid place to invest effort independent of eventual value. Without that, I think people often end up being exploitative, pushing people to do things instead of treating them respectfully, or being dismissive of others - for example, telling people they shouldn't be in EA because they aren't making the right choices. If your community isn't just about the eventual altruistic value its members will create, those failure modes are less likely.

Second, it's easy to lose sight of eventual goals when focused on instrumental ones, and to get stuck in a mode where you are goodharting community size or dollars donated - both community size and total dollars seem like unfortunately easy attractors for this failure.

Third, and relatedly, I think that people should be careful not to build models of impact that are too indirect, because they often fail at unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T11:40:32.839Z · EA · GW

I think that my earlier attempt to discuss this mostly matches what you're saying. 

A key thing that changed is that I no longer think we should try to "manage things at the EA community level" - and if we're not attempting that, we should reconceptualize what it means to be good community members and leaders, and what failure modes we should anticipate and address.

The other thing I want is more ambitious - ideally, in 20+ years I want the ideas of prioritizing giving part of your income, viewing the future as at least some level of moral priority, and cause neutrality to all look the way women's suffrage does: so obviously correct and uncontroversial that it's not a topic of discussion.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T10:08:43.284Z · EA · GW

Ombudsmen, clear rules and norms protecting whistleblowers, more funding transparency, and better disclosure of conflicts of interest.

(None of these relate to having a community, by the way - they are just important things to have if you care about having well run organizations.)

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T10:06:56.666Z · EA · GW

I think that AI safety needs to be promoted as a cause, not as a community. If you have personal moral uncertainty about whether to focus on animal suffering or AI risk, it might make sense to be a vegan AI researcher. But if you have moral uncertainty about what the priority is overall, you shouldn't try to mix the two.

People in machine learning are increasingly of the opinion that there is a risk, and it would be much better to educate them than to try to bring them into a community which has goals they don't, and don't need to, care about.

Comment by Davidmanheim on EA is a global community - but should it be? · 2022-11-18T09:19:02.156Z · EA · GW

I agree that it's premature - I don't think we should cancel EAG in 2023, but I do think that we're likely to make minor changes and keep doing poorly in various ways if we aren't explicitly thinking about the question of what the community should be. 

And I'm certainly open to hearing if and why I'm wrong. But I worry that if we aren't thinking about what the community looks like in a decade, we'll keep stumbling forward ineffectually, with unnecessary and unexpected problems and failures from unplanned growth.

Comment by Davidmanheim on Mass media interventions probably deserve more attention (Founders Pledge) · 2022-11-18T08:52:29.534Z · EA · GW

Different types of media and strategies will have very different effects, and different interventions will have very different levels of effectiveness. Not only that, but this class of intervention is very, very easy to do poorly, can have negative impacts, and the impact of a specific media strategy isn't guaranteed to replicate given changing culture. So I think that treating "media interventions" as a single thing might be a mistake - not one that the program implementers make, but one that the EA community might not sufficiently appreciate. I don't think this analysis is wrong in pointing to mass media as a category, but I do worry that "fund more mass media interventions, because they work" is too simplistic a takeaway. At the very least, I'd be interested in more detailed case studies of when they succeeded or failed, and what types of messages and approaches work.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-17T12:31:00.565Z · EA · GW

Fully agreed.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-17T11:28:44.956Z · EA · GW

It's complicated, and I don't know the exact rules, but I think accredited is enough in this case.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-17T09:53:18.378Z · EA · GW

I didn't claim that it was impossible to generate liquidity, but it's not the "very common" thing of donating stock which was suggested.

I'm aware that there are ownership stakes in private companies which can be transferred - that's not the same as shares. Insider deals involving dilution and sales are subject to investor compacts, and I don't know that SBF would have been able to do this. Even if he could, they'd need to sell the stake, which can be a problem - you can't sell a non-traded stake in a company to a non-qualified investor. So it's not at all as simple as you're implying.

Comment by Davidmanheim on Open Phil is seeking applications from grantees impacted by recent events · 2022-11-16T19:44:55.609Z · EA · GW

I strongly suspect there are legal reasons that covering future clawbacks, especially if they say so explicitly, is not going to be workable, or at least is significantly more complex / dangerous legally.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-16T12:08:00.706Z · EA · GW

I'm claiming it would generally not work in practice. (There's a reason that Founders Pledge suggests pledging money for after you exit, rather than trying to donate earlier than that.)

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-16T11:21:46.118Z · EA · GW

Either way, in a fraud case it seems likely that - until these technical distinctions about control and source of funds are worked out in court - the court would have frozen the assets of the charity, given that it was explicitly created to hand out FTX funds, either at the request of the government, or of the civil litigants, depending on the case.

Is that correct?

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-16T10:42:36.780Z · EA · GW

"it is very common for foundations to receive stock, including in private companies,"


From the parent post: "that only works if the donors are liquid, or the funds can be donated as stock."

In this case, there was no stock to distribute, as they were a private company.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-16T10:40:05.231Z · EA · GW

Yeah, them not being in charge would probably be helpful once things are resolved, but it wouldn't change the legal responsibility of directors not to distribute funds once they have reason to believe the funds had an illegitimate source.

Comment by Davidmanheim on Who's at fault for FTX's wrongdoing · 2022-11-16T10:07:40.168Z · EA · GW

So are people who never attacked EA before suddenly doing so? That isn't what I've seen. I've seen lots of bad-faith takes about how this is proof of what they always thought, and news reporting which is about as accurate as you'd expect - that is, barely correct on the knowable facts, and misleading or confused about anything more complicated than that.

Comment by Davidmanheim on Who's at fault for FTX's wrongdoing · 2022-11-16T10:04:52.996Z · EA · GW

I think you probably need to label your account "EliezerYudkowsky (parody)" because otherwise a few people might not realize you're occasionally being sarcastic, and then you might get banned from Twitter.

Comment by Davidmanheim on Open Phil is seeking applications from grantees impacted by recent events · 2022-11-16T09:41:53.159Z · EA · GW

If a grant / grantee is doing work which aligns with Open Phil's work, but is more properly classified as global health or animal welfare, can they still apply here, should they apply in some other way, or is Open Phil not the correct vehicle?

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-15T20:31:51.124Z · EA · GW

I think you should ask a lawyer before guessing.

From my recollection of working on similar cases - and IANAL, so this is informed guesswork, but I did work at the FTC on fraud cases for a summer - there's no way a judge would deny a request to freeze those funds pre-trial. And especially because of who controls the fund (a mistake, and a problem I think you're right about), they'd need to prove the funds were not derived from the alleged fraud to get the freeze lifted. And the fraud allegation probably doesn't need to claim details about exact time frames, which would likely only come out at trial, after extensive review of the books.

Comment by Davidmanheim on Why didn't the FTX Foundation secure its bag? · 2022-11-15T20:14:07.288Z · EA · GW

I disagree with lots of this, as I said privately before.

Yes, it would be great if funders donated everything upfront and had foundations manage the money. But that only works if the donors are liquid, or if the funds can be donated as stock - often with the donor maintaining voting control, as with startups, for example. In other cases - many cases - it's reasonable to have them put in money as it is needed, and as it is committed. Yes, Dustin says that they have moved towards having lots of money controlled by Open Phil, but evidently that wasn't always the situation. That's fine.

And even if the Future Fund had cash to cover all its outstanding commitments as of the beginning of November (which I suspect they did not, given that at least a reasonable fraction of the donations I know about were to be given over multiple years), the reason they stopped giving out money had nothing to do with lack of cash on hand. They stopped because they legally, and morally, couldn't continue to donate money once they knew it might have been the product of fraud. So if they did what you suggest, and they were given money before it was committed... they still couldn't have done anything differently. Maybe they'd be able to pay it out in, optimistically, another decade, once the lawsuits are all settled. But that doesn't help now, and likely wouldn't change anything, since the funds would be taken by FTX creditors.

The failure here wasn't the way the foundation was managed, it was the fraud.

Comment by Davidmanheim on Thoughts on legal concerns surrounding the FTX situation · 2022-11-14T08:22:35.109Z · EA · GW

You're wrong on this.

FTX Foundation is a legal entity, EIN 88-0669152, incorporated in Delaware. Per a lawyer's letter, it is a "501(c)(3) organization that has applied for recognition as a public charity with the United States Internal Revenue Service (IRS)".

Comment by Davidmanheim on AI Safety Microgrant Round · 2022-11-14T07:33:24.312Z · EA · GW

Personally, I want to recommend that anyone working on technical AI safety without a computer from after 2018 apply for a grant for a new one. (Grad students with access to university compute are a slightly less critical case, but if it's affecting your work at all, you should still do this.)

Comment by Davidmanheim on X-risk Mitigation Does Actually Require Longtermism · 2022-11-13T19:58:39.103Z · EA · GW

Most of the things that are being pursued as longtermist interventions only require caring about our grandchildren, or maybe great-grandchildren, which is well within the scope of even many ethical frameworks that care about preferences but not future lives. The rest of the interventions potentially require caring about the next, say, 1,000 years - which still doesn't require anything like the actual longtermist assumptions. (Anything further out than that isn't really going to be amenable to the types of actions we're taking anyway.)


Comment by Davidmanheim on Thoughts on legal concerns surrounding the FTX situation · 2022-11-13T19:27:56.998Z · EA · GW

IANAL, but...

  1. The FTX Foundation is a legally separate entity, but not all the grants were paid out from the foundation. I have no idea how many steps the clawbacks can go through, and given the relationship between FTX and the foundation, I imagine this is at least a question.
  2. It's harder for the creditors to sue from the US, but they can, and depending on the amounts they likely will - individuals who got small grants in foreign countries are probably at somewhat less risk.

    For those who got paid by a different legal entity, here is the bankruptcy filing, which lists the entities: https://s.wsj.net/public/resources/documents/alameda-filing-11112022.pdf

Comment by Davidmanheim on The FTX Future Fund team has resigned · 2022-11-13T12:16:18.884Z · EA · GW

This isn't your fault, but you almost certainly "benefitted" - any (increased) funding from other EA funders is a counterfactual result of FTX generously funding other groups that would otherwise have competed for funds. And many regrantors certainly helped MIRI more indirectly, by funding things you would have wanted that helped MIRI's agenda in various ways.

Comment by Davidmanheim on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-10T10:58:22.866Z · EA · GW

I assume not, no.

Comment by Davidmanheim on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-09T10:04:35.575Z · EA · GW

That's a fractional reserve scheme - they said they were carrying it all in untouched accounts.

Comment by Davidmanheim on Simulation models could help prepare for the next pandemic · 2022-11-01T18:46:48.029Z · EA · GW

(I'd potentially be willing to give advice on that sort of model as well.)

Comment by Davidmanheim on The Fermi Paradox has not been dissolved · 2022-10-27T10:25:27.069Z · EA · GW

Except for the fact that those aren't the only two possibilities.


Right, but the other hypotheses need large complexity penalties to explain why the impact of time travelling is invisible, so the absence of visible time travelers is still pretty strong evidence.

Comment by Davidmanheim on Should EA influence governments to enact more effective interventions? · 2022-10-23T11:39:56.717Z · EA · GW

First, not all EAs agree that all side-constraints should be binding. Second, most lobbying isn't about donations - donations are used to help get someone elected instead of their opponent, but that doesn't usually get the politician to change their mind on a topic - it just gets the donor time to discuss things with the elected official. And so, third, informing government officials often simply makes the issue salient, rather than changing opinions - most of the time, for most topics, governments don't do X because they are busy and no one got it on their agenda. Lobbying can change that. Of course, some lobbying is more pernicious - but I think that those types of lobbying aren't necessary for many EA causes, which are already widely shared, if unreflectively.

Comment by Davidmanheim on Should EA influence governments to enact more effective interventions? · 2022-10-23T11:35:30.336Z · EA · GW

I think that this is correct, and it is why such efforts have already begun, many quite a few years ago.

Comment by Davidmanheim on The Bioethicists are (Mostly) Alright · 2022-10-23T08:21:23.535Z · EA · GW

Per HHS, "The Belmont Report... is the outgrowth of an intensive four-day period of discussions that were held in February 1976 at the Smithsonian Institution's Belmont Conference Center supplemented by the monthly deliberations of the Commission that were held over a period of nearly four years."

Not sure who was part of the four-day discussion, but per that site, the commission included, among others:

  • Albert R. Jonsen, Ph.D., Associate Professor of Bioethics, University of California at San Francisco.
  • Karen Lebacqz, Ph.D., Associate Professor of Christian Ethics, Pacific School of Religion.

Comment by Davidmanheim on Cultural EA considerations for Nordic folks · 2022-10-13T07:01:45.849Z · EA · GW

I think that checks out, though it depends on being in a high-tax state, with marginal income taxed in the top bracket - the 37% rate is for income above half a million dollars. You need enough income to actually get the large tax deduction in the year you make the donation - startup founders and inheritors of appreciated assets could often get a tax deduction for more than their normal income, if they donate it all at once. For lower-income people, it also takes a large donation to make it better to itemize than to take the standard deduction. So this only occurs for relatively wealthy folks with high incomes. (Which probably describes you at this point, so congrats!)

But overall, I'd still say it's not "quite common."
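
To make the itemizing point concrete, here's a minimal sketch in Python - the $12,950 standard deduction (2022, single filer), the rate, and the dollar amounts are illustrative assumptions, not figures from the comment above:

```python
def marginal_tax_savings(donation, other_itemized, marginal_rate,
                         standard_deduction=12_950):
    """Extra tax saved by donating versus not donating, for a filer who
    takes the better of itemizing or the standard deduction."""
    deduction_without = max(standard_deduction, other_itemized)
    deduction_with = max(standard_deduction, other_itemized + donation)
    return (deduction_with - deduction_without) * marginal_rate

# With few other deductions, a $1,000 donation yields no extra tax savings...
print(marginal_tax_savings(1_000, other_itemized=3_000, marginal_rate=0.37))   # 0.0
# ...while the same donation on top of $20k of itemized deductions saves $370.
print(marginal_tax_savings(1_000, other_itemized=20_000, marginal_rate=0.37))  # 370.0
```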


Comment by Davidmanheim on Cultural EA considerations for Nordic folks · 2022-10-12T19:10:44.101Z · EA · GW

  • In the US, it is quite common to donate large sums of money. This is because there are significant tax benefits in doing so, effectively meaning that in some cases you can somewhat choose to either pay taxes or to donate to an organization of your choice.


I've seen this misunderstanding from non-Americans before, which seems weird because most places have similar setups. In the US, charity is tax deductible, but not a tax credit - so at any income level, you will personally have more money by not donating than by donating. And given that typical maximum combined state and federal marginal tax rates are under 50%, it means you are losing at least $0.50 for every dollar you give away - far less than paying the full amount, but not anything like "pick whether to give the dollar to the government or to a charity."
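
As a minimal sketch of the deduction-versus-credit arithmetic (the flat 50% combined marginal rate is an assumed upper bound for illustration):

```python
def net_cost_of_deductible_donation(donation, marginal_rate=0.50):
    """Deduction: a $1 donation reduces taxable income by $1, so the donor
    recovers donation * marginal_rate in taxes and pays the rest."""
    return donation - donation * marginal_rate

def net_cost_of_credited_donation(donation):
    """Hypothetical 100% tax credit: the donation would directly offset
    taxes owed, making the donor's net cost zero - this is NOT US law."""
    return 0.0

print(net_cost_of_deductible_donation(1_000))  # 500.0 - the donor is still out $500
print(net_cost_of_credited_donation(1_000))    # 0.0 - "government dollar or charity dollar"
```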