Posts

How should Effective Altruists think about Leftist Ethics? 2021-11-27T13:25:37.281Z
A Red-Team Against the Impact of Small Donations 2021-11-24T16:03:40.479Z
[Linkpost] Apply For An ACX Grant 2021-11-12T09:44:36.017Z
Why aren't you freaking out about OpenAI? At what point would you start? 2021-10-10T13:06:40.911Z
What is the role of public discussion for hits-based Open Philanthropy causes? 2021-08-04T20:15:28.182Z
Writing about my job: Internet Blogger 2021-07-19T20:24:31.357Z
Does Moral Philosophy Drive Moral Progress? 2021-07-02T21:22:24.111Z
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare 2021-06-04T21:08:11.200Z
Base Rates on United States Regime Collapse 2021-04-05T17:14:22.775Z
Responses and Testimonies on EA Growth 2021-03-10T23:22:16.613Z
Why Hasn't Effective Altruism Grown Since 2015? 2021-03-09T14:43:01.316Z

Comments

Comment by AppliedDivinityStudies on Liberty in North Korea, quick cost-effectiveness estimate · 2021-11-30T22:27:36.302Z · EA · GW

I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so. I don't have good intuitions on this, but the idea doesn't seem absurd to me.

Unrelated to NK, many people suffer immensely from terminal illnesses, but we still deny them the right to assisted suicide. For very good reasons, we have extremely strong biases against actively killing people, even when their lives are clearly net negative.

So yes, I think it's plausible that many humans living in extreme poverty or under totalitarian regimes are experiencing extremely negative net utility, and under some ethical systems, that implies that it would be a net good to let them die.

That doesn't mean we should promote policies that kill North Korean people or stop giving humanitarian food and medical aid.

Comment by AppliedDivinityStudies on Why hasn’t EA found agreement on patient and urgent longtermism yet? · 2021-11-29T22:29:49.675Z · EA · GW

EA has consensus on shockingly few big questions. I would argue that not coming to widespread agreement is the norm for this community.

Think about:

  • neartermism vs. longtermism
  • GiveWell-style CEAs vs. Open Phil-style explicitly non-transparent hits-based giving
  • Total Utilitarianism vs. Suffering-focused Ethics
  • Priors on the hinge-of-history hypothesis
  • Moral Realism

These are all incredibly important and central to a lot of EA work, but as far as I've seen, there isn't strong consensus.

I would describe the working solution as some combination of:

  • Pursuing different avenues in parallel
  • Having different institutions act in accordance with different worldviews
  • Focusing on work that's robust to worldview diversification

Anyway, that's all to say, you're right, and this is an important question to make progress on, but it's not really surprising that there isn't consensus.

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-29T14:16:30.980Z · EA · GW

I think I see the confusion.

No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-28T15:52:55.671Z · EA · GW

Uhh, I'm not sure if I'm misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.

Comment by AppliedDivinityStudies on How should Effective Altruists think about Leftist Ethics? · 2021-11-28T15:50:27.164Z · EA · GW

I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong, maybe college EAs are heavily selected for not being already committed to leftist causes.

I don't think I'm the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty democratic, but with some libertarian influences, and generally just not that interested in politics. There's probably better data on this somewhere, or at least the EA-related SlateStarCodex reader survey.

Comment by AppliedDivinityStudies on How should Effective Altruists think about Leftist Ethics? · 2021-11-27T21:31:42.764Z · EA · GW

Okay, as I understand the discussion so far:

  • The RP authors said they were concerned about PR risk from a leftist critique
  • I wrote this post, explaining how I think those concerns could more productively be addressed
  • You asked, why I'm focusing on Leftist Ethics in particular
  • I replied, because I haven't seen authors cite concerns about PR risk stemming from other kinds of critique

That's all my comment was meant to illustrate, I think I pretty much agree with your initial comment.

Comment by AppliedDivinityStudies on How should Effective Altruists think about Leftist Ethics? · 2021-11-27T21:28:59.594Z · EA · GW

As I understand your comment, you think the structure of the report is something like:

  1. Here's our main model
  2. Here are its implications
  3. By the way, here's something else to note that isn't included in the formal analysis

That's not how I interpret the report's framing. I read it more as:

  1. Here's our main model focused on direct benefits
  2. There are other direct benefits, such as Charter Cities as Laboratories of Governance
  3. Those indirect benefits might outweigh the direct ones, and might make Charter Cities attractive from a hits-based perspective
  4. One concern with the conception of Charter Cities as Laboratories of Governance is that it adds to the neocolonialist critique.
  5. "the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial... Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities."

So that's a bit different. It's not "here's a random side note". It's "Although we focus on modeling X, Charter Cities advocates might say the real value comes from Y, but we're not focusing on Y, in part, because of this neocolonialist critique."

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-27T21:23:12.206Z · EA · GW

Yeah, that's a good question. It's underspecified, and depends on what your baseline is.

We might say "for $1 donated, how much can we increase consumption". Or "for $1 donated, how much utility do we create?" The point isn't really that it's 10x or 5x, just that one opportunity is roughly 2x better than the other.

https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat

So if we are giving to, e.g., encourage policies that increase incomes for average Americans, we need to increase them by $100 for every $1 we spend to get as much benefit as just giving that $1 directly to GiveDirectly recipients.

That's not exactly "Return on Investment", but it's a convenient shorthand.
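To make that shorthand concrete, here's a toy sketch of the benchmark in the linked Open Phil post. The numbers and function are purely illustrative (mine, not GiveWell's actual model), treating $1 given to a GiveDirectly recipient as the unit of benefit:

```python
# Toy model: $1 of direct transfers to a GiveDirectly recipient = 1 unit of
# benefit. Since recipients are roughly 100x poorer than average Americans,
# $1 of US income gains counts for only ~1/100 of a unit.

GIVEDIRECTLY_UNITS_PER_DOLLAR = 1.0

def us_policy_units(income_gain_dollars):
    """Benefit units from raising average US incomes (illustrative 100:1 discount)."""
    return income_gain_dollars / 100

# A policy dollar must generate $100 of US income gains just to break even
# with handing that dollar directly to a GiveDirectly recipient:
assert us_policy_units(100) == GIVEDIRECTLY_UNITS_PER_DOLLAR
```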

Comment by AppliedDivinityStudies on How should Effective Altruists think about Leftist Ethics? · 2021-11-27T17:36:10.700Z · EA · GW

Thanks! Really appreciate getting a reply from you, and thanks for clarifying how you meant this passage to be understood.

I agree that you don't claim the PR risks should disqualify charter cities, but you do cite them as a concern, right? I think part of my confusion stems from the distinction between "X is a concern we're noting" and "X is a parameter in the cost-effectiveness model", and from trying to understand the relative importance of the various qualitative and quantitative arguments made throughout.

I.e., one way of interpreting your report would be:

  1. There are various ways to think about the benefits of Charter Cities
  2. Some of those ways are highly uncertain and/or difficult to model; here are some brief comments on why we think so
  3. We're going to focus on quantitatively modeling this one path to impact
  4. On the basis of that model, we can't recommend funding Charter Cities and don't believe that they're cost-effective for that particular path to impact

In that case, it makes less sense for me to think of the neocolonialism critique as an argument against Charter Cities, and more sense to think of it as an explanation for why you didn't choose to prioritize analyzing a different path to impact.

Is that about right? Or closer to right than my original interpretation?

Comment by AppliedDivinityStudies on How should Effective Altruists think about Leftist Ethics? · 2021-11-27T17:24:21.252Z · EA · GW

Yes that's true. Though I have not read any EA report that includes a paragraph of the flavor "Libertarians are worried about X, we have no opinion on whether or not X is true, but it creates substantial PR-risk."

That might be because libertarians are less inclined to drum up big PR-scandals, but it's also because EAs tend to be somewhat sympathetic to libertarianism already.

My sense is that people mostly ignore virtue ethics, though maybe Open Phil thinks about them as part of their "worldview diversification" approach. In that case, I think it would be useful to have a specific person serving as a community virtue ethicist instead of a bunch of people who just casually think "this seems reasonable under virtue ethics so it's robust to worldview diversification". I have no idea if that's what happens currently, but basically I agree with you.

Comment by AppliedDivinityStudies on Is it no longer hard to get a direct work job? · 2021-11-26T20:44:43.166Z · EA · GW

EA Funds is also just way bigger than it used to be https://funds.effectivealtruism.org/stats/overview

This dashboard only gives payout amounts, so I'm not sure what's happened to # of grants or acceptance rate, but the huge increase in sheer cumulative donation from last year to this one is encouraging.

Comment by AppliedDivinityStudies on Don’t wait – there’s plenty more need and opportunity today · 2021-11-24T22:32:10.101Z · EA · GW

a large-scale study evaluating our program in Kenya found each $1 transferred drove $2.60 in additional spending or income in the surrounding community, with non-recipients benefitting from the cash transfers nearly as much as recipients themselves. **Since 2018, we have asked GiveWell to fully engage with this study and others, but they have opted not to, citing capacity constraints.** [emphasis mine]

This sounded pretty concerning to me, so I looked into it a bit more.

This GiveWell post mentions that they did engage with the study, or at least with private draft results of it. An update at the top of the post does clarify that they have not reviewed the full results. They explain the decision as:

We have not made it a high priority to update this analysis because it is very unlikely to change our recommendations to donors. This is because we estimate that the grants we have recommended on the margin are at least 5 times, and in most cases at least 10 times, as cost-effective as unconditional cash transfers, so we do not anticipate that any changes to our model from investigating this factor further would be large enough to lead us to direct Maximum Impact Fund grants to GiveDirectly at this time.

I guess that decision sounds fine to me. They're basically saying that even taking the 2.6x multiple at face value, it doesn't put GiveDirectly ahead of any of their top charities, so it's not worth taking the time to fully evaluate it.

Does that seem unfair to you?

Comment by AppliedDivinityStudies on Don’t wait – there’s plenty more need and opportunity today · 2021-11-24T22:22:10.627Z · EA · GW

I want to agree with your points on delegating as much decision making directly to the affected populations, but my sense is that this is something GiveWell already thinks very seriously about, and has deeply considered.

For example, I personally felt very persuaded by some of Alex Berger's comments explaining that the advantages of buying bed-nets over direct transfers are that many of the beneficiaries are children, who wouldn't be eligible for GiveDirectly, and that ~50% of the benefits come from the positive externalities of killing mosquitoes, so people making individual choices would tend to underinvest.

I'm guessing you wouldn't find that argument compelling, or at least not sufficiently compelling, so I'd love to understand what I'm missing here, or why/how our views might differ.

Comment by AppliedDivinityStudies on Don’t wait – there’s plenty more need and opportunity today · 2021-11-24T22:19:10.837Z · EA · GW

We applaud the work they did with IDinsight to understand better preferences of potential aid recipients, but the scale and scope of this survey doesn’t go nearly far enough in correcting the massive imbalances in power and lived experience that exist in their work and in philanthropy in general.

Was happy to see you link to this. I agree the IDinsight surveys are simultaneously super useful and nowhere near enough.

My own sense is that more work in the vein of surveying people in extreme poverty to better calibrate moral weights would eventually alleviate something like 50% of my concern that donors are much wealthier than their recipients, but my interpretation of your phrasing makes me guess you would put that number at more like 5%.

What do you think would be a promising future scale and scope for surveys like this? Are those surveys being conducted? Do you worry that even much more comprehensive surveys wouldn't "go nearly far enough"?

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-24T22:10:07.092Z · EA · GW

Agreed that my arguments don't apply to donations to GiveDirectly, it's just that they're 5-10x less effective than top GiveWell charities.

I think that part of my arguments don't apply to other GiveWell charities, but the general concern still does. If AMF (or whoever) has funding capacity, why shouldn't I just count on GiveWell to fill it?

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-24T22:07:38.798Z · EA · GW

I agree EA is really good at funding weird things, but every in-group has something they consider weird. A better way of phrasing that might have been "fund things that might create PR risk for OpenPhil".

See this comment from the Rethink Priorities Report on Charter Cities:

Finally, the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial. Charter cities are likely to be financed by rich-country investors but built in low-income countries. If rich developers enforce radically different policies in their charter cities, that opens up the charge that the rich world is using poor communities to experiment with policies that citizens of the rich world would never allow in their own communities. Whether or not this criticism is justified, it would probably resonate with many socially-minded individuals, thereby reducing the appeal of charter cities.

Note the phrasing "Whether or not this criticism is justified". The authors aren't worried that Charter Cities are actually neocolonialist, they're just worried that it creates PR risk. So Charter Cities are a good example of something small donors can fund that large EA foundations cannot.

I agree that EA Funds is in a slightly weird place here since you tend to do smaller grants. Being able to refer applicants to private donors seems like a promising counter-argument to some of my criticisms as well. Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-24T22:02:26.731Z · EA · GW

But the more you think everyone else is doing that, the more important it is to give now right? Just as an absurd example, say the $46b EA-related funds grows 100% YoY for 10 years, then we wake up in 2031 with $46 trillion. If anything remotely like that is actually true, we'll feel pretty dumb for not giving to CEPI now.

Comment by AppliedDivinityStudies on A Red-Team Against the Impact of Small Donations · 2021-11-24T21:30:50.597Z · EA · GW

Hey, thanks. That's a good point.

I think it depends partially on how confident you are that Dustin Moskovitz will give away all his money, and how altruistic you are. Moskovitz seems great; I think he's pledged to give away "more than half" his wealth in his lifetime (though I can't currently find a good citation, and it might be much higher). My sense is that some other extremely generous billionaires (Gates/Buffett) also made pledges, and it doesn't currently seem like they're on track. Or maybe they do give away all their money, but it's just held by the foundation, not actually doled out to causes. And then you have to think about how foundations drift over time, and whether you think OpenPhil 2121 will have values you still agree with.

So maybe you can think of this roughly as: "I'm going to give Dustin Moskovitz more money, and trust that he'll do the right thing with it eventually". I'm not sure how persuasive that feels to people.

(Practically, a lot of this hinges on how good the next best alternatives actually are. If smart weirdos you know personally are only 1% as effective as AMF, it's probably still not worth it even if the funding is more directly impactful. Alternatively, GiveDirectly is ~10% as good as GiveWell top charities, and even then I think it's a somewhat hard sell that all my arguments here add up to a 10x reduction in efficacy. But it's not obviously unreasonable either.)

Comment by AppliedDivinityStudies on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T12:03:29.000Z · EA · GW

I believe that GiveWell/OpenPhil often try to avoid providing over 50% of a charity's funding to avoid fragility / over-reliance.

Is an upshot of that view that personal small donations are effectively matched 1:1?

I.e., suppose AMF is 50% funded by GiveWell; when I give AMF $100, I'm allowing GiveWell to give another $100 without exceeding the threshold.
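A back-of-envelope version of that guess (assuming the 50% cap is real, which is exactly what I'm asking about):

```python
# If GiveWell caps its share of a charity's budget at 50%, its maximum
# grant g satisfies g / (g + outside) <= 0.5, which rearranges to
# g <= outside. So every outside dollar raises GiveWell's ceiling by $1.

def givewell_ceiling(outside_funding):
    return outside_funding  # dollar-for-dollar under a 50% share cap

before = givewell_ceiling(1_000)
after = givewell_ceiling(1_000 + 100)  # my hypothetical $100 donation
assert after - before == 100           # an effective 1:1 match
```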

Curious if anyone could corroborate this guess.

Comment by AppliedDivinityStudies on Our Criminal Justice Reform Program Is Now an Independent Organization: Just Impact · 2021-11-20T02:03:30.943Z · EA · GW

This is great to see, a huge congratulations to everyone involved!

Side note: Sorry for a totally inane nitpick, but I was curious about the phrasing in your opening line:

Mass incarceration in America has devastated communities, particularly communities of color: 1 in 2 Americans has a family member who’s been incarcerated, 1 in 4 women in America have a loved one in jail or prison, and millions of children have a parent in prison.

From a glance at Wikipedia, US incarceration rates are 7.5x higher for males, or, among Black adults, 16.7x higher for males.

So I guess I'm confused by the decision to highlight the impact of mass incarceration on women. Sorry if that's a dumb question, just hoping to understand this better.

Comment by AppliedDivinityStudies on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-16T09:02:05.344Z · EA · GW

Makes sense, thanks!

Comment by AppliedDivinityStudies on What's the GiveDirectly of longtermism & existential risk? · 2021-11-16T09:01:16.376Z · EA · GW

Here's Will MacAskill at EAG 2020:

I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option.

Command-f for the full context on this.

Comment by AppliedDivinityStudies on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-15T14:48:05.798Z · EA · GW

Basically, funders are holding their bar significantly higher than GiveDirectly. And that’s because they believe that by waiting, we’ll be able to find and also create new opportunities that are significantly more cost effective than GiveDirectly, and therefore over the long term have a much bigger impact. So I’d say the kind of current bar of funding within GiveDirectly is more around the level of Against Malaria Foundation, which GiveWell estimates is 15 times more cost effective than GiveDirectly. So generally, charities that are around that level of cost effectiveness, that level of evidence base, and kind of good in the same other ways, have a good shot of getting funding.

Could you clarify what you mean here? Is "current bar of funding within GiveDirectly" the phrasing you intended? Is it that new interventions need the cost effectiveness of AMF, and also the scale of GiveDirectly? Sorry there's not a more specific question, I'm just generally a bit confused by the literal meaning of this paragraph as you intended it.

Comment by AppliedDivinityStudies on What does the growth of EA mean for our priorities and level of ambition? · 2021-11-15T14:34:07.073Z · EA · GW

Sorry, really silly nit. But curious if the 80k transcripts are auto generated or manually transcribed?

This one felt a bit harder to parse than the typical 80k podcast transcripts, I think because you left in pretty much all of the "ok"/"so"/"yeah" filler words.

Or it might just be because it's a public talk instead of an interview? Though I would actually expect the latter to be more conversational and less prepared.

Comment by AppliedDivinityStudies on [Linkpost] Apply For An ACX Grant · 2021-11-13T08:54:17.489Z · EA · GW

I agree that s-risks are highly neglected relative to their importance, but are they neglected by existing sources of funding? I'm genuinely asking because I'm not sure. The question is roughly:

  1. Are they currently funded by any large EA donors?
  2. Is funding a bottleneck, such that more funding would result in better results?

Comment by AppliedDivinityStudies on How many people should get self-study grants and how can we find them? · 2021-11-11T15:32:51.432Z · EA · GW

That's true, but feels less deadweight to me. You have fewer friends, but that results in more time. You move out of one town, but into another with new opportunities.

Comment by AppliedDivinityStudies on How many people should get self-study grants and how can we find them? · 2021-11-11T14:11:14.322Z · EA · GW

This is an important point. You want some barrier to entry, while also minimizing deadweight loss from signaling / credentialing. So "you can join EA and get funding, but only if you complete a bunch of arbitrary tasks" is bad, but "you can join EA and get funding, but only if you move to this town" is pretty good!

Of course it would be nice to have an EA Hotel equivalent that is more amenable to people with visa/family/health restrictions (especially now that the UK is not part of the EU and has Covid-related entry requirements), but I think it's a fairly good model for unblocking potential talent without throwing money around.

Comment by AppliedDivinityStudies on How Do We Make Nuclear Energy Tractable? · 2021-11-11T14:07:10.313Z · EA · GW

Epistemic status: loose impressions and wild guesses.

Note that this is not true across the globe. See this Wikipedia list, and command-F "202" (for 202X).

Some of the entries are plant closures (mostly US, Canada, Germany), but China has a ton of new plants. Other countries with new or planned plants include Finland, Egypt, France, Poland, Russia, Turkey, and even the US!

My loose impression is that some recent excitement is driven by Small Modular Reactors, and of course, climate change.

This chart is useful, showing that nuclear as a share of all energy plateaued in the late 80s, and in absolute terms plateaued in the early 2000s, but it doesn't show the last few years or future projections.

One final note: it's possible nuclear just isn't as good as its proponents say (or wasn't historically). Our World in Data does show that nuclear is among the safest sources (in terms of deaths per terawatt-hour), but even though deaths were low, Fukushima cleanup is estimated to cost $200 billion (similar to global annual investment in solar).

Also note that nuclear is now more expensive than both solar and wind, both of which have been consistently getting cheaper.

Nuclear, in contrast, is actually getting more expensive, possibly due to increased regulatory/safety overhead.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-11-08T08:48:00.364Z · EA · GW

That's pretty wild, especially considering getting Holden on the board was a major condition of OpenPhilanthropy's $30,000,000 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support#Details_on_Open_Philanthropy8217s_role

Though it also says the grant was for 3 years, so maybe it shouldn't be surprising that his board seat only lasted that long.

Comment by AppliedDivinityStudies on If I have a strong preference for remote work, should I focus my career on AI or on blockchain? · 2021-11-04T10:43:28.872Z · EA · GW

Hey, would recommend reading a bit more of the 80k materials https://80000hours.org/

Or starting here https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/

Of course you're free to do whatever you want with your career, but the standard EA advice is going to be to follow the 80k recommendations for high impact careers https://80000hours.org/career-reviews/

Comment by AppliedDivinityStudies on Liberty in North Korea, quick cost-effectiveness estimate · 2021-11-03T10:27:38.238Z · EA · GW

very speculative

Say you're hit by a car tomorrow and die. An angel comes down, and they don't quite offer you a second chance at life, they just offer you a day of life, with none of your current memories, as an average middle class person in South Korea.

Do you accept? I probably would, I expect the median South Korean to have a net-positive existence.

But here's the catch: you also have to spend a day as an average political dissident in North Korea. Would you take that trade? I definitely would not. I think the disutility of the second scenario far outweighs the utility of the first.

So what would the ratio have to be? I.e., how many good days in SK would you have to get in return to accept a single day living in NK? It's hard to say without a better sense of the conditions in each place, but I would genuinely guess something like 10:1. In other words, putting very rough guesses on the utility of each scenario:

  • Middle class in South Korea: 10
  • Muzak and potatoes: 0
  • Political dissident in North Korea: -100

In this view, you're not just "saving a life", you're preventing a huge amount of suffering.
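Putting those guesses into numbers (pure speculation, to be clear — the utilities are made up, and the labels are just my scenarios above):

```python
# Very rough per-day utilities from the bullets above (made-up units):
utility = {
    "middle_class_sk": 10,
    "muzak_and_potatoes": 0,
    "nk_dissident": -100,
}

# Good SK days needed to offset one NK dissident day -- the ~10:1 guess:
ratio = abs(utility["nk_dissident"]) / utility["middle_class_sk"]
assert ratio == 10

# Helping one person escape gains 110 units/day, versus 10 units/day for
# merely "saving" a neutral-to-good life:
escape_gain = utility["middle_class_sk"] - utility["nk_dissident"]
assert escape_gain == 110
```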

I'm not sure how exactly this compares to GiveWell's evaluations, or what degree of disutility they expect to prevent with interventions. Dying is bad, getting malaria and then dying is probably really horrible.

I'm not advocating running out and donating to LiNK for all the reasons mentioned by OP, but this is the chain of reasoning I would pursue more rigorously if I wanted to seriously evaluate their efficacy.

Comment by AppliedDivinityStudies on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-03T10:15:12.519Z · EA · GW

Agreed. The proper approach is probably to develop a playbook for rapidly evaluating whether or not a news cycle is worth thinking about at all, and then executing on a specific pre-determined plan when it is.

Comment by AppliedDivinityStudies on Annual donation rituals? · 2021-11-03T10:13:21.980Z · EA · GW

Ohh mugs are a great idea! I just found (ACE top charity) the Humane League's gift shop: https://thehumaneleague.org/shop

Water bottle and mug are pretty compelling.

Comment by AppliedDivinityStudies on Annual donation rituals? · 2021-11-01T09:20:56.390Z · EA · GW

One example idea might be a specific family dinner every year where we all research and discuss where we want to give and what the impact might be.

I tried this last year, spent several hours with a friend doing research... and then sighed and gave it all to GiveWell charities as usual.

FWIW, I specifically don't discuss giving with any other friends. Most of them are not EAs, and giving away a significant chunk of money would likely be alienating (for financial reasons), or scrutiny inducing ("aren't you just spending a lot to signal how good you are?"), or politically contentious ("why are you giving to these random charities when Current Political Moment deserves all our attention??").

I gave to ACE charities a while back and got a very nice hand-written card with an animal on it, which I then had up in my room for many months. That's not really a ritual, but I thought it was really great. I would frequently look at the card and immediately feel better about myself and about the world. Also, the animal was extremely cute.

Comment by AppliedDivinityStudies on Is there anyone working full-time on helping EAs address mental health problems? · 2021-11-01T09:07:28.057Z · EA · GW

There are some "EA coaches", though I'm not sure what balance they tend to strike between mental health and increasing productivity.

Scott Alexander has been compiling some articles here: https://lorienpsych.com/ This is sort of meant as an early prototype of the ideas here: https://slatestarcodex.com/2018/06/20/cost-disease-in-medicine-the-practical-perspective/ It seems pretty basic, but I've found it very useful. It's rare to have any kind of source that is simultaneously A) informal but high-trust, B) from a conventionally credentialed expert, and C) offering that advice sincerely in a way that aligns with your goals (as opposed to avoiding liability, etc.).

I'm not sure generating more material is the way to go. I don't have it all compiled, but I feel like there's plenty of EA content along the lines of "don't feel guilty all the time for not giving away all of your money".

Also FWIW, I think a big part of what you're describing is selection effects. EA might be the proximate cause of guilt, but it's a community that selects for neurotic and scrupulous people.

Comment by AppliedDivinityStudies on Has Life Gotten Better? · 2021-10-19T18:07:52.331Z · EA · GW

You write:

25 of 33 societies appear to have no possibility for female leaders. [76%] 19 of 33 societies appear to have limited or no female voice in intraband affairs. [58%]

Out of curiosity, I wanted to check how many current societies (countries) have female leaders. This wikipedia page lists 26, and there are ~195 countries total, which gives us 13%.

To weigh by population and rule out ceremonial positions, I compiled some data in this Google Sheet, which gets us that 5.44% of the world population has a female leader.

To be clear, I don't consider this a particularly strong counterpoint. You do go on to mention that even the societies with female leaders had serious gender inequality. Also, many of the countries I've listed have had female leaders in the past, or have laws allowing female leaders, so it's not as if they have "no possibility" as may have been the case in the past.

But if I were writing the article "post-agricultural gender relations seem bad", I might say something like "169 out of 195 societies have no female leaders" and "19 out of 20 people don't have a female leader", and it would sound quite bad for the modern world.

Comment by AppliedDivinityStudies on Has Life Gotten Better? · 2021-10-19T17:56:35.626Z · EA · GW

I thought this was a helpful corrective to a largely unchecked popular narrative.

It seems to me that there is a fair amount of interest in stretching thin evidence to argue that pre-agriculture societies had strong gender equality. This might be partly be coming from a fear that if people think gender inequality is "ancient" or "natural," they might conclude that it is also "good" and not to be changed.

That's part of it, but I think the stronger reason is something like "there were female leaders in the past, therefore today's gender inequality is the result of social norms".

EDIT: Also FWIW, the Wikipedia page for Sexism does note under Ancient world:

Evidence, however, is lacking to support the idea that many pre-agricultural societies afforded women a higher status than women today.

Comment by AppliedDivinityStudies on An update in favor of trying to make tens of billions of dollars · 2021-10-17T13:26:19.094Z · EA · GW

That's a good clarification, I do agree that EAs should consider becoming VCs in order to make a lot of money. I just don't think they should become VCs in order to enable earn-to-give EA founders.

Comment by AppliedDivinityStudies on An update in favor of trying to make tens of billions of dollars · 2021-10-17T10:53:00.072Z · EA · GW

This is my personal view; I understand that it might not be rigorously argued enough to be compelling to others, but I'm fairly confident in it anyway:

I literally believe that there are ~0 companies which would have been valued at $10b or more, but which do not exist because they were unable to raise seed funding.

You will often hear stories from founders who had a great idea, but the VCs were just too close-minded. I don't believe these stories. I think a founder who's unable to raise seed money is simply not formidable (as described here), and will not be able to create a successful company.

This is particularly true right now when seed money is extremely available. If you're unable to fundraise, something has gone wrong, and you probably should not be starting the company.

The strongest objection is that ~0 is not 0, and so we should create an EA VC even if the odds are really bad. I'm not that convinced, but it's possible this is correct.

Comment by AppliedDivinityStudies on An update in favor of trying to make tens of billions of dollars · 2021-10-17T07:42:14.212Z · EA · GW

See my comment here https://forum.effectivealtruism.org/posts/m35ZkrW8QFrKfAueT/an-update-in-favor-of-trying-to-make-tens-of-billions-of?commentId=MZvxZ9yrZoqAXM3Cx

Comment by AppliedDivinityStudies on An update in favor of trying to make tens of billions of dollars · 2021-10-17T07:40:50.691Z · EA · GW

Depends immensely on whether you think there are EAs who could start billion-dollar companies, but would not be able to without EA funding. I.e. they're great founders, but can't raise money from VCs. Despite a lot of hand-wringing over the years about the ineffectiveness of VCs, I generally think being able to raise seed money is a decent and reasonable test, and not arbitrary gatekeeping. The upshot being, I don't think EAs should try to start a seed fund.

You could argue that it would be worth it, solely for the sake of getting equity in very valuable companies. But at that point you're just trying to compete with VCs directly, and it's not clear that EAs have a comparative advantage.

Comment by AppliedDivinityStudies on On the assessment of volcanic eruptions as global catastrophic or existential risks · 2021-10-14T09:34:33.817Z · EA · GW

Exciting to hear about your upcoming plans, thanks!

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-14T09:24:58.543Z · EA · GW

Google does claim to be working on "general purpose intelligence" https://www.alignmentforum.org/posts/bEKW5gBawZirJXREb/pathways-google-s-agi

I do think we should be worried about DeepMind, though OpenAI has undergone more dramatic changes recently, including restructuring into a for-profit, losing a large chunk of the safety/policy people, taking on new leadership, etc.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-14T09:02:14.027Z · EA · GW

In the absence of rapid public progress, my default assumption is that "trying to build AGI" is mostly a marketing gimmick. There seem to be several other companies like this, e.g.: https://generallyintelligent.ai/

But it is possible they're just making progress in private, or might achieve some kind of unexpected breakthrough. I guess I'm just less clear about how to handle these scenarios. Maybe by tracking talent flows, which is something the AI Safety community has been trying to do for a while.

Comment by AppliedDivinityStudies on On the assessment of volcanic eruptions as global catastrophic or existential risks · 2021-10-13T16:24:08.170Z · EA · GW

Thanks so much for taking the time to write this up! I've been (casually) curious about this topic for a while, and it's great to have your expert analysis.

My main question is: How tractable are the current solutions to all of this? Are there specific next steps one could take? Organizations that could accept funding or incoming talent? Particular laws or regulations we ought to be advocating for? Those are all tough questions, but it would be helpful to have even a very vague sense of how far a unit of money/time could go towards this cause.

What is clear though is that large magnitude eruptions (mag 7+), with a cumulative probability this century of ~1 in 6, are a demonstrable global catastrophic threat and through food and resource impacts would lead to mass global suffering, as well as acting as a not insignificant existential risk factor to x-risks such as environmental damage, pandemics and nuclear wars.

Not sure how others will respond, but just to offer one data point: this was really surprisingly high to me.

if you know literature that may be connected with some of the themes we cover, then please let us know.

The only thing that jumps to mind is Luisa Rodriguez's work on famines during a civilizational collapse or nuclear winter: https://forum.effectivealtruism.org/posts/GsjmufaebreiaivF7/what-is-the-likelihood-that-civilizational-collapse-would

Not quite the same, but as you mention, "the closest analogy is nuclear war scenarios". They feel similar in that the worst case scenarios seem to be various hard to predict follow-on effects, e.g. there's a resource shortage, people panic and chaos ensues.

large explosive eruptions have a range of different effects and impacts which in themselves represent clear global catastrophic risks or ‘s-risks’, leading to extensive loss of life.

Minor nit: As I understand the term's usage, the events you describe would probably not entirely qualify. One org describes "s-risk" as "risks of cosmically significant amounts of suffering". A few other things I've read focus on really astronomically large (in terms of population or timescale), almost science-fiction-esque scenarios, for example, colonizing the galaxy, producing 10^50 humans, and then torturing them all for a trillion years. But I'm not 100% confident that's the canonical definition, so your usage might be totally fine.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-13T11:59:24.714Z · EA · GW

Happy to see they think this should be discussed in public! Wish there was more on questions #2 and #3.

Also very helpful to see how my question could have been presented in a less contentious way.

Comment by AppliedDivinityStudies on Progress studies vs. longtermist EA: some differences · 2021-10-13T09:22:00.759Z · EA · GW

Hey sorry for the late reply, I missed this.

Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them overly compelling.

In practice, I think the particulars matter a lot. If you were to say, make progress on a cost-effective malaria vaccine, it's hard to argue that it'll end up bringing about superintelligence in the next couple decades. But it depends on your time scale. If you think AI is more on a 100 year time horizon, there might be more reason to be worried about growth.

Re: DTD, I think it depends on global coordination much more than EA/XR people tend to think.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-12T11:37:34.414Z · EA · GW

dynamics of Musk at the creation of OpenAI, not recent events or increasing salience

Thanks, this is a good clarification.

It is hard to tell if the OP has a model of AI safety or insight into what the recent org dynamics mean, all of which are critical to his post having meaning.

You're right that I lack insight into what the recent org dynamics mean, this is precisely why I'm asking if anyone has more information. As I write at the end:

To be clear, I'm not advocating any of this. I'm asking why you aren't. I'm seriously curious and want to understand which part of my mental model of the situation is broken.

The quotes from Paul are helpful, I don't read LW much and must have missed the interview, thanks for adding these. Having said that, if you see u/irving's comment below, I think it's pretty clear that there are good reasons for researchers not to speak up too loudly and shit talk their former employer.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-11T16:11:09.280Z · EA · GW

Thanks for the recommendation. I spent about an hour looking for contact info, but was only able to find 5 public addresses of ex-OpenAI employees involved in the recent exodus. I emailed them all, and provided an anonymous Google Form as well. I'll provide an update if I do hear back from anyone.

Comment by AppliedDivinityStudies on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-11T04:15:24.599Z · EA · GW

Is it that I'm out of touch, missing recent news, and OpenAI has recently convincingly demonstrated their ongoing commitment to safety?

This turns out to be at least partially the answer. As I'm told, Jan Leike joined OpenAI earlier this year and does run an alignment team.