Posts

What is the role of public discussion for hits-based Open Philanthropy causes? 2021-08-04T20:15:28.182Z
Writing about my job: Internet Blogger 2021-07-19T20:24:31.357Z
Does Moral Philosophy Drive Moral Progress? 2021-07-02T21:22:24.111Z
Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare 2021-06-04T21:08:11.200Z
Base Rates on United States Regime Collapse 2021-04-05T17:14:22.775Z
Responses and Testimonies on EA Growth 2021-03-10T23:22:16.613Z
Why Hasn't Effective Altruism Grown Since 2015? 2021-03-09T14:43:01.316Z

Comments

Comment by AppliedDivinityStudies on How to Train Better EAs? · 2021-08-05T15:40:09.398Z · EA · GW

There's the CFAR workshop, but it's just a 4-day program. (Though it would take longer to read all of Yudkowsky's writing.)

I'm no expert, but on one plausible reading, US military training is primarily about cultivating obedience and conformity. Of course some degree of physical conditioning is genuinely beneficial, but when's the last time a Navy SEAL got into a fistfight?

For most of the EA work that needs to get done (at the moment), having an army of replaceable, high-discipline drones is not actually that useful. A lot of the movement hinges on a relatively small number of people acting with integrity and thinking creatively.

Instead of intense training processes, EA at the moment relies on a really intense selection process. So the people who end up working in EA orgs have mostly already taught themselves the requisite discipline, work ethic and so on.

Comment by AppliedDivinityStudies on What is the role of public discussion for hits-based Open Philanthropy causes? · 2021-08-04T23:33:51.236Z · EA · GW

the same criticism applies to the large Open Phil spending on specific scientific bets.

Sorry, just to clarify again (and on the topic of swearing fealty), I don't mean any of this as a criticism of Open Phil. I agree enthusiastically with the hits-based giving point, and generally think it's good for at least some percentage of philanthropy to be carried out without the expectation of full transparency and GiveWell-level rigor.

It's unclear how we would expect a public forum discussion to substantially influence any of the scientific granting above.

I think that's what I'm saying. It's unclear to me if EA Forum, and public discussions more generally, play a role in this style of grant-making. If the answer is simply "no", that's okay too, but would be helpful to hear.

these orgs would be easy to talk about.

I agree that there are avenues for discussion. But it's not totally clear to me which of these are both useful and appropriate. For example, I could write a post on whether or not the constructivist view of science is correct (FWIW I don't believe Alexey actually holds this view), but it's not clear that the discussion would have any bearing on the grant-worthiness of New Science.

Again, maybe EA Forum is simply not a place to discuss the grant-worthiness of HOP-style causes, but the recent discussion of Charter Cities made me think otherwise.

I think your post is great honestly

Thanks!

Comment by AppliedDivinityStudies on What is the role of public discussion for hits-based Open Philanthropy causes? · 2021-08-04T23:25:27.233Z · EA · GW

I also don't know for sure, but this example might be illustrative:

Ought General Support:

Paul Christiano is excited by Ought’s plan and work, and we trust his judgement.

And:

We have seen some minor indications that Ought is well-run and has a reasonable chance at success, such as: an affiliation with Stanford’s Noah Goodman, which we believe will help with attracting talent and funding; acceptance into the Stanford-Startx4 accelerator; and that Andreas has already done some research, application prototyping, testing, basic organizational set-up, and public talks at Stanford and USC.

So it's not really a big expected value calculation. It's more like:

  • We consider AI Safety to be very important
  • A trusted advisor is excited
  • Everything checks out at the operational level

It might not follow point-by-point, but I can imagine how a similar framework might apply to New Science / QRI / Charter Cities.

Returning to the original point: As far as I can tell, these are not the kinds of issue we are (or should be) discussing on EA Forum. I could be wrong, but it's hard to imagine endorsing a norm where many top EA Forum posts are of the form "I talked to Alexey Guzey from New Science, it seems exciting" or worse "I talked to Adam Marblestone about New Science, and he seems excited about it".

Full disclosure, I did talk to Alexey about New Science and it did seem exciting. I also talked to Andrew at QRI and Mark at Charter Cities, and they all seemed exciting! But precisely the point of this question is to figure out how I'm supposed to frame that endorsement in a way that is both appropriate and useful.

Comment by AppliedDivinityStudies on Writing about my job: Internet Blogger · 2021-07-29T15:46:41.175Z · EA · GW

It's really hard to tell if my writing has had any impact. I think it has, but it's often in the form of vague influence that's difficult to verify. And honestly, I haven't tried very hard because I think it's potentially harmful in the short run to index too heavily on any proxy metric. For example, I don't even track page views.

Though I have talked to some EA people who mostly told me to keep blogging, rather than pursuing any of the other common paths. Some people did recommend that I pursue the Future Perfect Fellowship, which I think is likely to be super high impact, but it just wasn't a good fit for me.

I didn't think a lot about it. It was basically "Scott Alexander has a good blog, some EA people have good blogs, this seems to be a worthwhile activity".

One way to explain it is as self-mentorship. Todd's latest report indicates that EA really is talent constrained, and specifically senior-talent constrained. Unfortunately, the senior talent pipeline is not that healthy right now, largely because there is a lack of senior talent available to mentor junior talent in the first place. So blogging is one path to eventually becoming senior talent without taxing EA resources, and it effectively creates new capacity out of nowhere.

On that path, some good next steps could be to:

  • Do more consulting for EA orgs
  • Directly work on a large research project, once this seems manageable
  • Eventually try to hire/mentor even more junior people
Comment by AppliedDivinityStudies on Writing about my job: Data Scientist · 2021-07-21T03:42:19.995Z · EA · GW

I would guess the US market (at least those reporting on Glassdoor) skews heavily toward SF/NYC, maybe Seattle.

Comment by AppliedDivinityStudies on Writing about my job: Internet Blogger · 2021-07-20T17:24:52.206Z · EA · GW

Thanks! That's one perk I neglected to mention. You can try blogging in your spare time without much commitment. Though I do think it's a bit risky to do it half-heartedly, get disappointed in the response, and never find out what you would be capable of if you went full time.

There are lots of bloggers who definitely don't do independent research, but within the broader EA space it's a really blurry line. One wacky example is Nadia Eghbal, whose writing products include tweets, notes, a newsletter, blog posts, a 100-page report, and a book.

The journalism piece is interesting. Previously I would have said there are mainstream journalists, and then small-scale citizen journalists who focus on hyperlocal reporting or something. Now so many high profile journalists have gone to Substack to do something that is often opinion-writing, but sometimes goes beyond that.

In the past, I also would have said that journalists have more of a responsibility to be impartial, be the view from nowhere, etc. That seems less true today, but it's possible I'm conflating op-eds with "real reporting", and an actual journalist would tell you that there are still clear boundaries.

Comment by AppliedDivinityStudies on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-07-20T16:59:19.675Z · EA · GW

This is a very good longtermist piece. Is the short-termist interpretation that we should try very hard to clone John von Neumann?

Comment by AppliedDivinityStudies on Writing about my job: Internet Blogger · 2021-07-20T16:51:38.128Z · EA · GW
  1. It depends on your skillset. My impression is that EA is not really talent constrained, with regards to the talents I currently have. So I would have a bit to offer on the margins, but that's all. I also just don't think I'm nearly as productive when working on a specific set of goals, so there's some tradeoff there. I'm interested in doing RSP one day, and might apply in the future. In theory I think the Vox Future Perfect role could be super high impact.

  2. I probably should.

  3. The short answer is that it's an irreversible decision, so I'm being overly cautious. But mostly it's aesthetic: I like Ender's Game, Death Note, etc.

  4. X-risk = Applied Eschatology. Progress Studies = Applied Theodicy.

Comment by AppliedDivinityStudies on Writing about my job: Internet Blogger · 2021-07-20T16:36:14.754Z · EA · GW

I've wanted to do this for a while, but haven't yet amassed enough material on a topic to consider it a very coherent work. But someday...

Comment by AppliedDivinityStudies on Writing about my job: Internet Blogger · 2021-07-20T03:03:04.687Z · EA · GW

Thanks!

Prior to blogging, I had a day job for a while and lived pretty frugally. I told myself I was investing the money to donate eventually, and did eventually donate some, but kept the bulk of it. So when I first started blogging I already had enough to live on for a while. Then I got the EV grant, and a bit of additional private funding. So long story short, it's not stressful, but it is something I think about. I'm not 100% sure what the long term strategy will be, but based on the feedback I've gotten so far, I think it's likely I'll be able to continue getting grants/donations.

Comment by AppliedDivinityStudies on Writing about my job: Data Scientist · 2021-07-19T17:57:40.253Z · EA · GW

Thanks for the writeup. Minor point about salary: is £41k entry-level typical for London? According to Glassdoor, average base pay for the US is $116k, equivalent to about £85k. Their page for Data Scientists in London puts the average at £52k.

I get that this is an average over all levels of seniority, but it's also just base pay. My impression from Levels.fyi is that at large US companies, base pay is only around 67-75% of total compensation.
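To make the gap concrete, here's a rough back-of-the-envelope sketch using only the numbers already quoted above (the exchange rate is just the one implied by $116k ≈ £85k, and the 67-75% share is the Levels.fyi range), so treat the outputs as very approximate:

```python
# Rough comparison using the figures quoted above (approximate, not authoritative).
usd_per_gbp = 116_000 / 85_000              # exchange rate implied by $116k ~ £85k

us_avg_base_gbp = 116_000 / usd_per_gbp     # Glassdoor US average base pay, in GBP (~£85k)
london_avg_base_gbp = 52_000                # Glassdoor London average base pay
uk_entry_gbp = 41_000                       # entry-level figure from the post

# Levels.fyi suggests base pay is ~67-75% of total comp at large US companies,
# so gross the US base up to an estimated total-compensation range.
us_total_low = us_avg_base_gbp / 0.75       # ~£113k
us_total_high = us_avg_base_gbp / 0.67      # ~£127k

print(f"US average total comp: ~£{us_total_low:,.0f} to £{us_total_high:,.0f}")
print(f"vs London average base £{london_avg_base_gbp:,} and UK entry level £{uk_entry_gbp:,}")
print(f"US total comp is roughly {us_total_low / uk_entry_gbp:.1f}x to {us_total_high / uk_entry_gbp:.1f}x the UK entry-level figure")
```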

So I guess what I'm asking is, given your experience, which of the following statements would you agree with:

  • The aggregate data is wrong or misleading
  • You're being underpaid
  • There really is a huge pay difference between the UK and US
  • Something else?
Comment by AppliedDivinityStudies on New blog: Cold Takes · 2021-07-13T18:52:23.843Z · EA · GW

Exciting to hear!

Minor UI nit: I found the grey Sign up button slightly confusing and initially thought it was disabled.

Comment by AppliedDivinityStudies on Can money buy happiness? A review of new data · 2021-07-08T18:12:28.523Z · EA · GW

Rohin, I thought this was super weird too. Did a bit more digging and found this blog post: https://kieranhealy.org/blog/archives/2021/01/26/income-and-happiness/

if the figure is showing a subset of the two (i.e. only observations from people who answered both questions) then the z-score means across income levels will be slightly different, depending on who is excluded.

The author (who is an academic) agrees this is a bit weird, and notes "small-n noisiness at high incomes".
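The mechanics are easy to reproduce with simulated data. Here's a toy sketch (made-up numbers, nothing to do with the actual survey data) showing how z-scores standardized on the full sample stop averaging to zero once you restrict to the subset who answered both questions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a well-being score, plus an indicator for answering a second question,
# where (arbitrarily) happier respondents are more likely to answer it.
n = 10_000
wellbeing = rng.normal(size=n)
answered_both = rng.random(n) < np.where(wellbeing > 1, 0.8, 0.5)

# Standardize on the full sample, then look only at the restricted subset.
z = (wellbeing - wellbeing.mean()) / wellbeing.std()
print(f"mean z, full sample:   {z.mean():+.3f}")                  # ~0 by construction
print(f"mean z, answered both: {z[answered_both].mean():+.3f}")   # shifted above 0
```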

So overall, I see the result as plausible but not super robust. Though note that in alignment with Kahneman/Deaton, Life Satisfaction does continue to increase even as Experienced Wellbeing dips.

Comment by AppliedDivinityStudies on People working on x-risks: what emotionally motivates you? · 2021-07-06T00:14:22.748Z · EA · GW

Personally:

  • 5% internalized consequences
  • 45% intellectual curiosity
  • 50% status

I'm sort of joking. Really, I think it's that "motivation" is at least a couple things. In the grand scheme of things, I tell myself "this research is important". Then day to day, I think "I've decided to do this research, so now I should get to work". Then every once in a while, I become very unmotivated, and I think about what's actually at stake here, and also about the fact that some Very Important People I Respect tell me this is important.

Comment by AppliedDivinityStudies on Does Moral Philosophy Drive Moral Progress? · 2021-07-06T00:11:41.542Z · EA · GW

Thanks, this is a good comment.

Comment by AppliedDivinityStudies on Is an increase in attention to the idea that 'suffering is bad' likely to increase existential risk? · 2021-07-01T20:25:28.524Z · EA · GW

A million years of "state of nature"-type pain is strongly preferable to s-risks.

Comment by AppliedDivinityStudies on Is an increase in attention to the idea that 'suffering is bad' likely to increase existential risk? · 2021-06-30T20:26:25.405Z · EA · GW

This is a good question, but I worry you can make this argument about many ideas, and the cost of self-censorship is really not worth it. For example:

  • If we talk too much about how much animals are suffering, someone might conclude humans are evil
  • If we talk too much about superintelligence, someone might conclude AI is superior and deserves to outlive us
  • If we talk too much about the importance of the far future, a maximally evil supervillain could actually become more motivated to increase x-risk

As a semi-outsider working on the fringes of this community, my impression is that EA is way too concerned about what is good/bad to talk about. There are ideas, posts and words with negative EV in the short run, but I feel that's all outweighed by the virtue of vigorous debate and capacity for free thinking.

On a more serious note, I am philosophically concerned about the argument "the possibility of s-risks implies we should actually increase x-risk", and am actively working on this. Happy to talk more if it's of mutual interest.

Comment by AppliedDivinityStudies on What would an entity with GiveWell's decision-making process have recommended in the past? · 2021-06-29T03:48:27.301Z · EA · GW

A couple relevant pieces: In this talk, Tyler Cowen talks about how impartial utilitarianism makes sense today since we can impact humans far from ourselves (in both time and space), but how deontology may have been more sensible in the distant past.

In this talk, Devin Kalish argues that utilitarianism is the correct moral theory on the basis of its historical track record. He argues that utilitarianism correctly "predicted" now widely recognized ethical positions (women's rights, anti-slavery, etc).

So I think it's interesting to ask: if GiveWell had been around 200 years ago, what would they have recommended, and in hindsight, would that have been the correct cause to advocate for?

One common criticism of EA is that it focuses too much on incremental rather than Systemic Change. We might worry that in 1800, GiveWell would have advocated for better farming practices, but not for abolition, though in retrospect, the latter seems to have been more important.

This is more or less the point Patrick Collison makes here when he says: "It's hard for me to see how writing a treatise on human nature would score really highly in an EA framework, and yet, ex-post, that looks like a really valuable thing for a human to do. And similarly, when we look at things that in hindsight seem like very good things to have happened, it's unclear to me how an EA intuition would have caused someone to do so"

Overall, I don't think this is a super damning criticism. The world has changed. It's more legible, and more subject to utilitarian calculus.

But still, it's an interesting question.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T03:45:11.566Z · EA · GW

Yes that's a good point, as Scott argues in the linked post:

The moral of the story is: if there's some kind of weird market failure that causes galaxies to be priced at $1, normal reasoning stops working; things that do incalculable damage can be fairly described as "only doing $1 worth of damage", and you will do them even if less damaging options are available.

GiveWell notes that their analysis should only really be taken as a relative measure of cost-effectiveness. But even putting that aside, you're right that it doesn't imply human lives are cheap or invaluable.

Actually, I pretty much agree with all your points. But a better analogy might be "is it okay to murder someone to prevent another murder?" That's a much fuzzier line, and you can extend this to all kinds of absurd trolley-esque scenarios. In the animal case, it's not that I'm murdering someone in cold blood and then donating some money. It's that I'm causing one animal to be produced, and then causing another animal not to be. So it is much closer to equivalent.

To be clear again, the specific question this analysis addresses is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"

And of course, there are plenty of reasons murder seems especially repugnant. You wouldn't want rich people to be able to murder people effectively for free. You wouldn't want people getting revenge on their coworkers. You wouldn't want to allow a world where people have to live in fear, etc etc etc. So I don't think it's a particularly useful intuition pump.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T03:30:57.483Z · EA · GW

This is very specifically attempting to compile some existing analysis on whether it's better to eat chicken or beef, incorporating ethical and environmental costs, and assuming you choose to offset both harms through donations.

In the future, I would like to aggregate more analysis into a single model, including the one you link.

As I understand it (this might be wrong), what we have currently is a bunch of floating analyses, each mostly focused on the cost-effectiveness of a specific intervention. Donors can then compare those analyses and make a judgement about where best to give their money.

Where the GiveWell-style monolithic CEA succeeds is in ensuring that a similar approach is used to produce analyses that are genuinely comparable, and in giving readers the opportunity to adjust subjective moral weights. That's my ultimate goal with this project, but it will likely take some time.

This was maybe a premature release, but so far the feedback has already been useful.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T03:26:55.616Z · EA · GW

Yeah, I'm hopeful that this is correct, and plan to incorporate other intervention impact estimates soon.

For that particular post, Saulius is talking about "lives affected", e.g. chickens having more room, as described here: https://www.compass-usa.com/compass-group-usa-becomes-first-food-service-company-commit-100-healthier-slower-growing-chicken-2024-landmark-global-animal-partnership-agreement/

I don't yet have a good sense of how valuable this is vs. the chicken not being produced in the first place, and I think this will end up being a major point of contention. My intuitive personal sense is that chicken lives are not "worth living" (i.e. ethically net positive) even if they are receiving the listed enrichments, but others would disagree: https://nintil.com/on-the-living-standards-of-animals-in-the-united-kingdom

But overall I'm optimistic that there are or could be much more cost-effective interventions than the one I looked at.

If true, this wouldn't change the cow/chicken analysis, but would make me much more favorable towards eating meat + offsets as opposed to eating more expensive plant-based alternatives. As noted elsewhere, of course the optimific action is still to be vegan and also donate anyway.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T03:14:09.084Z · EA · GW

Yes good question! Cow lives are longer, and cows are probably more "conscious" (I'm using that term loosely), but their treatment is generally better than that of chickens.

For this particular calculation, the "offset" isn't just an abstract moral good, it's attempting to decrease cow/chicken production respectively. E.g. you eat one chicken, donate to a fund that reduces the number of chickens produced by one, and the net ethical impact is 0 regardless of farming conditions.

That convenience is part of the reason I chose to start with this analysis, but it's certainly something I'll have to consider for future work.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-05T21:42:14.995Z · EA · GW

Sorry yes, "saving a life" means some kind of intervention that leads to fewer animals going through factory farming. The estimate I'm using is from: https://forum.effectivealtruism.org/posts/9ShnvD6Zprhj77zD8/animal-equality-showed-that-advocating-for-diet-change-works

And yes, it is definitely better to just be vegan and not eat meat at all. This analysis is purely aimed at answering the chicken vs cow question.

Comment by AppliedDivinityStudies on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-05T21:40:15.532Z · EA · GW

Sorry about all that, changed the title to "Give Well-style".

Agreed on the other title as well. I made some notes on this in the follow-up post and acknowledged that I could have picked a better title. https://forum.effectivealtruism.org/posts/xedQto46TFrSruEoN/responses-and-testimonies-on-ea-growth

Thanks for the feedback, I appreciate the note and will think more about this in the future. FWIW I typically spend a lot of time on the post, very little time on the title, even though the title is probably read by way more people. So it makes sense to re-calibrate that balance a bit.

Comment by AppliedDivinityStudies on My current impressions on career choice for longtermists · 2021-06-04T20:41:32.194Z · EA · GW

I mostly agree, though I would add: spending a couple years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements on existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push working at a small high-quality startup. Of course, the difficulty is that those are harder to identify.

Comment by AppliedDivinityStudies on Progress studies vs. longtermist EA: some differences · 2021-06-04T20:38:17.463Z · EA · GW

Thanks! I think that's a good summary of possible views.

FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven't been quite ready to express them publicly, and I don't think they're endorsed by other members of the Progress community.

Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I'm heavily paraphrasing there.

He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.

Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:

  • Ramp up high-skilled immigration (especially from China, especially in AI, biotech, EE and physics) by expanding visa access and proactively recruiting scientists
Comment by AppliedDivinityStudies on My current impressions on career choice for longtermists · 2021-06-04T19:54:53.122Z · EA · GW

Thanks for the writeup Holden, I agree that this is a useful alternative to the 80k approach.

On the conceptual research track, you note "a year of full-time independent effort should be enough to mostly reach these milestones". How do you think this career evolves as the researcher becomes more senior? For example, Scott Alexander seems to be doing about the same thing now as he was doing 8 years ago. Is the endgame for this track simply that you become better at doing a similar set of things?

Comment by AppliedDivinityStudies on Constructive Criticism of Moral Uncertainty (book) · 2021-06-04T16:06:40.150Z · EA · GW

Thanks for these notes! I found the chapter on Fanaticism notable as well. The authors write:

A better response is simply to note that this problem arises under empirical uncertainty as well as under moral uncertainty. One should not give 0 credence to the idea that an infinitely good heaven exists, which one can enter only if one goes to church; or that it will be possible in the future through science to produce infinitely or astronomically good outcomes. This is a tricky issue within decision theory and, in our view, no wholly satisfactory solution has been provided. But it is not a problem that is unique to moral uncertainty. And we believe whatever is the best solution to the fanaticism problem under empirical uncertainty is likely to be the best solution to the fanaticism problem under moral uncertainty. This means that this issue is not a distinctive problem for moral uncertainty.

I agree with their meta-argument, but it is still a bit worrying. Even if you reduce the unsolvable problems of your field to unsolvable problems in another field, I'm still left feeling concerned that we're missing something important.

In the conclusion, the authors call for more work on really fundamental questions, noting:

But it’s plausible that the most important problem really lies on the meta-level: that the greatest priority for humanity, now, is to work out what matters most, in order to be able to truly know what are the most important problems we face.

Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.

Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction—probably less than 0.05%—goes to directly addressing the question: What ought we to do?

I do wonder, given the historical examples they cite, whether purely philosophical progress was the limiting factor. Mary Wollstonecraft and Jeremy Bentham made compelling arguments for women's rights in the 1700s, but it took another couple of hundred years for progress to occur in the legal and socioeconomic spheres.

Maybe it's a long march, and progress simply takes hundreds of years. The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost and flavor competitive alternatives.

It's tempting to wash away our past atrocities under the guise of ignorance, but I'm worried humanity just knowingly does the wrong thing.

Comment by AppliedDivinityStudies on The Place of Stories Within EA: "The Egg" · 2021-06-03T22:31:10.731Z · EA · GW

You might be familiar with Bostrom's Fable of the Dragon Tyrant https://www.nickbostrom.com/fable/dragon.html

And of course, Yudkowsky's fiction, while not exactly EA, was inspiring to many people.

In some ways, the EA creed requires being against empathy: we can't just care for those close to us, or those with sympathetic stories. But of course that kind of impartiality is also a story. So at the very least, fiction is useful as a kind of reverse mind-control or intuition pump.

For what it's worth, in this particular instance, I don't find "impartiality" to be a useful source of emotional motivation. Working on animal welfare for example, you might find it more helpful to develop selective empathy post-hoc.

That sounds silly, but it's basically just the reverse of what people typically do. Normally we form emotional judgements and then rationalize them after the fact; there's no reason you can't do the opposite.

Comment by AppliedDivinityStudies on Progress studies vs. longtermist EA: some differences · 2021-06-02T21:42:03.499Z · EA · GW

Thanks for clarifying; the delta thing is a good point. I'm not aware of anyone really trying to estimate "what are the odds that MIRI prevents XR", though there is one SSC post sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

I absolutely agree with all the other points. This isn't an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: "People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later... the philosophical side of this seems like ineffective posturing.

Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

That's a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands-on and practical.

Re: safety for something that hasn't been invented: I'm not an expert here, but my understanding is that some of it might be path-dependent. I.e. research agendas hope to result in particular kinds of AI, and safety is not necessarily a feature you can just add on later. But it doesn't sound like there's a deep disagreement here, and in any case I'm not the best person to try to argue this case.

Intuitively, one analogy might be: we're building a rocket, humanity is already on it, and the AI Safety people are saying "let's add life support before the rocket takes off". The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.

Comment by AppliedDivinityStudies on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T21:32:43.678Z · EA · GW

Good to hear!

In the abstract, yes, I would trade 10,000 years for 0.001% reduction in XR.
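For scale, that figure falls straight out of the Bostrom passage quoted in another comment below, scaling his one-percentage-point number linearly (and reading the "0.001%" as an absolute 0.001 percentage points, which is how I meant it):

```python
# Bostrom's "Astronomical Waste" figure: one percentage point of x-risk
# reduction is worth a delay of over 10 million years. Scaled linearly:
years_per_percentage_point = 10_000_000
reduction_in_percentage_points = 0.001   # the "0.001%" above
print(years_per_percentage_point * reduction_in_percentage_points)  # 10000.0 years
```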

In practice, I think the problem with this kind of Pascal Mugging argument is that it's really hard to know what a 0.001% reduction looks like, and really easy to do some fuzzy Fermi estimate math. If someone were to say "please give me one billion dollars, I have this really good idea to prevent XR by pursuing Strategy X", they could probably convince me that they have at least a 0.001% chance of succeeding. So my objections to really small probabilities are mostly practical.

Comment by AppliedDivinityStudies on Help me find the crux between EA/XR and Progress Studies · 2021-06-02T20:00:38.675Z · EA · GW

I see myself as straddling the line between the two communities. More rigorous arguments at the end, but first, my offhand impressions of what I think the median EA/XR person believes:

  • Ignoring XR, economic/technological progress is an immense moral good
  • Considering XR, economic progress is somewhat good, neutral at worst
  • The solution to AI risk is not "put everything on hold until we make epistemic progress"
  • The solution to AI risk is to develop safe AI
  • In the meantime, we should be cautious of specific kinds of development, but it's fine if someone wants to go and improve crop yields or whatever

As Bostrom wrote in 2003: "In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development."

"However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years." https://www.nickbostrom.com/astronomical/waste.html

With regards to poverty reduction, you might also like this post in favor of growth: http://reflectivedisequilibrium.blogspot.com/2018/10/flow-through-effects-of-innovation.html

Comment by AppliedDivinityStudies on Progress studies vs. longtermist EA: some differences · 2021-06-02T18:37:05.101Z · EA · GW

As Peter notes, I've written about the issue of x-risk within Progress Studies at length here: https://applieddivinitystudies.com/moral-progress/

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

For what it's worth, I do think there are compelling arguments, I just haven't seen them made elsewhere. For example:

  • If the US/UK research community doesn't progress rapidly in AI development, we may be overtaken by less careful actors
Comment by AppliedDivinityStudies on Progress studies vs. longtermist EA: some differences · 2021-06-02T18:34:39.426Z · EA · GW

Hey Jason, I share the same thoughts on pascal-mugging type arguments.

Having said that, The Precipice convincingly argues that the x-risk this century is around 1/6, which is really not very low. Even if you don't totally believe Toby, it seems reasonable to put the odds at that order of magnitude, and it shouldn't fall into the 1e-6 type of argument.

I don't think the Deutsch quotes apply either. He writes "Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology".

That might be true when it comes to warring human civilizations, but not when it comes to global catastrophes. In the past, there was no way to say "let's not move on to the bronze age quite yet", so any individual actor who attempted to stagnate would be dominated by more aggressive competitors.

But for the first time in history, we really do have the potential for species-wide cooperation. It's difficult, but feasible. If the US and China manage to agree to a joint AI resolution, there's no third party that will suddenly sweep in and dominate with their less cautious approach.

Comment by AppliedDivinityStudies on What key facts do you find are compelling when talking about effective altruism? · 2021-04-19T18:04:52.576Z · EA · GW

Open Phil has given a total of $140 million to "Potential Risks from Advanced Artificial Intelligence" over all time.

By comparison, some estimates from Nature put "climate-related financing" at around $500 billion annually. That's more than 3,500 times Open Phil's all-time total, and that's comparing a single year of climate funding against all the AI funding ever given.

So even if you think that Climate Change is much more pressing than AI Safety, you might agree that the latter is much more neglected.

Also note that the majority of that Open Phil funding went to either CSET or OpenAI. CSET is more focused on short-term arms races and international power struggles, and OpenAI only has a small safety team. So even of the $140 million, only a fraction is going to technical AI Safety research.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T19:44:37.934Z · EA · GW

That's a good way of framing it. I absolutely agree that individuals and groups should reflect on whether or not their time is being spent wisely.

Here are some possible failure modes. I am not saying that any of these are occurring in this particular situation. As a naive outsider looking in, this is merely what springs to mind when I consider what might happen if this type of publishing were to become commonplace.

  • Imagine I am a mildly prominent academic. One day, a colleague sends me a draft of a paper, asking if I would like to co-author it. He tells me that the other co-authors include Yew-Kwang Ng, Toby Ord, Hilary Greaves and other superstars. I haven't given the object-level claims much thought, but I'm eager to associate with high-status academics and get my name on a publication in Utilitas. I go ahead and sign off.

  • Imagine I am a junior academic. One day, I have an insight that may lead to an important advance in population ethics, but it relies on some discussion of the Repugnant Conclusion. As I discuss this idea with colleagues, I'm directed to this many-authored paper indicating that we should not pay too much attention to the Repugnant Conclusion. I don't take issue with any of the paper's object-level claims; I simply believe that my finding is important whether or not it's in a subfield that has received "too much focus". My colleagues have no opinion on the matter at hand, but keep referring me to the many-authored paper anyway, mumbling something about expert consensus. In the end, I'm persuaded not to publish.

  • Imagine I am a very prominent academic with a solid reputation. I now want to raise more grant funding for my department, so I write a short draft making the claim that my subfield has received too little focus. I pass this around to mildly prominent academics, who sign off on the paper in order to associate with me and get their name on a publication in Utilitas. With 30 prominent academics on the paper, no journal would dare deny me publication.

Again, my stance here is not as an academic. These are speculative failure modes, not real scenarios I've seen, and certainly not real accusations I'm making of the specific authors in question here. My goal is to express what I believe to be a reasonable discomfort, and seek clarification on how the academic institutions at play actually function.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T19:28:45.930Z · EA · GW

Thanks Dean! Good to hear from you.

I hope you don't feel like I'm misrepresenting this paper. To be clear, I am referring to "What Should We Agree on about the Repugnant Conclusion?", which includes the passages:

  • "We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature."
  • "It is not simply an academic exercise, and we should not let it be governed by undue attention to one consideration. "

That is from the introduction and conclusion. I'm not sure if that constitutes the "main claim". I may have been overreaching to say that it "basically" only serves as a call for less attention. As I noted in the comment, my intention was never to lend too much credence to that particular claim.

I fully agree with your points on the interdisciplinarity of population ethics and the unavoidability of incentives.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T09:01:02.523Z · EA · GW

I received a nice reply from Dean which I've asked if I can share. Assuming he says yes, I'll have a more thought out response to this point soon.

Here are some quick thoughts: There are many issues in all academic fields, the vast majority of which are not paid the appropriate amount of attention. Some are overvalued, some are unfairly ignored. That's too bad, and I'm very glad that movements like EA exist to call more attention to pressing research questions that might otherwise get ignored.

What I'm afraid of is living in a world where researchers see it as part of their charter to correct each of these attentional inexactitudes, and do so by gathering bands of other academics to many-author a paper which basically just calls for a greater/lesser amount of attention to be paid to some issue.

Why would that be bad?

  1. It's not a balanced process. Unlike the IGM Experts Panel, no one is being surveyed and there's no presentation of disagreement or distribution of beliefs over the field. How do we know there aren't 30 equally prominent people willing to say the Repugnant Conclusion is actually very important? Should they go out and many-author their own paper?
  2. A lot of this is very subjective, you're just arguing that an issue receives more/less attention than is merited. That's fine as a personal judgement, but it's hard for anyone else to argue against on an object-level. This risks politicization.
  3. There are perverse incentives. I'm not claiming that's what's at play here, but it's a risk this precedent sets. When academics argue for the (un)importance of various research questions, they are also arguing for their own tenure, departmental funding, etc. This is an unavoidable part of the academic career, but it should be limited to careerist venues, not academic publications.

Again, those are some quick thoughts from an outsider, so I wouldn't attach too much credence to them. But I hope that helps explain why this strikes me as somewhat perilous.

Once shared, I think Dean's response will show that my concerns are, in practice, not very serious.

Comment by AppliedDivinityStudies on My personal cruxes for focusing on existential risks / longtermism / anything other than just video games · 2021-04-15T18:46:47.831Z · EA · GW

This is a super interesting exercise! I do worry how much it might bias you, especially in the absence of equally rigorously evaluated alternatives.

Consider the multiple stage fallacy: https://forum.effectivealtruism.org/posts/GgPrbxdWhyaDjks2m/the-multiple-stage-fallacy

If I went through any introductory EA work, I could probably identify something like 20 claims, all of which must hold for the conclusions to have moral force. It would then feel pretty reasonable to assign each of those claims somewhere between 50% and 90% confidence.

That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%. What explains the discrepancy?

  • Lack of superior alternatives. I'm not sure if I'm a moral realist, but I'm also pretty unsure about moral nihilism. There's lots of uncertainty all over the place, and we're just trying to find the best working theory, even if it's overall pretty unlikely. As Tyler Cowen once put it: "The best you can do is to pick what you think is right at 1.05 percent certainty, rather than siding with what you think is right at 1.03 percent. "
  • Ignoring correlated probabilities (see the toy sketch after this list)
  • Bias towards assigning reasonable sounding probabilities
  • Assumption that the whole relies on each detail. E.g. even if utilitarianism is not literally correct, we may still find that pursuing a Longtermist agenda is reasonable under improved moral theories
  • Low probabilities are counteracted by really high possible impacts. If the probability of longtermism being right is ~20%, that's still a really, really compelling case.
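On the correlated-probabilities bullet, a toy calculation makes the gap vivid (these specific numbers are invented for illustration, not taken from the post or any survey of my actual credences):

```python
# Naive multiplication from above: twenty claims, each held at ~70% confidence,
# multiplied as if they were fully independent.
naive_product = 0.7 ** 20
print(f"{naive_product:.2%}")       # ~0.08%

# Toy model of correlated claims: a single underlying worldview is right with
# probability 0.6, and each claim is right with probability 0.9 conditional on
# the worldview being right (claims conditionally independent given the worldview).
p_worldview = 0.6
p_claim_given_worldview = 0.9
joint_if_correlated = p_worldview * p_claim_given_worldview ** 20
print(f"{joint_if_correlated:.1%}") # ~7%, roughly two orders of magnitude above the naive product
```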

I think the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism? I play video games sometimes, but find that I have ample time to do so in my off hours. Playing video games so much that I don't have time for work doesn't sound pleasurable to me anyway, although you might enjoy it for brief spurts on weekends and holidays.

Or consider these notes from Nick Beckstead on Tyler Cowen's view: "his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-15T18:22:41.328Z · EA · GW

Separating this question from my main comment to avoid confusion.

Your medium post reads: "Tyler Cowen, calling for faster technological growth for a better future, dismissed the Repugnant Conclusion as a constraint: “I say full steam ahead.”"

Linking to this MR post: https://marginalrevolution.com/marginalrevolution/2018/08/preface-stubborn-attachments-book-especially-important.html

The MR post does not mention the Repugnant Conclusion, nor does it contain the words "full steam ahead". Did you perhaps link to the wrong post? I searched the archives briefly, but was unable to find an MR post that dismisses the Repugnant Conclusion: https://marginalrevolution.com/?s=repugnant+conclusion

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-15T18:11:39.464Z · EA · GW

I agree with every claim made in this paper. And yet, its publication strikes me as odd and inappropriate.

Consider the argument from Agnes Callard that philosophers should not sign petitions. She writes: "I am not saying that philosophers should refrain from engaging in political activity; my target is instead the politicization of philosophy itself. I think that the conduct of the profession should be as bottomless as its subject matter: If we are going to have professional, intramural discussions about the ethics of the profession, we should do so philosophically and not by petitioning one another. We should allow ourselves the license to be philosophical all the way down." https://www.nytimes.com/2019/08/13/opinion/philosophers-petitions.html

The article in question here is not exactly a petition, but it's not a research paper either. Had it not been authored by so many distinguished names, it would not have been deemed fit for publication. By its own admission, the purpose of this article is not to make an original research contribution. Rather, its purpose is to claim that "the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research".

Is this a good principle to publish by? Is the role of philosophers in the near-future to sign off in droves on many-authored publications, all for the sake of shifting the focus of attention?

Of course philosophers should refute the arguments they disagree with. But that doesn't seem to be what's occurring here.

This risks being an overly-heated debate, so I'll stop there. I would just ask you to consider whether or not this is what the practice of philosophy ought to look like, and if it constitutes a desirable precedent for academic publishing.

Comment by AppliedDivinityStudies on Base Rates on United States Regime Collapse · 2021-04-08T06:25:55.284Z · EA · GW

Hey thanks for asking, it's the paragraphs from "Looking back" to "raw base rates to consider"

In some ways this feels like a silly throwback, on the other hand I think it is actually more worth reading now that we're not caught up in the heat of the moment. More selfishly, I didn't post on EA Forum when I first wrote this, but have since been encouraged to share old posts that might not have been seen.

Comment by AppliedDivinityStudies on Mundane trouble with EV / utility · 2021-04-03T16:09:58.594Z · EA · GW

Hey Ben, I think these are pretty reasonable questions and do not make you look stupid.

On Pascal's mugging in particular, I would consider this somewhat informal answer: https://nintil.com/pascals-mugging/ Though honestly, I don't find this super satisfactory, and it is something that still bugs me.

Having said that, I don't think this line of reasoning is necessary for answering your more practical questions 1-3.

Utilitarianism (and Effective Altruism) don't require that there's some specific metaphysical construct that is numerical and corresponds to human happiness. The utilitarian claim is just that some degree of quantification is, in principle, possible. The EA claim is that attempting to carry out this quantification leads to good outcomes, even if it's not an exact science.

GiveWell painstakingly compiles numerical cost-effectiveness estimates, but goes on to state that they don't view these as being "literally true". These estimates still end up being useful for comparing one charity relative to another. You can read more about this thinking here: https://blog.givewell.org/2017/06/01/how-givewell-uses-cost-effectiveness-analyses/

In practice, GiveWell makes all sorts of tradeoffs to attempt to compare goods like "improving education", "lives saved" or "increasing income". Sometimes this involves directly asking the targeted populations about their preferences. You can read more about their approach here: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/2019-moral-weights-research

Finally, in the case of existential risk, it's often not necessary to make these kinds of specific calculations at all. By one estimate, the Earth alone could support something like 10^16 human lives, and the universe could support something like 10^34 human life-years, or up to 10^56 "cybernetic human life-years". This is all very speculative, but the potential gains are so large that it doesn't matter if we're off by 40%, or 40x. https://en.wikipedia.org/wiki/Human_extinction#Ethics

Returning to the original point, you might ask whether work on x-risk is then a case of Pascal's Mugging. Toby Ord gives the odds of human extinction in the next century at around 1/6. That's a pretty huge chance. We're much less confident about the odds of EA preventing this risk, but it seems reasonable to think that it's some normal number, i.e. much higher than 10^-10. In that case, EA has huge expected value. Of course that might all seem like fuzzy reasoning, but I think there's a pretty good case to be made that our odds are not astronomically low. You can see one version of this argument here: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
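As a rough sanity check on that last point, here is the expected-value arithmetic with the numbers above plugged in; the tractability figure is one I'm making up purely for illustration, not an estimate from any source:

```python
# Orders-of-magnitude sketch using the figures cited above (all highly speculative).
future_life_years = 1e34      # potential future human life-years (the universe-scale estimate)
p_extinction = 1 / 6          # Ord's estimate of existential risk this century
p_risk_averted = 1e-4         # assumed tractability of the overall effort (made up for illustration)

expected_life_years = future_life_years * p_extinction * p_risk_averted
print(f"{expected_life_years:.1e}")        # ~1.7e29 life-years in expectation

# The robustness point: being off by 40x on any single input barely matters,
# because the total still spans dozens of orders of magnitude.
print(f"{expected_life_years / 40:.1e}")   # still ~4.2e27
```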