Posts

Base Rates on United States Regime Collapse 2021-04-05T17:14:22.775Z
Responses and Testimonies on EA Growth 2021-03-10T23:22:16.613Z
Why Hasn't Effective Altruism Grown Since 2015? 2021-03-09T14:43:01.316Z

Comments

Comment by AppliedDivinityStudies on What key facts do you find are compelling when talking about effective altruism? · 2021-04-19T18:04:52.576Z · EA · GW

Open Phil has given a total of $140 million to "Potential Risks from Advanced Artificial Intelligence" over all time.

By comparison, some estimates from Nature have "climate-related financing" at around $500 billion annually. That's over 3,500x higher.
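
As a rough check on that ratio, here's a minimal sketch. It just divides the two figures cited above, and note the comparison is loose: an annual flow versus an all-time total.

```python
# Back-of-the-envelope comparison (illustrative only).
ai_safety_total = 140e6   # Open Phil grants to AI risk, all time (USD)
climate_annual = 500e9    # "climate-related financing" per year (USD)

print(f"Ratio: {climate_annual / ai_safety_total:,.0f}x")  # ~3,571x
```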

So even if you think that Climate Change is much more pressing than AI Safety, you might agree that the latter is much more neglected.

Also note that the majority of that Open Phil funding went to either CSET or OpenAI. CSET is more focused on short-term arms races and international power struggles, and OpenAI has only a small safety team. So even of the $140 million, only a fraction is going to technical AI Safety research.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T19:44:37.934Z · EA · GW

That's a good way of framing it. I absolutely agree that individuals and groups should reflect on whether or not their time is being spent wisely.

Here are some possible failure modes. I am not saying that any of these are occurring in this particular situation. As a naive outsider looking in, this is merely what springs to mind when I consider what might happen if this type of publishing were to become commonplace.

  • Imagine I am a mildly prominent academic. One day, a colleague sends me a draft of a paper, asking if I would like to co-author it. He tells me that the other co-authors include Yew-Kwang Ng, Toby Ord, Hilary Greaves and other superstars. I haven't given the object-level claims much thought, but I'm eager to associate with high-status academics and get my name on a publication in Utilitas. I go ahead and sign off.

  • Imagine I am a junior academic. One day, I have an insight that may lead to an important advance in population ethics, but it relies on some discussion of the Repugnant Conclusion. As I discuss this idea with colleagues, I'm directed to this many-authored paper indicating that we should not pay too much attention to the Repugnant Conclusion. I don't take issue with any of the paper's object-level claims, I simply believe that my finding is important whether or not it's in a subfield that has received "too much focus". My colleagues have no opinion on the matter at hand, but keep referring me to the many-authored paper anyway, mumbling something about expert consensus. In the end, I'm persuaded not to publish.

  • Imagine I am a very prominent academic with a solid reputation. I now want to raise more grant funding for my department, so I write a short draft making the claim that my subfield has received too little focus. I pass this around to mildly prominent academics, who sign off on the paper in order to associate with me and get their name on a publication in Utilitas. With 30 prominent academics on the paper, no journal would dare deny me publication.

Again, my stance here is not as an academic. These are speculative failure modes, not real scenarios I've seen, and certainly not real accusations I'm making of the specific authors in question here. My goal is to express what I believe to be a reasonable discomfort, and seek clarification on how the academic institutions at play actually function.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T19:28:45.930Z · EA · GW

Thanks Dean! Good to hear from you.

I hope you don't feel like I'm misrepresenting this paper. To be clear, I am referring to "What Should We Agree on about the Repugnant Conclusion?", which includes the passages:

  • "We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature."
  • "It is not simply an academic exercise, and we should not let it be governed by undue attention to one consideration. "

That is from the introduction and conclusion. I'm not sure if that constitutes the "main claim". I may have been overreaching to say that it "basically" only serves as a call for less attention. As I noted in the comment, my intention was never to lend too much credence to that particular claim.

I fully agree with your points on the interdisciplinary nature of population ethics and the unavoidability of incentives.

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-16T09:01:02.523Z · EA · GW

I received a nice reply from Dean, which I've asked for permission to share. Assuming he says yes, I'll have a more thought-out response to this point soon.

Here are some quick thoughts: There are many issues in all academic fields, the vast majority of which are not paid the appropriate amount of attention. Some are overvalued, some are unfairly ignored. That's too bad, and I'm very glad that movements like EA exist to call more attention to pressing research questions that might otherwise get ignored.

What I'm afraid of is living in a world where researchers see it as part of their charter to correct each of these attentional inexactitudes, and do so by gathering bands of other academics to many-author a paper which basically just calls for a greater/lesser amount of attention to be paid to some issue.

Why would that be bad?

  1. It's not a balanced process. Unlike with the IGM Experts Panel, no one is being surveyed, and there's no presentation of disagreement or of the distribution of beliefs across the field. How do we know there aren't 30 equally prominent people willing to say the Repugnant Conclusion is actually very important? Should they go out and many-author their own paper?
  2. A lot of this is very subjective: you're just arguing that an issue receives more or less attention than is merited. That's fine as a personal judgement, but it's hard for anyone else to argue against at the object level. This risks politicization.
  3. There are perverse incentives. I'm not claiming that's what's at play here, but it's a risk this precedent sets. When academics argue for the (un)importance of various research questions, they are also arguing for their own tenure, departmental funding, etc. This is an unavoidable part of the academic career, but it should be limited to careerist venues, not academic publications.

Again, those are some quick thoughts from an outsider, so I wouldn't attach too much credence to them. But I hope that helps explain why this strikes me as somewhat perilous.

Once shared, I think Dean's response will show that my concerns are, in practice, not very serious.

Comment by AppliedDivinityStudies on My personal cruxes for focusing on existential risks / longtermism / anything other than just video games · 2021-04-15T18:46:47.831Z · EA · GW

This is a super interesting exercise! I do worry how much it might bias you, especially in the absence of equally rigorously evaluated alternatives.

Consider the multiple stage fallacy: https://forum.effectivealtruism.org/posts/GgPrbxdWhyaDjks2m/the-multiple-stage-fallacy

If I went through any introductory EA work, I could probably identify something like 20 claims, all of which must hold for the conclusions to have moral force. It would then feel pretty reasonable to assign each of those claims somewhere between 50% and 90% confidence.

That all seems fine until you start to multiply it out: 70%^20 is about 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%. What explains the discrepancy?
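
To make the multiplication concrete, here's a minimal sketch using the illustrative 70%-per-claim figure from above; the naive independence assumption it bakes in is exactly what the bullets below push back on.

```python
# Naive multiplication of 20 independent claims, each held at 70% confidence.
p_per_claim = 0.7
n_claims = 20

joint = p_per_claim ** n_claims
print(f"{joint:.4%}")  # ~0.0798%, i.e. roughly 0.08%
```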

  • Lack of superior alternatives. I'm not sure if I'm a moral realist, but I'm also pretty unsure about moral nihilism. There's lots of uncertainty all over the place, and we're just trying to find the best working theory, even if it's overall pretty unlikely. As Tyler Cowen once put it: "The best you can do is to pick what you think is right at 1.05 percent certainty, rather than siding with what you think is right at 1.03 percent. "
  • Ignoring correlated probabilities
  • Bias towards assigning reasonable sounding probabilities
  • Assumption that the whole relies on each detail. E.g. even if utilitarianism is not literally correct, we may still find that pursuing a Longtermist agenda is reasonable under improved moral theories
  • Low probabilities are counteracted by really high possible impacts. If the probability of longtermism being right is ~20%, that's still a really, really compelling case.

I think the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism? I play video games sometimes, but find that I have ample time to do so in my off hours. Playing video games so much that I don't have time for work doesn't sound pleasurable to me anyway, although you might enjoy it for brief spurts on weekends and holidays.

Or consider these notes from Nick Beckstead on Tyler Cowen's view: "his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-15T18:22:41.328Z · EA · GW

Separating this question from my main comment to avoid confusion.

Your medium post reads: "Tyler Cowen, calling for faster technological growth for a better future, dismissed the Repugnant Conclusion as a constraint: “I say full steam ahead.”"

Linking to this MR post: https://marginalrevolution.com/marginalrevolution/2018/08/preface-stubborn-attachments-book-especially-important.html

The MR post does not mention the Repugnant Conclusion, nor does it contain the words "full steam ahead". Did you perhaps link to the wrong post? I searched the archives briefly, but was unable to find an MR post that dismisses the Repugnant Conclusion: https://marginalrevolution.com/?s=repugnant+conclusion

Comment by AppliedDivinityStudies on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-15T18:11:39.464Z · EA · GW

I agree with every claim made in this paper. And yet, its publication strikes me as odd and inappropriate.

Consider the argument from Agnes Callard that philosophers should not sign petitions. She writes: "I am not saying that philosophers should refrain from engaging in political activity; my target is instead the politicization of philosophy itself. I think that the conduct of the profession should be as bottomless as its subject matter: If we are going to have professional, intramural discussions about the ethics of the profession, we should do so philosophically and not by petitioning one another. We should allow ourselves the license to be philosophical all the way down." https://www.nytimes.com/2019/08/13/opinion/philosophers-petitions.html

The article in question here is not exactly a petition, but it's not a research paper either. Had it not been authored by so many distinguished names, it would not have been deemed fit for publication. By its own admission, the purpose of this article is not to make an original research contribution. Rather, its purpose is to claim that "the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research".

Is this a good principle to publish by? Is the role of philosophers in the near-future to sign off in droves on many-authored publications, all for the sake of shifting the focus of attention?

Of course philosophers should refute the arguments they disagree with. But that doesn't seem to be what's occurring here.

This risks becoming an overly heated debate, so I'll stop there. I would just ask you to consider whether this is what the practice of philosophy ought to look like, and whether it constitutes a desirable precedent for academic publishing.

Comment by AppliedDivinityStudies on Base Rates on United States Regime Collapse · 2021-04-08T06:25:55.284Z · EA · GW

Hey, thanks for asking! It's the paragraphs from "Looking back" to "raw base rates to consider".

In some ways this feels like a silly throwback; on the other hand, I think it is actually more worth reading now that we're not caught up in the heat of the moment. More selfishly, I didn't post it on the EA Forum when I first wrote it, but have since been encouraged to share old posts that might not have been seen.

Comment by AppliedDivinityStudies on Mundane trouble with EV / utility · 2021-04-03T16:09:58.594Z · EA · GW

Hey Ben, I think these are pretty reasonable questions and do not make you look stupid.

On Pascal's mugging in particular, I would consider this somewhat informal answer: https://nintil.com/pascals-mugging/. Though honestly, I don't find it super satisfactory, and it is something that still bugs me.

Having said that, I don't think this line of reasoning is necessary for answering your more practical questions 1-3.

Neither Utilitarianism nor Effective Altruism requires that there be some specific metaphysical construct that is numerical and corresponds to human happiness. The utilitarian claim is just that some degree of quantification is, in principle, possible. The EA claim is that attempting to carry out this quantification leads to good outcomes, even if it's not an exact science.

GiveWell painstakingly compiles numerical cost-effectiveness estimates, but goes on to state that they don't view these as being "literally true". These estimates still end up being useful for comparing one charity relative to another. You can read more about this thinking here: https://blog.givewell.org/2017/06/01/how-givewell-uses-cost-effectiveness-analyses/

In practice, GiveWell makes all sorts of tradeoffs to attempt to compare goods like "improving education", "lives saved" or "increasing income". Sometimes this involves directly asking the targeted populations about their preferences. You can read more about their approach here: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/2019-moral-weights-research

Finally, in the case of existential risk, it's often not necessary to make these kinds of specific calculations at all. By one estimate, the Earth alone could support something like 10^16 human lives, and the universe could support something like 10^34 human life-years, or up to 10^56 "cybernetic human life-years". This is all very speculative, but the potential gains are so large that it doesn't matter if we're off by 40%, or 40x. https://en.wikipedia.org/wiki/Human_extinction#Ethics

Returning to the original point: is work on x-risk then a case of Pascal's Mugging? Toby Ord gives the odds of human extinction in the next century at around 1/6. That's a pretty huge chance. We're much less confident about the odds of EA preventing this risk, but it seems reasonable to think it's some normal number, i.e. much higher than 10^-10. In that case, EA has huge expected value. Of course that might all seem like fuzzy reasoning, but I think there's a pretty good case to be made that our odds are not astronomically low. You can see one version of this argument here: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
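
To make the expected-value point concrete, here's a minimal sketch. The 1/6 figure is Ord's estimate mentioned above and the 10^16 capacity figure is the Earth-only estimate cited earlier; the probability that marginal EA effort averts the catastrophe is a purely hypothetical placeholder, not a real estimate.

```python
# Illustrative expected-value sketch -- placeholder numbers, not real estimates.
p_extinction = 1 / 6      # Ord's odds of extinction this century (cited above)
p_effort_averts = 1e-4    # hypothetical chance that marginal EA effort prevents it
future_lives = 1e16       # Earth-only capacity estimate cited above

expected_lives_saved = p_extinction * p_effort_averts * future_lives
print(f"{expected_lives_saved:.2e}")  # ~1.67e+11 lives in expectation
```

Even with a deliberately pessimistic placeholder for the middle term, the product stays enormous, which is the sense in which the argument doesn't hinge on precise numbers.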