AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

post by Toby_Ord · 2020-03-17T02:39:11.791Z · score: 66 (21 votes) · EA · GW · 82 comments


Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)


This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event [EA · GW] this weekend.

Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video. 


About Toby

Toby Ord is a moral philosopher focusing on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.

His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.

Toby's new book, The Precipice, is now available for purchase in the UK and pre-order in other countries. You can learn more about the book here [EA · GW].

82 comments

Comments sorted by top scores.

comment by Halstead · 2020-03-18T10:23:34.702Z · score: 53 (21 votes) · EA(p) · GW(p)

How likely do you think we would be to recover from a catastrophe killing 50%/90%/99% of the world population respectively?

comment by SiebeRozendal · 2020-03-20T09:28:25.017Z · score: 6 (3 votes) · EA(p) · GW(p)

Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?

comment by Halstead · 2020-03-17T18:07:15.759Z · score: 34 (15 votes) · EA(p) · GW(p)

Does it worry you that there are very few published peer reviewed treatments of why AGI risk should be taken seriously that are relevant to current machine learning technology?

comment by richard_ngo · 2020-03-17T17:26:59.244Z · score: 31 (14 votes) · EA(p) · GW(p)

What would convince you that preventing s-risks is a bigger priority than preventing x-risks?

Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?

comment by NunoSempere · 2020-03-17T21:00:50.467Z · score: 5 (4 votes) · EA(p) · GW(p)

I like how you operationalized the second question.

comment by riceissa · 2020-03-17T03:29:27.799Z · score: 30 (15 votes) · EA(p) · GW(p)

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?

comment by AmyLabenz · 2020-03-17T04:12:57.402Z · score: 44 (17 votes) · EA(p) · GW(p)

Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.

comment by Linch · 2020-03-17T03:43:48.960Z · score: 2 (1 votes) · EA(p) · GW(p)

Strongly concur, as someone who preordered the book and is excited to read it.

comment by Halstead · 2020-03-18T10:18:01.191Z · score: 26 (13 votes) · EA(p) · GW(p)

What is your solution to Pascal's Mugging?

comment by Ben Pace · 2020-03-17T05:25:17.152Z · score: 26 (12 votes) · EA(p) · GW(p)

What's a regular disagreement that you have with other researchers at FHI? What's your take on it and why do you think the other people are wrong? ;-)

comment by Ben Pace · 2020-03-17T05:17:45.801Z · score: 23 (14 votes) · EA(p) · GW(p)

We're currently in a time of global crisis, as the number of people infected by the coronavirus continues to grow exponentially in many countries. This is a bit of a hard question, but a time of crisis is often when governments substantially refactor things, because it's finally transparent that they're not working. So: can you name a feasible, concrete change in the UK government (or a broader policy for any developed government) that you think would put us in a far better position for future such situations, especially future pandemics that have a much more serious chance of being an existential catastrophe?

comment by RandomEA · 2020-03-18T01:47:21.517Z · score: 21 (9 votes) · EA(p) · GW(p)

In an 80,000 Hours interview, Tyler Cowen states:

[44:06]
I don't think we'll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.

How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen's argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are "not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk"? Does positively shaping the development of artificial intelligence fall into that category?

Edit (likely after Toby recorded his answer): This comment [EA(p) · GW(p)] from Pablo Stafforini also mentions the idea of "reduc[ing] the risk of extinction for all future generations."

comment by MichaelStJules · 2020-03-18T06:52:12.090Z · score: 2 (1 votes) · EA(p) · GW(p)

This math problem is relevant, although maybe the assumptions aren't realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.

EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that's basically the only other way out.

So, either:

1. We go extinct,

2. Our population increases without bound, or

3. We decrease extinction risk towards 0 in the long-run.

Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn't so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.

comment by Pablo_Stafforini · 2020-03-18T14:06:15.984Z · score: 6 (3 votes) · EA(p) · GW(p)

Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.

comment by Misha_Yagudin · 2020-03-18T17:26:56.659Z · score: 2 (2 votes) · EA(p) · GW(p)

re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in year $i$.

comment by MichaelStJules · 2020-03-18T19:20:24.543Z · score: 2 (1 votes) · EA(p) · GW(p)

Should that be $\sum_i -\log(1 - p_i) < \infty$? Just taking logarithms.

comment by Misha_Yagudin · 2020-03-19T06:23:57.087Z · score: 3 (2 votes) · EA(p) · GW(p)

This is a valid convergence test. But I think it's easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.
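A quick numerical sketch of this dichotomy (a toy calculation; the per-year risk schedules and function name are illustrative assumptions, not from the thread): with summable risks like $p_i = 1/(i+1)^2$, long-run survival probability stays bounded away from zero, while with non-summable risks like $p_i = 1/(i+1)$, it tends to zero.

```python
def survival_probability(p, n):
    """P(no extinction in years 1..n) given per-year extinction risks p(i),
    assuming independence across years."""
    prob = 1.0
    for i in range(1, n + 1):
        prob *= 1.0 - p(i)
    return prob

# Summable risks: sum_i 1/(i+1)^2 converges, so survival stays bounded away from 0
# (the product telescopes towards 1/2).
summable = survival_probability(lambda i: 1.0 / (i + 1) ** 2, 100_000)

# Non-summable risks: sum_i 1/(i+1) diverges, so survival tends to 0
# (here the product equals exactly 1/(n+1)).
non_summable = survival_probability(lambda i: 1.0 / (i + 1), 100_000)

print(f"summable risks:     {summable:.4f}")   # close to 0.5
print(f"non-summable risks: {non_summable:.2e}")  # close to 0
```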

comment by ishi · 2020-03-21T11:11:16.783Z · score: 1 (1 votes) · EA(p) · GW(p)

I've seen and liked that book. But I don't think there really is enough information about risks (e.g. Earth being hit by a comet or meteor that kills everything) to really say much. Maybe if cosmology or other fields make major advances one could say something, but that might take centuries.

comment by Linch · 2020-03-17T06:42:48.385Z · score: 20 (14 votes) · EA(p) · GW(p)

What do you think is the biggest mistake that the EA community is currently making?

comment by Halstead · 2020-03-18T10:11:23.404Z · score: 19 (7 votes) · EA(p) · GW(p)

Is your view that:

(i) the main thing that matters for the long-term is whether we get to the stars

(ii) This could plausibly happen in the next few centuries

(iii) therefore the main long-termist relevance of our actions is whether we survive the next few centuries and can make it to the stars?

Or do you put some weight on the view that long-term human and post-human flourishing on Earth could also account for >1% of the total plausible potential of our actions?

comment by RandomEA · 2020-03-18T02:50:58.483Z · score: 18 (7 votes) · EA(p) · GW(p)

Do you think that [EA(p) · GW(p)] "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty [EA · GW]?

comment by Davidmanheim · 2020-03-22T10:02:23.514Z · score: 8 (5 votes) · EA(p) · GW(p)

I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure about whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)

comment by MichaelStJules · 2020-03-18T06:55:25.691Z · score: 3 (2 votes) · EA(p) · GW(p)

Do you have references/numbers for these views you can include here?

comment by Misha_Yagudin · 2020-03-17T09:02:46.889Z · score: 18 (11 votes) · EA(p) · GW(p)

What have you changed your mind on recently?

comment by RandomEA · 2020-03-18T03:37:41.890Z · score: 17 (6 votes) · EA(p) · GW(p)

There are many ways that technological development and economic growth could potentially affect the long-term future, including:

  • Hastening the development of technologies that create existential risk (see here [EA · GW])
  • Hastening the development of technologies that mitigate existential risk (see here [EA · GW])
  • Broadly empowering humanity (see here)
  • Improving human values (see here and here [EA(p) · GW(p)])
  • Reducing the chance of international armed conflict (see here)
  • Improving international cooperation (see the climate change mitigation debate)
  • Shifting the growth curve forward (see here)
  • Hastening the colonization of the accessible universe (see here and here)

What do you think is the overall sign of economic growth? Is it different [EA(p) · GW(p)] for developing and developed countries?

Note: The fifth bullet point was added after Toby recorded his answers.

comment by richard_ngo · 2020-03-17T16:12:28.572Z · score: 17 (10 votes) · EA(p) · GW(p)

If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

comment by Ben Pace · 2020-03-17T05:16:53.458Z · score: 16 (8 votes) · EA(p) · GW(p)

Can you tell us a specific insight about AI that has made you positively update on the likelihood that we can align superintelligence? And a negative one?

comment by Ben Pace · 2020-03-17T05:16:20.106Z · score: 16 (9 votes) · EA(p) · GW(p)

What are the three most interesting ideas you've heard in the last three years? (They don't have to be the most important, just the most surprising/brilliant/unexpected/etc.)

comment by Halstead · 2020-03-18T10:14:40.219Z · score: 13 (5 votes) · EA(p) · GW(p)

Do you think we will ever have a unified and satisfying theory of how to respond to moral uncertainty, given the huge structural and substantive differences between apparently plausible moral theories? Will MacAskill's thesis is one of the best treatments of this problem, and it seems like it would be hard to build an account of how one ought to respond to e.g. Rawlsianism, totalism, libertarianism, person-affecting views, absolutist rights-based theories, and so on, across most choice situations.

comment by RandomEA · 2020-03-18T03:57:18.714Z · score: 13 (7 votes) · EA(p) · GW(p)

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?

comment by Ben Pace · 2020-03-17T05:19:33.723Z · score: 11 (5 votes) · EA(p) · GW(p)

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of misaligned artificial general intelligence?

comment by RandomEA · 2020-03-18T05:30:27.963Z · score: 10 (5 votes) · EA(p) · GW(p)

Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?

comment by RandomEA · 2020-03-18T00:32:14.985Z · score: 10 (7 votes) · EA(p) · GW(p)

Do you think there are any actions that would obviously decrease existential risk? (I took this question from here [EA · GW].) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist's curse etc.)?

comment by richard_ngo · 2020-03-17T17:06:14.110Z · score: 10 (7 votes) · EA(p) · GW(p)

If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?

comment by Linch · 2020-03-17T07:13:58.587Z · score: 10 (6 votes) · EA(p) · GW(p)

Are there any specific natural existential risks that are significant enough that more than 1% of EA resources should be devoted to it? .1%? .01%?

comment by MichaelA · 2020-03-18T14:47:01.482Z · score: 3 (2 votes) · EA(p) · GW(p)

Good question!

Just a thought: Assuming this question is intended to essentially be about natural vs anthropogenic risks, rather than also comparing against other things like animal welfare and global poverty, it might be simplest to instead wonder: "Are there any specific natural existential risks that are significant enough that more than 1% of longtermist [or "existential risk focused"] resources should be devoted to it? .1%? .01%?"

comment by Ben Pace · 2020-03-17T05:18:29.463Z · score: 10 (6 votes) · EA(p) · GW(p)

Can you tell us something funny that Nick Bostrom once said that made you laugh? We know he used to do standup in London...

comment by Linch · 2020-03-18T02:25:02.677Z · score: 9 (6 votes) · EA(p) · GW(p)

On balance, what do you think is the probability that we are at or close to a hinge of history [EA · GW] (either right now, this decade, or this century)?

comment by John_Maxwell (John_Maxwell_IV) · 2020-03-22T03:29:16.916Z · score: 8 (2 votes) · EA(p) · GW(p)

What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?

comment by MichaelA · 2020-03-18T15:41:35.948Z · score: 8 (3 votes) · EA(p) · GW(p)

You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.

But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?

For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom), and this somehow prevents a long reflection - either through permanent totalitarianism or just through something like locking in extreme norms of caution and stifling of free thought. Or perhaps in order to avoid disastrously misaligned AI systems, we have to make certain choices that are hard to reverse later, so we have to have at least some idea up-front of what we should ultimately choose to value.

(I've only started the book; this may well be addressed there already.)

comment by RhysSouthan · 2020-04-08T17:44:50.257Z · score: 3 (2 votes) · EA(p) · GW(p)

I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.

comment by Halstead · 2020-03-18T10:18:42.505Z · score: 8 (5 votes) · EA(p) · GW(p)

What are your top three productivity tips?

comment by CarolineJ · 2020-03-17T21:07:21.720Z · score: 8 (4 votes) · EA(p) · GW(p)

Do you think that climate change has been neglected in the EA movement? What are some options that seem promising to you at the moment for having a very large impact in steering us in a better direction on climate change?

comment by richard_ngo · 2020-03-17T17:10:59.936Z · score: 8 (2 votes) · EA(p) · GW(p)

We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many listed here [AF · GW]; I'm particularly thinking about philosophy of mind and agency as applied to AI, a la Dennett). How do you think this funnel could be improved?

comment by Ben Pace · 2020-03-17T05:16:37.087Z · score: 8 (5 votes) · EA(p) · GW(p)

What's a book that you read and has impacted how you think / who you are, that you expected most people here won't have read?

comment by Linch · 2020-03-18T02:02:01.737Z · score: 7 (5 votes) · EA(p) · GW(p)

Can you describe a typical day in your life with sufficient granularity that readers can have a sense of what "being a researcher at a place like FHI" is like?

comment by NunoSempere · 2020-03-17T21:21:31.982Z · score: 7 (2 votes) · EA(p) · GW(p)

What's up with Pascal's Mugging? Why hasn't this pesky problem just been authoritatively solved? (and if it has, what's the solution?) What is your preferred answer? / Which bullets do you bite (e.g., bounded utility function, assigning probability 0 to events, a decision-theoretical approach cop-out, etc.)?

comment by MichaelStJules · 2020-03-17T18:46:52.709Z · score: 6 (4 votes) · EA(p) · GW(p)

Which ethical views do you have non-negligible credence in and, if true, would substantially change what you think ought to be prioritized, and how? How much credence do you have in these views?

comment by NunoSempere · 2020-03-17T16:04:57.702Z · score: 6 (5 votes) · EA(p) · GW(p)

Suppose your life's work ended up having negative impact. What is the most likely scenario under which this could happen?

comment by NunoSempere · 2020-03-17T08:27:59.098Z · score: 6 (5 votes) · EA(p) · GW(p)

As a sharp mind, respected scholar, and prominent member of the EA community, you have a certain degree of agency, an ability to start new projects and make things happen, no small amount of oomph and mojo. How are you planning to use this agency in the coming decades?

comment by NunoSempere · 2020-03-17T21:21:04.888Z · score: 8 (6 votes) · EA(p) · GW(p)

This is a genuine question. The framing is that if Toby Ord wants to get in touch with a high-ranking member of government, get an article published in a prominent newspaper, direct a large number of man-hours to a project he finds worthy, etc., he probably can; just the association with Oxford will open doors in many cases.

This is in opposition to a box in a basement which produces the same research he would, and some of these differences stem from him being endorsed by some prestigious organizations, and there being some social common knowledge around his person. The words "public intellectual" come to mind.

I'm wondering how the powers-of-being-different-from-a-box-which-produces-research will pan out.

comment by Linch · 2020-03-18T02:22:47.149Z · score: 4 (3 votes) · EA(p) · GW(p)

What's one book that you think most EAs have not yet read and you think that they should (other than your own, of course)?

comment by CarolineJ · 2020-03-17T21:06:04.407Z · score: 4 (3 votes) · EA(p) · GW(p)

What are some of your current challenges? (maybe someone in the audience can help!)

comment by CarolineJ · 2020-03-17T21:05:28.262Z · score: 4 (3 votes) · EA(p) · GW(p)

What are you looking for in a research / operations colleague?

comment by MichaelStJules · 2020-03-17T18:48:34.491Z · score: 4 (2 votes) · EA(p) · GW(p)

How robust do you think the case is for any specific longtermist intervention? E.g. do new considerations constantly affect your belief in their cost-effectiveness, and by how much?

comment by MichaelA · 2020-03-18T15:25:19.289Z · score: 3 (2 votes) · EA(p) · GW(p)

In your book, you define an existential catastrophe as "the destruction of humanity's longterm potential". Would defining it instead as "the destruction of the vast majority of the longterm potential for value in the universe" capture the concept you wish to refer to? Would it perhaps slightly more technically accurately/explicitly capture what you wish to refer to, just perhaps in a less accessible or emotionally resonating way?

I wonder this partly because you write:

It is not that I think only humans count. Instead, it is that humans are the only beings we know of that are responsive to moral reasons and moral argument - the beings who can examine the world and decide to do what is best. If we fail, that upwards force, that capacity to push towards what is best or what is just, will vanish from the world.

It also seems to me that "the destruction of the vast majority of the longterm potential for value in the universe" would seem to be meaningfully more similar to what I'm really interested in avoiding than the destruction of humanity's potential if/when AGI, aliens, or other intelligent life evolving on earth becomes or is predicted to become an important shaper of events (either now or in the distant future).

comment by Halstead · 2020-03-18T10:15:55.757Z · score: 3 (2 votes) · EA(p) · GW(p)

Do you think the problems of infinite ethics give us reason to reject totalism or long-termism? If so, what is the alternative?

comment by RandomEA · 2020-03-18T06:26:38.095Z · score: 3 (2 votes) · EA(p) · GW(p)

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high uncertainty arguments? (See here and here at 34:38 for pushback.)

comment by RandomEA · 2020-03-18T04:18:27.355Z · score: 3 (2 votes) · EA(p) · GW(p)

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinities is relatively low priority?

comment by RandomEA · 2020-03-18T02:44:14.943Z · score: 3 (2 votes) · EA(p) · GW(p)

How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?

comment by RandomEA · 2020-03-18T02:36:32.004Z · score: 3 (2 votes) · EA(p) · GW(p)

What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations [EA(p) · GW(p)], an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world [EA · GW] over a small chance of making a major contribution to reducing GCBRs?

comment by RandomEA · 2020-03-18T02:18:07.613Z · score: 3 (2 votes) · EA(p) · GW(p)

Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse's EA Global talk [EA · GW] seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few weeks" for context)?

comment by RandomEA · 2020-03-18T00:40:26.178Z · score: 3 (2 votes) · EA(p) · GW(p)

What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?

How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedge against the associated moral risk?

How important do you think those questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the above two paragraphs in the research agenda.)

Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?

comment by MichaelStJules · 2020-03-17T18:49:19.701Z · score: 3 (2 votes) · EA(p) · GW(p)

What are your views on the prioritization of extinction risks vs other longtermist interventions/causes?

comment by MichaelStJules · 2020-03-17T18:45:50.580Z · score: 3 (2 votes) · EA(p) · GW(p)

Which interventions/causes do you think are best to support/work on according to views in which extra people with good or great lives not being born is not at all bad (or far outweighed by other considerations)? E.g. different person-affecting views, or the procreation asymmetry.

comment by MichaelA · 2020-03-18T15:29:53.982Z · score: 2 (2 votes) · EA(p) · GW(p)

You seem fairly confident that we are at "the precipice", or "a uniquely important time in our story". This seems very plausible to me. But how long of a period are you imagining for the precipice?

The claim is much stronger if you mean something like a century than something like a few millennia. But even if the "hingey" period is a few millennia, then I imagine that us being somewhere in it could still be quite an important fact.

(This might be answered past chapter 1 of the book.)

comment by MichaelStJules · 2020-03-17T19:03:32.397Z · score: 2 (1 votes) · EA(p) · GW(p)

Do you lean more towards a preferential account of value, a hedonistic one, or something else?

How do you think tradeoffs between pleasure and suffering are best grounded according to a hedonistic view? It seems like there's no objective one-size-fits-all trade-off rate, since different people could have different preferences about the same quantities of pleasure and suffering in themselves.

comment by MichaelStJules · 2020-03-17T18:56:53.113Z · score: 2 (1 votes) · EA(p) · GW(p)

What new evidence would cause the biggest shifts in your priorities?

comment by Peter_Hurford · 2020-03-17T15:32:07.943Z · score: 2 (5 votes) · EA(p) · GW(p)

What are the three least interesting ideas you've heard in the last three years? (They don't have to be the least important, just the least surprising/brilliant/unexpected/etc.)

comment by Ben Pace · 2020-03-17T17:57:01.891Z · score: 2 (1 votes) · EA(p) · GW(p)

This is such an odd question. Could produce surprising answers though, if it’s something like “the least interesting ideas that people still took seriously” or “the least interesting ideas that are still a little bit interesting”. Upvoted.

comment by Peter_Hurford · 2020-03-17T22:48:41.847Z · score: 2 (1 votes) · EA(p) · GW(p)

Sometimes the obvious is still important to discuss.

comment by Ben Pace · 2020-03-17T05:21:06.609Z · score: 2 (1 votes) · EA(p) · GW(p)

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of engineered pandemics?

comment by SiebeRozendal · 2020-03-20T09:34:00.060Z · score: 1 (1 votes) · EA(p) · GW(p)

Global society will have a lot to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I assume this question is in between the "best lesson to learn" and "lesson most likely to be learned". We probably want to push a lesson that's useful to learn, and that our push actually helps to bring it into policy.

comment by MichaelA · 2020-03-18T15:49:09.913Z · score: 1 (1 votes) · EA(p) · GW(p)

What are your thoughts on how to evaluate or predict the impact of longtermist/x-risk interventions, or specifically efforts to generate and spread insights on these matters? E.g., how do you think about decisions like which medium to write in and whether to focus on generating ideas vs publicising ideas vs fundraising?

comment by MichaelA · 2020-03-18T15:46:33.295Z · score: 1 (1 votes) · EA(p) · GW(p)

How would your views change (if at all) if you thought it was likely that there are intelligent beings elsewhere in the universe that "are responsive to moral reasons and moral argument" (quote from your book)? Or if you thought it's likely that, if humans suffer an existential catastrophe, other such beings would evolve on Earth later, with enough time to potentially colonise the stars?

Do your thoughts on these matters depend somewhat on your thoughts on moral realism vs antirealism/subjectivism?

comment by Misha_Yagudin · 2020-03-17T21:27:41.164Z · score: 1 (1 votes) · EA(p) · GW(p)

What are some of your favourite theorems, proofs, algorithms, and data structures?

comment by CarolineJ · 2020-03-17T21:07:01.182Z · score: 1 (1 votes) · EA(p) · GW(p)

What are some directions you'd like the EA movement or some parts of the EA movement to take?

comment by CarolineJ · 2020-03-17T21:06:23.716Z · score: 1 (1 votes) · EA(p) · GW(p)

What do you like to do during your free time?

comment by CarolineJ · 2020-03-17T21:05:48.959Z · score: 1 (1 votes) · EA(p) · GW(p)

If you've read the book 'So good they can't ignore you', what do you think are the most important skills to master to be a writer/philosopher like yourself?

comment by CarolineJ · 2020-03-17T18:50:10.199Z · score: 1 (1 votes) · EA(p) · GW(p)

Hi Tobby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great model to me!

Some questions, feel free to pick:

1) What philosophers are your sources of inspiration and why?

(put my other questions in separate comments). Also, writing "Toby"!

comment by Ben Pace · 2020-03-17T20:07:53.352Z · score: 4 (3 votes) · EA(p) · GW(p)

I think your questions are great. I suggest that you leave 7 separate comments so that users can vote on the ones that they’re most interested in.

comment by CarolineJ · 2020-03-17T21:07:51.445Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks Ben! I've edited the message to have only one question per post. :-)