Posts

Good v. Optimal Futures 2020-12-11T16:38:45.251Z
A toy model for technological existential risk 2020-11-28T11:55:29.115Z
X-risks to all life v. to humans 2020-06-03T15:40:26.118Z
Effective Pro Bono Projects 2019-09-02T15:10:11.092Z

Comments

Comment by robertharling on Good v. Optimal Futures · 2020-12-14T15:38:34.169Z · EA · GW

Thanks for sharing this paper; I had not heard of it before and it sounds really interesting.

Comment by robertharling on Good v. Optimal Futures · 2020-12-12T17:21:47.613Z · EA · GW

Thanks for your comment, Jack; that's a really great point. I suppose that we would seek to influence AI slightly differently for each reason:

  1. Reduce chance of unaligned/uncontrolled AI
  2. Increase chance of useful AI
  3. Increase chance of exactly aligned AI

For example, you could reduce AI risk by stopping all AI development, but you would then lose the other two benefits; or you could create a practically useful AI that would not guide humanity towards an optimal future. That said, I reckon that in practice a lot of work to improve the development of AI would hit all three. Though if you view one reason as much more important than the others, you might focus on a specific type of AI work.

Comment by robertharling on The Fermi Paradox has not been dissolved · 2020-12-12T13:47:37.743Z · EA · GW

Thank you very much for this post; I found it very interesting. I remember reading the original paper and feeling a bit confused by it. It's not too fresh in my mind, so I don't feel too able to defend it. I appreciate you highlighting how the method they use to estimate f_l is unique and drives their main result.

A range of 0.01 to 1 for f_l in your preferred model seems surprisingly high to me, though I don't understand the Lineweaver-Davis paper, which I think your range is based on, well enough to really comment on its result. I think they mention how their approach leaves uncertainty in n_e as to what counts as a terrestrial planet. I wonder if most estimates of any one parameter have a tendency to shift uncertainty onto other parameters, so that when combining individual estimates of each parameter you end up with an unrealistically certain result.
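As a very rough illustration of that worry (using made-up log-uniform ranges, not the distributions from either paper), here's the kind of Monte Carlo comparison I have in mind: a single point estimate per parameter looks reassuringly moderate, while the full distribution of the product puts a lot of mass at tiny values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def log_uniform(low, high, size):
    """Sample log-uniformly between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

# Made-up illustrative ranges for a few Drake-style parameters --
# NOT the values used in the paper or the Lineweaver-Davis result.
n_e = log_uniform(0.01, 10, N)   # habitable planets per star
f_l = log_uniform(1e-6, 1, N)    # fraction of those developing life
f_i = log_uniform(1e-3, 1, N)    # fraction of those developing intelligence

product = n_e * f_l * f_i

# "Point estimate" approach: multiply a single best guess (here the mean)
# for each parameter. For independent parameters this equals the mean of
# the product, which looks reassuringly moderate...
print("product of parameter means:", n_e.mean() * f_l.mean() * f_i.mean())

# ...but the full distribution is heavily skewed: the median is far lower,
# and a large share of the probability mass sits at tiny values.
print("median of product:         ", np.median(product))
print("P(product < 1e-5):         ", np.mean(product < 1e-5))
```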

Comment by robertharling on Good v. Optimal Futures · 2020-12-12T09:46:48.151Z · EA · GW

Thanks for your comment, athowes. I appreciate your point that I could have done more in the post to justify this "binary" of good and optimal.

Though the simulated minds scenario I described seems at first to be pretty much optimal, its value could be much larger if you thought it would last for many more years. Given large enough uncertainty about future technology, maybe seeking to identify the optimal future is impossible.

I think your resources, values and efficiency model is really interesting. My intuition is that values are the limiting factor. I can believe there are pretty strong forces that mean humanity will eventually end up optimising resources and efficiency, but I'm less confident that values will converge to the best ones over time. This probably depends on whether you think a singleton will form at some point; if one does, it feels like the limit is how good the singleton's values are.

Comment by robertharling on Make a Public Commitment to Writing EA Forum Posts · 2020-12-11T16:41:47.461Z · EA · GW

Thanks again for creating this post, Neel. I can confirm I managed to write and publish my post in time!

I think without committing to writing it here, my post would either have been published a few months later, or perhaps not at all.

Comment by robertharling on A toy model for technological existential risk · 2020-11-28T20:49:54.073Z · EA · GW

Thanks for your comment!

I hadn't thought about selection effects, thanks for pointing that out. I suppose Bostrom actually describes black balls as technologies that cause catastrophe, but doesn't set the bar as high as extinction. Then drawing a black ball doesn't affect future populations drastically, so perhaps selection effects don't apply?

Also, I think in The Precipice Toby Ord makes some inferences about natural extinction risk from the length of time humanity has existed? Though I may not be remembering correctly. I think the logic was something like: "Assume we're randomly distributed amongst possible humans. If existential risk were very high, then there'd be a very small set of worlds in which humans have been around for this long, and it would be very unlikely that we'd be in such a world. Therefore it's more likely that our estimate of existential risk is too high." This seems quite similar to my model of making inferences based on not having previously drawn a black ball. I don't think I understand selection effects too well though, so I appreciate any comments on this!
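As a very rough sketch of that style of survival-based update (with placeholder numbers, not estimates from The Precipice or my post): put a prior on a constant per-period "black ball" probability and condition on having survived many periods without drawing one.

```python
import numpy as np

# Toy grid approximation: prior over a constant per-period extinction
# probability p, updated on surviving n periods without a black ball.
p_grid = np.linspace(0.0005, 0.5, 1000)        # candidate per-period risks
prior = np.full_like(p_grid, 1 / len(p_grid))  # uniform prior over the grid

n_survived = 2000  # e.g. ~2000 centuries of Homo sapiens (rough placeholder)

likelihood = (1 - p_grid) ** n_survived        # P(no extinction so far | p)
posterior = prior * likelihood
posterior /= posterior.sum()

print("prior mean risk per period:    ", np.sum(p_grid * prior))
print("posterior mean risk per period:", np.sum(p_grid * posterior))
# Surviving many periods concentrates the posterior on low values of p,
# which is the sense in which a long track record argues against high risk.
```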

Comment by robertharling on Make a Public Commitment to Writing EA Forum Posts · 2020-11-25T21:30:29.101Z · EA · GW

Commitment: I commit to writing a post on a vague idea about where most of the value of the long term future is and how sensitive it is to different values by 7pm on 11th December.

Thanks for suggesting this, Neel!

Comment by robertharling on Things I Learned at the EA Student Summit · 2020-10-27T23:44:28.993Z · EA · GW

Thanks for this post, Akash; I found it really interesting to read. I definitely agree with your point about how friendly EAs can be when you reach out to them. I think this is something I've been aware of for a while, but it still takes me time to internalise and to make myself more willing to reach out to people. It's definitely something I want to push myself to do more, and to encourage other people to do. No one is going to be unhappy about someone showing an interest in their work and ideas!

Comment by robertharling on Idea: statements on behalf of the general EA community · 2020-06-12T11:21:01.972Z · EA · GW

This is a really interesting idea. I instinctively have a couple of concerns about it:

1) What is the benefit of such statements? Can we expect the opinion of the EA community to really carry much weight beyond relatively niche areas?

2) Can the EA community be sufficiently well defined to collect opinion? It is quite hard to work out who identifies as an EA, not least because some people are unsure themselves. I would worry any attempt to define the EA community too strictly (such as when surveying the community's opinion) could come across as exclusionary and discourage some people from getting involved.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-11T14:55:35.097Z · EA · GW

Thanks for your response!

I definitely see your point on the value of the information to the future civilisation. The technology required to reach the moon and find the cache is likely quite different to the level required to resurrect humanity from the cache, so the information could still be very valuable.

An interesting consideration may be how we value a planet being under human control vs under the control of this new civilisation. We may think we cannot assume that the new civilisation would be doing valuable things, but that a human planet would be quite valuable. This consideration would depend a lot on your moral beliefs. If we don't extrapolate the value of humanity to the value of this new civilisation, we could then ask whether we can extrapolate from how humanity would respond to finding the cache on the moon to how the new civilisation would respond.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-11T10:38:05.106Z · EA · GW

Ah yes! Thanks for pointing that out!

Comment by robertharling on X-risks to all life v. to humans · 2020-06-10T14:04:20.070Z · EA · GW

Thanks for your comment; I found that paper really interesting and it was definitely an idea I'd not considered before.

My main two questions would be:

1) What is the main value of humanity being resurrected? - We could inherently value the preservation of humanity and its culture. However, my intuition would be that humanity would be resurrected in small numbers, and these humans might not even have very pleasant lives if they're being analysed or experimented on. Furthermore, the resurrected humans are likely to have very little agency, being controlled by technologically superior beings. Therefore it seems unlikely that the resurrected humans could create much value, much less achieve a grand future.

2) How valuable would information on humanity be to a civilisation that had technologically surpassed it? - The civilisation that resurrected humanity would probably be much more technologically advanced than humanity, and might even have its own AI as mentioned in the paper. It would then seem that it must have overcome many of the technological x-risks to reach that point, so information on humanity succumbing to one may not be that useful. It may not be prepared for certain natural x-risks that could have caused human extinction, but these seem much less likely than man-made x-risks.

Thanks again for such an interesting paper!

Comment by robertharling on X-risks to all life v. to humans · 2020-06-10T11:40:54.161Z · EA · GW

Thanks for your comment. "Caring about more" is quite a vague way of describing what I wanted to say. I think I was just trying to say that the risk of a true existential event from A is about 7 times greater than the risk from B (0.7/0.095 ≈ 7.368), so it would be 7 times, not 70 times?

Comment by robertharling on X-risks to all life v. to humans · 2020-06-10T11:28:08.778Z · EA · GW

Considering evolutionary timelines is definitely very hard because evolution is such a chaotic process. I don't have too much knowledge about evolutionary history and am hoping to research this more. I think after most human existential events, the complexity of the life that remains would be much greater than it was for most of the history of the Earth. So although it took 4.6 billion years for humans to evolve "from scratch", it could take significantly less time for intelligent life to re-evolve after an existential event, as a lot of the hard evolutionary work has already been done.

I could definitely believe it could take longer than 0.5 billion years for intelligent life to re-evolve, but I'd be very uncertain on that and give some credence that it could take significantly less time. For example, humanity evolved "only" 65 million years after the asteroid that caused the dinosaur extinction.

The consideration of how "inevitable" intelligence is in evolution is very interesting. One argument that high intelligence would be likely to re-emerge is that humanity has shown it to be a very successful strategy, so it would only take one species evolving high intelligence for there once again to be a large number of intelligent beings on Earth.

(Apologies for my slow reply to your comment!)

Comment by robertharling on X-risks to all life v. to humans · 2020-06-10T11:18:30.646Z · EA · GW

I believe it is the probability that a nuclear war occurs AND leads to human extinction, as described in The Precipice. I would agree that if it were just the probability of nuclear war, this would be too low; a large reason the number is small is the difficulty of a nuclear war causing human extinction.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-05T12:41:56.425Z · EA · GW

Thanks for the elaboration. I haven't given much consideration to "desired dystopias" before and they are really interesting to consider.

Another dystopian scenario to consider could be one in which humanity "strands" itself on Earth through resource depletion. This could also prevent future life from achieving a grand future.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-05T12:37:17.024Z · EA · GW

That makes a lot of sense. If the probability of intelligent life re-evolving is low, or if the probability of it doing morally valuable things is low then this reduces the importance of considering the effect on other species.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T11:24:31.151Z · EA · GW

Hi Michael, thanks for this comment!

This is a really good point and something I was briefly aware of when writing but did not take the time to consider fully. I've definitely conflated extinction risk with existential risk. I hope that when restricting everything I said just to extinction risk, the conclusion still holds.

A scenario where humanity establishes its own dystopia definitely seems comparable to the misaligned AGI scenario. Any "locked-in" totalitarian regime would probably prevent the evolution of other intelligent life. This could cause us to increase our estimate of the risk posed by such dystopian scenarios and weigh these risks more highly.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T11:19:40.382Z · EA · GW

Thanks for your comment, Matthew. This is definitely an interesting effect which I had not considered. I wonder whether, although the absolute AI risk may increase, it would not affect our actions, since we would have no way to influence the development of AI by future intelligent life once we were extinct. The only way I could think of to affect the risk of AI from future life would be to create an aligned AGI ourselves before humanity goes extinct!

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T11:15:21.080Z · EA · GW

Hi Michael, thank you very much for your comment.

I was not aware of some of these posts and will definitely look into them, thanks for sharing! I also eagerly await a compilation of crucial questions for longtermists which sounds very interesting and useful.

I definitely agree that I have not given consideration to what moral views re-evolved life would have, which is a big question. One assumption I may have implicitly used but not discussed is that

"While the probability of intelligent life re-evolving may be somewhat soluble and differ between different existential scenarios, the probability of it being morally aligned with humanity is not likely to differ in a soluble way between scenarios."

Therefore it should not affect how we compare different x-risks. For example, if we assumed re-evolved life had a 10% chance of being morally aligned with humanity, this would apply in all existential scenarios and so not affect how we compare them. The question of what being "morally aligned" with humanity means, and whether that is what we want, is also a big one, I appreciate. I avoided discussing the moral philosophy as I'm uncertain how to consider it, but I agree it is a crucial question.

I also completely agree that considerations of ETI could inform how we consider probabilities of future evolution. It is my planned first avenue of research for getting a better grasp on the probabilities involved.

Thanks again for your comment and for the useful links!

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T11:02:55.630Z · EA · GW

Thanks for your comment! I'm very interested to hear about a modelling approach. I'll look at your model and will probably have questions in the near-future!

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T10:58:46.839Z · EA · GW

Hi, thanks for your questions!

(1) I definitely agree with P1. For P2, would it not be the case that the risk of extinction of humans is strictly greater than the risk of extinction of humans and future possible intelligent life, as the latter is a conjunction that includes the former? Perhaps a second premise could instead be

P2 The best approaches for reducing human existential risk are not necessarily the best approaches for reducing existential risk to humans and all future possible intelligent life

With a conclusion

C We should focus on the best methods of preventing "total existential risk", not on the best methods of preventing "human existential risk"

(subject to appropriate expected value calculations e.g. preventing a human existential risk may in fact be the most cost effective way of reducing total existential risk).


(2) I think unfortunately I do not have the necessary knowledge to answer these questions. It is something I hope to research further though. It seems that the probability of re-evolution in different scenarios depends on lots of considerations, such as the Earth's environment after the event, the initial impact on a species, and the initial impact on other species. One thing I find interesting is to consider what impact things left behind by humanity could have on re-evolution. Humans may go extinct, but our buildings may survive to provide new biomes for species, and our technology may survive to be used by "somewhat"-intelligent life in the future.

Comment by robertharling on X-risks to all life v. to humans · 2020-06-04T10:39:48.829Z · EA · GW

Hi Carl,

Thank you very much for your comment! I agree with your point on the human extinction risks that 99% is probably not high enough to cause extinction. I think I wanted to provide examples of human extinction events, but should have been more careful about the exact values and situations I described.

On re-evolution after an asteroid impact, my understanding is that although species such as humans eventually evolved after the impact, had humanity existed at the time of the impact it would not have survived, as nearly all land mammals over 25 kg went extinct. So on biology alone, humans would have been unlikely to survive the impact. However, I agree our technology could massively alter the probability in our favour.

I hope that if the probabilities of human extinction from both events are lower, my comment on the importance of the effect on other species still holds.