Prabhat Soni's Shortform

post by Prabhat Soni · 2020-06-30T10:19:36.684Z · score: 2 (1 votes) · EA · GW · 26 comments

comment by Prabhat Soni · 2020-09-16T11:25:02.798Z · score: 6 (4 votes) · EA(p) · GW(p)

High impact career for Danish people: Influencing what will happen with Greenland

EDIT: Do see the comments if you're interested in this!

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably have fewer decisions to make, since they already have large populations and would likely see a smooth transition into billion+ person countries. Antarctica could be promising to influence, but it would be difficult for a single effective altruist since multiple large countries lay claim to Antarctica (i.e. more competition). Greenland, however, is much more interesting.

 

It's kinda easy for Danes to influence Greenland

Denmark is a small-ish country with a population of ~5.7 million people. There's really not much competition if one wants to enter politics (if you're a Dane you might correct me on this). The level of competition is much lower than for conventional EA careers, since you only need to compete with people within Denmark.

 

There are unsolved questions wrt Greenland

  1. There's a good chance Denmark will sell Greenland, because they could get absurd amounts of money for it. Moreover, Greenland is not of much value to them, since Denmark will mostly remain habitable and they don't have a large population to resettle. Do you sell Greenland to a peaceful/neutral country? To the highest bidder? Is it okay to sell it to a historically aggressive country? Are there some countries you want to avoid selling it to because they would gain too much influence? The USA, China and Russia have all shown interest in buying Greenland.
  2. Should Denmark just keep Greenland, allow mass immigration and become the next superpower?
  3. Should Greenland remain autonomous?

 

Importance

  1. Greenland, with a billion+ people living in it, could be the next superpower. Just as most emerging technologies (e.g. AI, biotechnology, nanotechnology) are developed in current superpowers like the USA and China, future technologies could be developed in Greenland.
  2. In a world of extreme climate change, it is possible that 1-2 billion people could live in Greenland. That's a lot of lives you could influence.
  3. Greenland has a strategic geographic location. If a country with bad intentions buys Greenland, that could be catastrophic for world peace.
comment by RyanCarey · 2020-09-16T12:53:51.170Z · score: 3 (2 votes) · EA(p) · GW(p)

The total drylands population is 35% of the world population (~6% from desert/semi-desert). The total number of migrants, however, is 3.5% of the world population. So less than 10% of those from drylands have left. And most such migrants move because of politics, war, or employment rather than climate. The number leaving because of climate is less (and possibly much less) than 5% of the drylands population.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants. Probably too few of these people will go to any country, let alone Greenland, to make it into a new superpower. But let's run the numbers for Greenland anyway. Of the world's 300M migrants, Greenland currently has only ~10k. So of an extra 50M, Greenland could be expected to take ~2k, so I'm coming in 5-6 orders of magnitude lower than the 1B figure.
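The status-quo estimate above can be reproduced with a quick back-of-the-envelope script (all input figures are taken from this comment):

```python
# Status-quo Fermi estimate of climate migration to Greenland
# (all input figures from the comment above).
new_drylands_pop = 1e9           # people newly in drylands/desert
migration_rate = 0.05            # assumed fraction of them who migrate
world_migrants = 300e6           # total migrants worldwide today
greenland_migrants = 10e3        # migrants currently in Greenland

new_migrants = new_drylands_pop * migration_rate       # 50M new migrants
greenland_share = greenland_migrants / world_migrants  # Greenland's current share
expected_to_greenland = new_migrants * greenland_share

print(f"{expected_to_greenland:,.0f}")  # ~1,667, i.e. on the order of 2k
```

That ~2k figure is what puts the estimate 5-6 orders of magnitude below 1B.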

It does still have some military relevance, and would be good to keep it neutral, or at least out of the hands of China/Russia.

comment by Prabhat Soni · 2020-09-18T20:44:17.060Z · score: 4 (3 votes) · EA(p) · GW(p)

Thanks Ryan for your comment!

It seems like we've identified a crux here: what will be the total number of people living in Greenland in 2100 / world with 4 degrees warming?

 

I have disagreements with some of your estimates.

The total drylands population is 35% of the world population

Large populations currently reside in places like India, China and Brazil. These currently non-drylands could be converted to drylands in the future (and also possibly desertified). Thus, the 35% figure could increase in the future.

So less than 10% of those from drylands have left.

Drylands are categorised into {desert, arid, semi-arid, dry sub-humid}. It's only when a place is in the desert category that people seriously consider moving out (for reference, all of California comes under the arid or semi-arid category). In the future, deserts could form a larger share of drylands, and less arid regions a smaller share. So you could have more than 10% of people from places called "drylands" leaving in the future.

The total number of migrants, however, is 3.5% of world population.

Yes, that is correct. But that is also a figure from 2019. A more relevant question is how many migrants there will be in 2100. I think it's quite obvious that as the Earth warms, the number of climate migrants will increase.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants.

I don't really agree with the 5% estimate. Specifically for desertified lands, I would guess the percentage of people migrating to be significantly higher.

Of the world's 300M migrants, Greenland currently has only ~10k.

This is a figure from 2020 and I don't think you can simply extrapolate this.

 

After revising my estimates to something more sensible, I'm coming up with ~50M people in Greenland. So Greenland would be far from being a superpower. I'm hesitant to share my calculations because my confidence in them is low -- I wouldn't be surprised if the actual number was up to 2 orders of magnitude smaller or greater.

A key uncertainty: does desertification of large regions imply that in-country / local migration is useless?

 

The world, 4 degrees warmer. A map from Parag Khanna's book Connectography
comment by RyanCarey · 2020-09-18T21:11:32.936Z · score: 2 (1 votes) · EA(p) · GW(p)

I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound for the number of people who would move to Greenland.

Based on current numbers, 3.5% of the world population are migrants, and 6% are in deserts. So that means less than 3.5/9.5 = 37% of desert populations have migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if people migrated uniformly according to land area, only 1.4% of migrants would move to Greenland (that's the fraction of land area occupied by Greenland). So an ultra-conservative upper bound for the number of people migrating to Greenland would be 1B × 0.37 × 0.2 × 0.014 ≈ 1M.

So my initial status-quo estimate was 1e3, and my ultra-conservative estimate was 1e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.
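For clarity, the upper-bound arithmetic can be written out as a short script, using the factors exactly as they are multiplied in the comment above:

```python
# Ultra-conservative upper bound for migration to Greenland
# (figures from the comment above).
newly_desert_pop = 1e9   # people newly finding themselves in desert
migrated_frac = 0.37     # < 3.5/9.5 of desert populations have migrated
climate_frac = 0.2       # generous share of migration driven by climate
greenland_land = 0.014   # Greenland's fraction of world land area

upper_bound = newly_desert_pop * migrated_frac * climate_frac * greenland_land
print(f"{upper_bound:,.0f}")  # 1,036,000, i.e. ~1e6
```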

comment by Prabhat Soni · 2020-09-22T23:25:32.602Z · score: 8 (2 votes) · EA(p) · GW(p)

Hmm, this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.

comment by Prabhat Soni · 2020-09-26T04:26:07.878Z · score: 4 (3 votes) · EA(p) · GW(p)

Among rationalist people and altruistic people, which are more likely, on average, to be attracted to effective altruism?

This has practical uses: if one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs).

I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

comment by markus_over · 2020-09-26T07:11:04.941Z · score: 6 (5 votes) · EA(p) · GW(p)

I don't have or know of any data (which doesn't mean much, to be fair), but my hunch would be that rationalist people who haven't heard of EA are, on average, probably more open to EA ideas than the average altruistic person who hasn't heard of EA. While altruistic people might generally agree with the core ideas, they may be less likely to actually apply them to their actions.

It's a vague claim though. I make these assumptions because, of the few dozen EAs I know personally, I'd very roughly estimate 2/3 of them come across as more rationalist than altruistic (if you had to choose which of the two they are); I'd further assume that in the general population, more people appear altruistic than rationalist. If rationalists are rarer in the general population yet more common among EAs, that would seem like evidence that they are a better match, so to speak. These are all just guesses without much to back them up, so I too would be interested in what other people think (or know).

comment by Prabhat Soni · 2020-09-28T04:05:42.145Z · score: 1 (1 votes) · EA(p) · GW(p)

Hmm, interesting ideas. I have one disagreement though: my best guess is that there are more rationalist people than altruistic people.

I think around 50% of people who study some quantitative/tech subject and have a good IQ qualify as rationalist (is this an okay proxy for rationalist people?). And my definition of an altruistic person is someone who makes career decisions primarily for altruistic reasons.

Based on these definitions, I think there are more rationalist people than altruistic people. Though this might be biased, since I study at a tech college (i.e. more rationalists) and live in India (i.e. fewer altruistic people, presumably because people tend to become altruistic once their basic needs are met).

comment by Prabhat Soni · 2020-10-20T16:39:01.168Z · score: 3 (2 votes) · EA(p) · GW(p)

I've never seen anyone explain EA using the Pareto principle (80/20 rule). The cause prioritisation / effectiveness part of EA is basically the Pareto principle applied to doing good. I'd guess 25-50% of the public knows of the Pareto principle, so this might be a good approach. Thoughts?

comment by xccf · 2020-10-21T04:38:58.089Z · score: 3 (2 votes) · EA(p) · GW(p)

That's a good point, it's not a connection I've heard people make before but it does make sense.

I'm a bit concerned that the message "you can do 80% of the good with only 20% of the donation" could be misinterpreted:

  • I associate the Pareto principle with saving time and money. EA isn't really a movement about getting people to decrease the amount of time and money they spend on charity, though; if anything, probably the opposite.
  • To put it another way, the top opportunities identified by EA still have room for more funding.  So the mental motion I want to instill is not about shaving away your low-impact charitable efforts, it's more about doubling down on high-impact charitable efforts that are underfunded (or discovering new high-impact charitable efforts).
  • We wouldn't want to imply that the remaining 20% of the good is somehow less valuable--it is more costly to access, but in principle if all of the low-hanging altruistic fruit is picked, there's no reason not to move on to the higher-hanging fruit.  The message "concentrate your altruism on the 80% and don't bother with the 20%" could come across as callous.  I would rather make a positive statement that you can do a lot of good surprisingly cheaply than a negative statement that you shouldn't ever do good inefficiently.

Nevertheless I think the 80/20 principle could be a good intuition pump for the idea that results are often disproportionate with effort and I appreciate your brainstorming :)

comment by Prabhat Soni · 2020-10-21T13:17:36.142Z · score: 1 (1 votes) · EA(p) · GW(p)

Hey, thanks for your reply. By the Pareto principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If this is easy to misinterpret (as you did), then it might not be a great idea :P The idea of fat-tailed distribution of impact of interventions might be a better alternative to this maybe?

comment by xccf · 2020-10-24T05:24:00.089Z · score: 2 (2 votes) · EA(p) · GW(p)

The idea of fat-tailed distribution of impact of interventions might be a better alternative to this maybe?

That sounds harder to misinterpret, yeah.

comment by Max_Daniel · 2020-10-21T15:39:19.293Z · score: 2 (1 votes) · EA(p) · GW(p)

See here [EA(p) · GW(p)] for some related material, in particular Owen Cotton-Barratt's talk Prospecting for Gold and the recent paper by Kokotajlo & Oprea.

comment by Prabhat Soni · 2020-10-12T16:41:29.681Z · score: 3 (2 votes) · EA(p) · GW(p)

Does a vaccine/treatment for malaria exist? If yes, why are bednets more cost-effective than providing the vaccine/treatment?

comment by Linch · 2020-10-13T01:36:05.348Z · score: 9 (4 votes) · EA(p) · GW(p)

There's only one approved malaria vaccine, and it's not very good (it requires 4 shots and gives only a ~36% reduction in the number of cases).

Anti-mosquito bednets have an additional advantage over malaria vaccines in being able to prevent mosquito-borne diseases other than malaria, though I don't know how big a deal this is in practice (e.g. I don't know how often the same area has both yellow fever and malaria).

comment by Prabhat Soni · 2020-10-13T03:43:59.041Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks! This was helpful!

comment by Prabhat Soni · 2020-10-04T23:31:39.223Z · score: 3 (2 votes) · EA(p) · GW(p)

Is it high impact to work in AI policy roles at Google, Facebook, etc? If so, why is it discussed so rarely in EA?

comment by lifelonglearner · 2020-10-05T00:06:31.649Z · score: 3 (2 votes) · EA(p) · GW(p)

I see it discussed sometimes in AI safety groups.

There are, for example, safety oriented teams at both Google Research and DeepMind.

But I agree it could be discussed more.

comment by Prabhat Soni · 2020-06-30T10:19:36.980Z · score: 3 (2 votes) · EA(p) · GW(p)

Changing behaviour of people to make them more longtermist

Can we use standard behavioral economics techniques like loss aversion (e.g. humanity will be lost forever), scarcity bias, framing bias and nudging to influence people to make longtermist decisions instead of neartermist ones? Is this even ethical, given moral uncertainty?

It would be awesome if you could direct me to any existing research on this!

comment by Ramiro · 2020-06-30T15:59:15.106Z · score: 1 (1 votes) · EA(p) · GW(p)

I think people already do some of this. I guess the rhetorical shift from x-risk reasoning ("hey, we're all gonna die!") to longtermist arguments ("imagine how wonderful the future can be after the Precipice...") is based on that.

However, I think that, besides cultural challenges, the greatest obstacle to longtermist reasoning in our societies (particularly in LMICs) is that we have an "intergenerational Tragedy of the Commons" aggravated by short-term bias (and hyperbolic discounting) and the representativeness heuristic (we've never observed human extinction). People don't usually think about the long-term future -- but even when they do, they don't want to trade their individual, present, certain welfare for a collective (and non-identifiable), future, uncertain welfare.

comment by Prabhat Soni · 2020-06-30T22:43:18.808Z · score: 2 (2 votes) · EA(p) · GW(p)

Hi Ramiro, thanks for your comment. Based on this post, we can think of 2 techniques to promote longtermism. The first is what I mentioned -- exploiting biases to get people inclined towards longtermism. The second is what you [might have] mentioned -- a more rationality-driven approach where people are made aware of their biases with respect to longtermism. I think your idea is better, since it is a more permanent-ish solution (there is security against future events that may attempt to bias an individual towards neartermism), has spillover effects into other aspects of rationality, and carries lower risk with respect to moral uncertainty (correct me if I'm wrong).

I agree with the several biases/decision-making flaws that you mentioned! Perhaps a sufficient level of rationality is a prerequisite to accepting longtermism. Maybe a promising EA cause area could be promoting rationality (such a cause area probably already exists, I guess).

comment by Prabhat Soni · 2020-09-11T16:57:36.862Z · score: 1 (1 votes) · EA(p) · GW(p)

Some good, interesting critiques to effective altruism.

Short version: read https://bostonreview.net/forum/logic-effective-altruism/peter-singer-reply-effective-altruism-responses (5-10 mins)

Longer version: start reading from https://bostonreview.net/forum/peter-singer-logic-effective-altruism (~ 1 hour)

I think these critiques are fairly comprehensive. They probably cover like 80-90% of all possible critiques.

comment by Benjamin_Todd · 2020-09-11T21:59:21.424Z · score: 8 (2 votes) · EA(p) · GW(p)

This is a big topic, but I think these critiques mainly fail to address the core ideas of EA (that we should seek the very best ways of helping), and instead criticise related ideas like utilitarianism or international aid. On the philosophy end of things, more here: https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff [EA · GW]

comment by Prabhat Soni · 2020-07-22T11:02:00.298Z · score: 1 (1 votes) · EA(p) · GW(p)

Should More EAs Focus on Entrepreneurship?

My argument for this is:

1. EAs want to solve problems in areas that are neglected/unpopular.

=> 2. Fewer jobs, etc. in those fields, and a lot of competition for jobs among existing EA orgs (e.g. GPI, FHI, OpenPhil, DeepMind, OpenAI, MIRI, 80K). I'm not sure, but I think there's an unnecessarily high amount of competition at the moment -- i.e. sufficiently qualified candidates are being rejected.

=> 3. It would be immensely beneficial to create new EA orgs that can absorb people.


Other questions:

  • Should we instead make existing orgs larger? Does the quality of orgs go down when you create a lot of them?
  • What about oligopoly over the market when there are very few orgs? (E.g., if GPI starts messing up consistently, for whatever reason, it is very bad for EA, since they are one of the very few orgs doing global priorities research.)
comment by Prabhat Soni · 2020-10-01T07:54:35.770Z · score: 1 (1 votes) · EA(p) · GW(p)

Wonderful to learn more about you!

Yeah, I completely agree with you that there is massive potential for EA in India. EA India is pretty small as of now: ~50 people (and ~35 people if you don't count foreigners doing projects in India).

Also, regarding introductions: I'll make e-mail introductions, so could you send me your e-mail?

Your ideas are indeed interesting. I'm far from an expert on this topic so I'll just send all the literature I know on this topic.

 

Recommended:

  • Future Perfect is an EA group/organisation at Vox that writes about EA-related stuff for mass media. You can see their website here and see a video about them here.
  • Regarding the "top 20 utilitarian profiles list": see https://80000hours.org/problem-profiles/#overall-list. They have ranked what they think are the top 9 problems. In fact, if you go to any of the individual problem profiles, you will notice they have given a quantitative score for scale, tractability and neglectedness. 80,000 Hours uses a quantitative framework to rank problems, which you can read about here.
  • Regarding a "wiki-type editable problems list", Rethink Priorities has launched a Priority Wiki, which you can check out here and here. The second link wasn't working when I tried but maybe you'll be luckier!
  • The Fidelity model.

 

Might be helpful: