Posts

Factors other than ITN? 2020-09-26T04:34:08.244Z · score: 33 (9 votes)
A List of EA Donation Pledges (GWWC, etc) 2020-08-08T15:26:50.884Z · score: 24 (11 votes)
Prabhat Soni's Shortform 2020-06-30T10:19:36.684Z · score: 2 (1 votes)

Comments

Comment by prabhat-soni on Institutions for Future Generations · 2020-10-27T16:27:25.758Z · score: 1 (1 votes) · EA · GW

Hey, thanks for writing this. You mention several age/time-related reforms: Longer Election Cycles, Legislative Youth Quotas, Age Limits on Electorate, Age-weighted Voting, Enfranchisement of the Young, and Guardianship Voting for the Very Young.

These reforms would only promote "short longtermism" (i.e. the next 50-100 years), while what we actually care about is "cosmic longtermism" (i.e. the next ~1 billion years). What are your thoughts on this?

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-21T13:17:36.142Z · score: 1 (1 votes) · EA · GW

Hey, thanks for your reply. By the Pareto Principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If it's that easy to misinterpret (as you did), it might not be a great framing :P  Maybe the idea of a fat-tailed distribution of intervention impact would be a better alternative?
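
To make the fat-tail framing concrete, here's a minimal illustrative sketch (my own toy numbers, not drawn from any EA data): draw per-intervention impacts from a heavy-tailed lognormal distribution and check what share of the total comes from the top 20% of interventions.

```python
import numpy as np

# Toy illustration (made-up parameters): if per-intervention impact is
# fat-tailed (here, lognormal), the top 20% of interventions end up
# accounting for most of the total impact -- a Pareto-style 80/20 pattern.
rng = np.random.default_rng(0)
impacts = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

impacts_sorted = np.sort(impacts)[::-1]               # largest first
top_fifth = impacts_sorted[: len(impacts_sorted) // 5]
share = top_fifth.sum() / impacts_sorted.sum()

print(f"Top 20% of interventions produce ~{share:.0%} of total impact")
# With sigma=2.0 this comes out to roughly 85-90%.
```

The exact share depends entirely on the assumed spread; the point is only that a fat-tailed distribution reproduces the 80/20 pattern without anyone designing it in.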

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-20T16:39:01.168Z · score: 3 (2 votes) · EA · GW

I've never seen anyone explain EA using the Pareto Principle (80/20 rule). The cause prioritisation / effectiveness part of EA is basically the Pareto principle applied to doing good. I'd guess 25-50% of the public knows of the Pareto principle. So, I think this might be a good approach. Thoughts?

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-13T03:43:59.041Z · score: 1 (1 votes) · EA · GW

Thanks! This was helpful!

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-12T16:41:29.681Z · score: 3 (2 votes) · EA · GW

Does a vaccine/treatment for malaria exist? If yes, why are bednets more cost-effective than providing the vaccine/treatment?

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-04T23:31:39.223Z · score: 3 (2 votes) · EA · GW

Is it high impact to work in AI policy roles at Google, Facebook, etc? If so, why is it discussed so rarely in EA?

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-10-01T07:54:35.770Z · score: 1 (1 votes) · EA · GW

Wonderful to learn more about you!

Yeah, I completely agree with you that there is massive potential for EA in India. EA India is pretty small as of now: ~50 people (and ~35 people if you don't count foreigners doing projects in India).

Also, regarding introductions: I'll make the e-mail introductions, so could you send me your e-mail address?

Your ideas are indeed interesting. I'm far from an expert on this topic, so I'll just share all the literature I know of on it.

 

Recommended:

  • Future Perfect is an EA-aligned section at Vox that writes about EA-related topics for a mass-media audience. You can see their website here and a video about them here.
  • Regarding the "top 20 utilitarian profiles list": See https://80000hours.org/problem-profiles/#overall-list. They have ranked what they think are the top 9 problems. In fact, if you go to any of the individual problem profiles, you will notice they have given a quantitative score for scale, tractability and neglectedness. 80,000 Hours uses a quantitative framework to rank problems, which you can read about here.
  • Regarding a "wiki-type editable problems list": Rethink Priorities has launched a Priority Wiki, which you can check out here and here. The second link wasn't working when I tried, but maybe you'll be luckier!
  • The Fidelity model.

 

Might be helpful:

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-28T04:05:42.145Z · score: 1 (1 votes) · EA · GW

Hmm, interesting ideas. I have one disagreement, though: my best guess is that there are more rationalist people than altruistic people.

I think around 50% of people who study a quantitative/tech subject and have a good IQ qualify as rationalists (is this an okay proxy for rationalist people?). And my definition of an altruistic person is someone who makes career decisions primarily for altruistic reasons.

Based on these definitions, I think there are more rationalist people than altruistic people. Though this might be biased, since I study at a tech college (i.e. more rationalists) and live in India (i.e. fewer altruistic people, presumably because people tend to become altruistic only once their basic needs are met).

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-26T04:26:07.878Z · score: 4 (3 votes) · EA · GW

On average, who is more likely to be attracted to effective altruism: rationalist people or altruistic people?

This has practical uses. If one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs).

I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-22T23:25:32.602Z · score: 8 (2 votes) · EA · GW

Hmm, this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-18T20:44:17.060Z · score: 4 (3 votes) · EA · GW

Thanks Ryan for your comment!

It seems like we've identified a crux here: how many people will be living in Greenland in 2100, in a world with 4 degrees of warming?

 

I have disagreements with some of your estimates.

The total drylands population is 35% of the world population

Large populations currently reside in places like India, China and Brazil. These currently non-dryland regions could turn into drylands in the future (and possibly even be desertified). Thus, the 35% figure could increase in the future.

So less than 10% of those from drylands have left.

Drylands are categorised into {desert, arid, semi-arid, dry sub-humid}. It's only when a place is in the desert category that people seriously consider moving out (for reference, all of California falls under the arid or semi-arid categories). In the future, deserts could form a larger share of drylands, and less arid regions a smaller share. So, you could have more than 10% of people from places called "drylands" leaving in the future.

The total number of migrants, however, is 3.5% of world population.

Yes, that is correct. But that is also a figure from 2019. A more relevant question is how many migrants there will be in 2100. I think it's quite obvious that as the Earth warms, the number of climate migrants will increase.

So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants.

I don't really agree with the 5% estimate. Specifically for desertified lands, I would guess the percentage of people migrating to be significantly higher.

Of the world's 300M migrants, Greenland currently has only ~10k.

This is a figure from 2020, and I don't think you can simply extrapolate it.

 

After revising my estimates to something more sensible, I'm arriving at ~50M people in Greenland. So, Greenland would be far from being a superpower. I'm hesitant to share my calculations because my confidence in them is low -- I wouldn't be surprised if the actual number were up to 2 orders of magnitude smaller or larger.

A key uncertainty: does desertification of large regions imply that in-country / local migration is useless?

 

The world, 4 degrees warmer. A map from Parag Khanna's book Connectography
Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-16T11:25:02.798Z · score: 6 (4 votes) · EA · GW

High impact career for Danish people: Influencing what will happen with Greenland

EDIT: Do see the comments if you're interested in this!

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably have fewer decisions to make, since they already have large populations and will likely see a smooth transition into billion+ person countries. Antarctica could be promising to influence, but it will be difficult for a single effective altruist since multiple large countries lay claim to Antarctica (i.e. more competition). Greenland, however, is much more interesting.

 

It's kinda easy for Danes to influence Greenland

Denmark is a small-ish country with a population of ~5.7 million people. There's really not much competition if one wants to enter politics (if you're a Dane, you might correct me on this). The level of competition is much lower than for conventional EA careers, since you only need to compete with people within Denmark.

 

There are unsolved questions wrt Greenland

  1. There's a good chance Denmark will sell Greenland, because they could get absurd amounts of money for it. Moreover, Greenland is not of much value to them, since Denmark will mostly remain habitable and they don't have a large population to resettle. Do you sell Greenland to a peaceful/neutral country? To the highest bidder? Is it okay to sell it to a historically aggressive country? Are there some countries you want to avoid selling it to because they would gain too much influence? The USA, China and Russia have shown interest in buying Greenland.
  2. Should Denmark just keep Greenland, allow mass immigration and become the next superpower?
  3. Should Greenland remain autonomous?

 

Importance

  1. Greenland, with a billion+ people living in it, could be the next superpower. Just as most emerging technologies (e.g. AI, biotechnology, nanotechnology) are developed in current superpowers like the USA and China, future technologies could be developed in Greenland.
  2. In a world of extreme climate change, it is possible that 1-2 billion people could live in Greenland. That's a lot of lives you could influence.
  3. Greenland has a strategic geographic location. If a country with bad intentions buys Greenland, that could be catastrophic for world peace.
Comment by prabhat-soni on Some thoughts on EA outreach to high schoolers · 2020-09-15T17:42:48.816Z · score: 5 (4 votes) · EA · GW

Another approach targeting high-schoolers that I can think of is promoting philosophy education in schools. How does EA outreach in schools compare with this?

Comment by prabhat-soni on RyanCarey's Shortform · 2020-09-11T17:04:07.740Z · score: 7 (5 votes) · EA · GW

I'd be curious to discuss whether there's a case for Moscow. 80,000 Hours lists being a Russia or India specialist under "Other paths we're excited about". The case would probably revolve around Russia's huge nuclear arsenal and its efforts to build AI. If climate change were to become really bad (say 4+ degrees of warming), Russia (along with Canada and New Zealand) would become the new hub for immigration given its geography -- and this alone could make it one of the most influential countries in the world.

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-09-11T16:57:36.862Z · score: 1 (1 votes) · EA · GW

Some good, interesting critiques of effective altruism.

Short version: read https://bostonreview.net/forum/logic-effective-altruism/peter-singer-reply-effective-altruism-responses (5-10 mins)

Longer version: start reading from https://bostonreview.net/forum/peter-singer-logic-effective-altruism (~ 1 hour)

I think these critiques are fairly comprehensive. They probably cover like 80-90% of all possible critiques.

Comment by prabhat-soni on A central directory for open research questions · 2020-09-11T06:39:47.798Z · score: 1 (1 votes) · EA · GW

Glad to hear this!

Comment by prabhat-soni on A central directory for open research questions · 2020-09-08T03:23:30.146Z · score: 1 (1 votes) · EA · GW

Yep, that's what I meant by "open source"! Awesome to hear you're taking this forward!

Comment by prabhat-soni on A central directory for open research questions · 2020-09-07T11:59:51.219Z · score: 3 (2 votes) · EA · GW

Hey, thanks for putting this together. I think it would be quite valuable to have these lists put up on Effective Thesis's research agenda page. My reasoning is that Effective Thesis's research agenda page probably gets more viewers than this EA Forum post or its Google Doc version.

Additionally, if you agree with the above, I'd be curious to hear your thoughts on how we could make Effective Thesis's research agenda page open source.

Comment by prabhat-soni on Suggest a question for Bruce Friedrich of GFI · 2020-09-07T06:05:07.770Z · score: 5 (3 votes) · EA · GW

Are you more optimistic about the future of plant-based meat or cell-based meat? Why?

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-10T12:58:08.503Z · score: 1 (1 votes) · EA · GW

Thanks, added!

Comment by prabhat-soni on How are the EA Funds default allocations chosen? · 2020-08-10T12:53:04.815Z · score: 3 (3 votes) · EA · GW

Thanks!

Comment by prabhat-soni on How are the EA Funds default allocations chosen? · 2020-08-10T12:30:54.137Z · score: 2 (2 votes) · EA · GW

The recommended distribution between the four focus areas of effective altruism is as follows:


Could you please mention your source for this?

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-09T10:29:21.492Z · score: 2 (2 votes) · EA · GW

Thanks! Added!

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-09T10:29:02.766Z · score: 2 (2 votes) · EA · GW

Thanks!

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-08T15:34:49.924Z · score: 3 (2 votes) · EA · GW

Please comment in this thread with any pledges I've missed or new ones!

Comment by prabhat-soni on Addressing Global Poverty as a Strategy to Improve the Long-Term Future · 2020-08-08T06:52:42.925Z · score: 6 (5 votes) · EA · GW

I think additional research on this would be beneficial. This question is also a part of the Global Priorities Institute's research agenda.

Related questions the Global Priorities Institute is interested in:

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-07T07:33:01.597Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification!

Comment by prabhat-soni on Bored at home? Contribute to the EA Wiki! · 2020-08-06T11:23:57.946Z · score: 7 (2 votes) · EA · GW

I am skeptical that making an EA Wiki is better than uploading EA-relevant articles to Wikipedia (https://www.wikipedia.org/).

There are many other arguments for why it wouldn't be a good idea, but I want to focus on the target group.

Case 1: The target group is EAs. In this case, the EA Wiki would probably host in-depth/comprehensive knowledge that is not available in the places EAs normally visit, like 80000hours.org or effectivealtruism.org. It would serve questions like "Has anyone in EA ever talked about __?". As of now, most of this "in-depth" knowledge exists in the form of EA Forum posts and comments, so most of the content on the EA Wiki would be copy-pasted from the EA Forum. The EA Forum is well-searchable, and it already fulfills this purpose. For long-run questions like "how should EA content be organized 5 years from now?", an EA Wiki may be more promising. But, for the reasons written above, it is difficult to see any real use for it in the short term (e.g. 1-2 years).

Case 2: The target group is non-EAs. The EA Wiki wouldn't show up in search engines. Period. Wikipedia articles appear much more easily in search engines and are linked to by other Wikipedia articles. A much better idea would be to upload EA-relevant articles to Wikipedia. Also, there is more scope for extending EA to other languages, since Wikipedia supports articles in 5-10 other languages.

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-06T10:28:40.819Z · score: 1 (1 votes) · EA · GW

I am unable to create tables, upload images, etc. in a comment. I think this would be useful. Is this a deliberate design choice, or will it be fixed later?

Comment by prabhat-soni on Bored at home? Contribute to the EA Wiki! · 2020-08-06T10:13:56.502Z · score: 1 (1 votes) · EA · GW

There's been a bunch of past discussion concerning an EA Wiki, and it took me a few hours to find it all. I'm listing the links to that past discussion below so that it saves someone time if they choose to go down this rabbit hole!

 

Possible candidates/sources for EA Wiki:

 

Dead URLs:


Relevant Forum articles:

Comment by prabhat-soni on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2020-08-04T03:26:03.089Z · score: 1 (1 votes) · EA · GW

Thanks for this post, it was very insightful. Do you have any ideas on the talent/funding gap scenario for other EA cause areas like global priorities research (I believe this doesn't come under meta EA), biosecurity, nuclear security, improving institutional decision making, etc?

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-02T02:34:43.162Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-01T12:15:50.429Z · score: 1 (1 votes) · EA · GW

How do you create tables?

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-29T22:32:16.329Z · score: 5 (3 votes) · EA · GW

Yes, I completely agree. In fact, most wars would probably require local-level knowledge and need to be prioritized by local altruists.

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-26T20:52:32.162Z · score: 6 (3 votes) · EA · GW

I think your question is: is there some problem/intervention that is high-impact but that EA has missed out on because it is specific to my country, and so nobody has thought of it?


Let's go through which countries are good for specific causes:

  • Artificial General Intelligence: USA, China, UK
  • Engineered Pandemics: USA, China
  • Earning-to-give: rich countries like USA, Qatar, Singapore, Norway, UAE, Luxembourg, Saudi Arabia, Switzerland
  • Nuclear Security: Russia, USA, North Korea
  • Climate Change: Rapidly developing countries like Brazil and India, and countries that currently emit a lot of greenhouse gases, like the USA, UK, etc.
  • Improving Institutional Decision Making: Corrupt countries like Colombia, Brazil, India, Mexico, Ghana and Bolivia, and influential countries like the USA and UK
  • Malaria Interventions: A lot of the countries in Sub-Saharan Africa
  • Influencing long-term future: Potential superpowers like Russia, China, India, Brazil
  • Alternative meats: Brazil, China, USA, Israel, India
  • Food/Water Fortification: India, West African countries

The countries that are good for specific problems/interventions are good because they exhibit certain "structural" properties. For example, countries good for earning to give are rich; countries good for factory farming have high consumption of meat; countries good for institutional decision making are corrupt or influential; countries good for influencing long-term future are potential superpowers; and so on.

These "structural" properties are present in multiple (on average around 5) countries, and thus there are around 5 countries that are high-impact for a specific cause area/intervention. Also, these countries are generall geographically and culturally dispersed - often belonging to different continents.

Coming back to the original question: Is there some problem/intervention that is high-impact that EA has missed out because it is specific to my country, and so nobody has thought of it?

If what I have argued above is correct, the premise that "a problem/intervention is specific to my country" is generally false. Going by the trend that the top ~10 problems/interventions today are not region-specific, I see no reason why a very promising region-specific problem/intervention would be found. And so I argue that region-level cause prioritization research is not particularly valuable.


EDIT: I'm proposing that a majority of the promising problems are not restricted to a particular region. Of course, there are some exceptions to this, like war, US immigration, and (maybe) health development in Sub-Saharan Africa.

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-26T19:22:39.340Z · score: 8 (4 votes) · EA · GW

I think this is a very relevant point. I think (correct me if I'm wrong) the effectiveness of the best intervention in the world >>> the effectiveness of the best intervention in a random country X. So, it would be more beneficial to have 100 donors for effective global issues compared to 500 donors for effective national issues.
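
One toy way to make this explicit (my own framing, with a made-up effectiveness ratio k, not something from the original comment): if the best global intervention is k times as cost-effective as the best national one, and every donor gives the same amount, then

$$100 \cdot k \cdot E_{\text{national}} > 500 \cdot E_{\text{national}} \iff k > 5,$$

so 100 globally-aligned donors do more good than 500 nationally-aligned ones whenever the effectiveness gap exceeds 5x -- and the ">>>" claim above is that the gap is far larger than that.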

A caveat, however, is value promotion. This is difficult to measure or quantify. There is a chance of large spillover effects due to more people having an "effective giving" mindset. These people may further spread the idea of effective giving, or may become globally aligned in the future. Off the top of my head, I think the spillover effects would be rather modest, but we'd probably need more "hard evidence" for this argument.

Comment by prabhat-soni on Should we think more about EA dating? · 2020-07-26T19:09:44.567Z · score: 10 (3 votes) · EA · GW

I don't think this idea is very practical -- at least for the next few years. EA is a very global and spread-out community. Directly quoting the EA Survey 2019 Series: Geographic Distribution of EAs:

In the figure below it is clear that the number of EAs in the top “major hubs” is dwarfed by the number of EAs in “Other” cities which are not named in the figure below due to having fewer than 10 EAs.

Link to the figure they were talking about.

Let's take an extreme case, where you happen to live in the city with the highest number of EAs (i.e. the San Francisco Bay Area). Even that is only ~150 EAs, divided into ~100 males and ~50 females. That is still a "barely enough" selection pool, due to the low number of people.

Of course, if you're fine with long-distance/virtual dating, then that's a different story.

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-07-22T11:02:00.298Z · score: 1 (1 votes) · EA · GW

Should More EAs Focus on Entrepreneurship?

My argument for this is:

1. EAs want to solve problems in areas that are neglected/unpopular.

=> 2. Fewer jobs, etc. in those fields, and a lot of competition for jobs at existing EA orgs (e.g. GPI, FHI, Open Phil, DeepMind, OpenAI, MIRI, 80K). I'm not sure, but I think there's an unnecessarily high amount of competition at the moment -- i.e. sufficiently qualified candidates are being rejected.

=> 3. It is immensely beneficial to create new EA orgs that can absorb people.


Other questions:

  • Should we instead make existing orgs larger? Does quality of orgs go down when you create a lot of orgs?
  • What about an oligopoly over the market when there are very few orgs? (E.g. if, for whatever reason, GPI starts messing up consistently, that is very bad for EA, since they are one of the very few orgs doing global priorities research.)
Comment by prabhat-soni on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T10:46:37.288Z · score: 5 (3 votes) · EA · GW

This was a very enjoyable post! You frequently analysed yourself from a third-person viewpoint, and were very skeptical of your own claims -- which is very healthy :)

Related to poverty eradication / systemic change

1. How exactly do you think we should measure the poverty line? Relative poverty? Absolute poverty? Enough money to buy x bottles of water a day? Enough money to produce x units of happiness?

2. Neo-colonialism has expanded beyond Europe and the US. Apparently, China is also doing this: China gives loans to poorer countries for the development of ports, and when those countries default on their debt, China seizes control of the ports. And what are your opinions on neo-colonialism between different parts of the same country?

3. Would de-growth result in better income equality and also lower total economic growth? If so, could you elaborate on what this tradeoff looks like (preferably in a quantitative sense)?

4. Has the amount of colonialism/neo-colonialism been increasing, decreasing, or staying the same over the past ~100 years?

5. You mentioned using GPI instead of GDP as a national performance index. What do you think are the chances of GPI gaining widespread acceptance?

Related to personal career plans

1. You expressed a LOT of interest in Economics, and some interest in Law. What are your thoughts on a Master's in Public Policy?

2. Are entrepreneurial skills a rare asset within EA? What does the supply and demand of entrepreneurial skills in EA look like?

3. You mentioned that even big tech companies aren't able to achieve large amounts of change. I would be a little skeptical of this. One counter-example is that American English is slowly replacing British English, even in countries that historically spoke British English. I think one of the biggest reasons for this is popular software like MS Word, Google Docs and Google Search having American English as the default language. However, I have a feeling that large changes like this generally happen only when a company is REALLY successful/popular (I'm not sure though).

Comment by prabhat-soni on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T09:08:43.437Z · score: 2 (2 votes) · EA · GW

Thoughts on modifications/improvements to The Windfall Clause?

Comment by prabhat-soni on Does generality pay? GPT-3 can provide preliminary evidence. · 2020-07-13T10:22:32.178Z · score: 2 (2 votes) · EA · GW

It seems like the hyperlink to the arXiv page is invalid (i.e. when you click on the arXiv link).

Comment by prabhat-soni on KR's Shortform · 2020-07-08T00:44:19.816Z · score: 6 (2 votes) · EA · GW

I may have misunderstood your question, so there's a chance that this is a tangential answer.

I think one mistake humans make is overconfidence in specific long-term predictions. "Specific" here means things like predicting when a particular technology will arrive, when we will hit 3 degrees of warming, when we will hit 11 billion population, etc.

I think the capacity of even smart humans to reasonably (e.g. with >50% accuracy) predict when a specific event will occur is somewhat low; I would estimate it extends only about 20-40 years out from when they are living.

You ask: "if you were alive in 1920 trying to have the largest possible impact today", what would you do? I would acknowledge that I cannot (with reasonable accuracy) predict the thing that will have "the largest possible impact in 2020" (which is a very specific thing to predict) and go with broad-based interventions (which are a more sure-shot answer), like improving international relations, promoting moral values, promoting education, promoting democracy, promoting economic growth, etc. (these are sub-optimal answers, but they're probably the best I could do).

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-06-30T22:43:18.808Z · score: 2 (2 votes) · EA · GW

Hi Ramiro, thanks for your comment. Based on this post, we can think of 2 techniques to promote longtermism. The first is what I mentioned - exploiting biases to get people inclined towards longtermism. And the second is what you [might have] mentioned - a more rationality-driven approach where people are made aware of their biases with respect to longtermism. I think your idea is better, since it is a more permanent-ish solution (there is security against future events that may attempt to bias an individual towards neartermism), has spillover effects into other aspects of rationality, and has lower risk with respect to moral uncertainty (correct me if I'm wrong).

I agree with the several biases/decision-making flaws that you mentioned! Perhaps a sufficient level of rationality is a prerequisite to accepting longtermism. Maybe a promising EA cause area could be promoting rationality (such a cause area probably already exists, I guess).

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-06-30T10:19:36.980Z · score: 3 (2 votes) · EA · GW

Changing behaviour of people to make them more longtermist

Can we use standard behavioral economics techniques like loss aversion (e.g. humanity will be lost forever), scarcity bias, framing bias and nudging to influence people to make longtermist decisions instead of neartermist ones? Is this even ethical, given moral uncertainty?

It would be awesome if you could direct me to any existing research on this!

Comment by prabhat-soni on kbog's Shortform · 2020-06-28T18:44:08.654Z · score: 1 (1 votes) · EA · GW

Why couldn't a manual of organizational best practices from non-EA organisations (I'm guessing there are probably many such manuals or other ways of communicating best practices) suffice? Which areas would it be unable to cover when applied directly to EA organisations? Are these areas particularly important to cover?

Comment by prabhat-soni on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T20:55:27.551Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification, Brendon!

Comment by prabhat-soni on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T15:24:55.884Z · score: 7 (5 votes) · EA · GW
We may need to invest more to tackle future problems

Which types of "investments" are you talking about? Are they specifically financial investments, or a broader range of investments?

In case you mean a broader range of investments, such investments could include: building the EA movement, making good moral values a social norm, developing better technologies that could help us tackle unforeseen problems in the future, and improving the biological intelligence level of humans. This definition could get problematic, since many of these investments are separate cause areas themselves.

Comment by prabhat-soni on [Cross Post] Why China could be a very important country. · 2020-06-14T08:18:34.714Z · score: 2 (2 votes) · EA · GW
I've also heard that countries like India and Russia also have a large amount of potential; they may get their own posts.

I think an interesting question is: how does the importance of China, Russia, India (and a few other countries) compare? If we could get a quantitative answer to this question, it would help guide how we spend our resources in these high-profile, emerging-EA locations.

Comment by prabhat-soni on X-risks to all life v. to humans · 2020-06-10T15:26:03.585Z · score: 1 (1 votes) · EA · GW

Oh sorry, I must've misread! So the issue seems to be with the number 0.095%. The chance of a true existential event in B) would be 0.01% * 95% = 0.0095% (and not 0.095%). And this leads us to 0.7 / 0.0095 =~ 73.68.
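
Written out as a worked equation (assuming, as I read the thread, that 0.7 is the corresponding figure for the other scenario that the ratio is being taken against):

$$0.01\% \times 95\% = 0.0095\%, \qquad \frac{0.7}{0.0095} \approx 73.68$$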

Comment by prabhat-soni on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-10T08:27:47.824Z · score: 1 (1 votes) · EA · GW

Hey, the hyperlinks of the 'homepage' and 'GitHub' URLs are wrong