Posts

A List of EA Donation Pledges (GWWC, etc) 2020-08-08T15:26:50.884Z · score: 17 (8 votes)
Prabhat Soni's Shortform 2020-06-30T10:19:36.684Z · score: 2 (1 votes)

Comments

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-09T10:29:21.492Z · score: 2 (2 votes) · EA · GW

Thanks! Added!

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-09T10:29:02.766Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by prabhat-soni on A List of EA Donation Pledges (GWWC, etc) · 2020-08-08T15:34:49.924Z · score: 3 (2 votes) · EA · GW

Please comment in this thread with any pledges I've missed or new ones!

Comment by prabhat-soni on Addressing Global Poverty as a Strategy to Improve the Long-Term Future · 2020-08-08T06:52:42.925Z · score: 4 (4 votes) · EA · GW

I think additional research on this would be beneficial. This question is also a part of the Global Priorities Institute's research agenda.

Related questions the Global Priorities Institute is interested in:

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-07T07:33:01.597Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification!

Comment by prabhat-soni on Bored at home? Contribute to the EA Wiki! · 2020-08-06T11:23:57.946Z · score: 1 (1 votes) · EA · GW

I am skeptical that making an EA Wiki is better than uploading EA-relevant articles to Wikipedia (https://www.wikipedia.org/).

There are many other arguments for why it wouldn't be a good idea, but I want to focus on the target group.

Case 1: The target group is EAs. In this case, the EA Wiki would probably host in-depth/comprehensive knowledge that is not available in the places EAs normally visit, like 80000hours.org or effectivealtruism.org. It would serve questions like "Has anyone in EA ever talked about __?". As of now, most of this "in-depth" knowledge exists in the form of EA Forum posts and comments, so most of the content on the EA Wiki would be copy-pasted from the EA Forum. The EA Forum is easily searchable and already fulfills this purpose. For long-run questions like "how should EA content be organized in the long run (e.g. 5 years later)?", an EA Wiki may be more promising. But for the reasons above, it is difficult to see any real use for it in the short term (e.g. 1-2 years).

Case 2: The target group is non-EAs. The EA Wiki wouldn't show up in search engines. Period. Wikipedia articles appear much more readily in search engines and are linked to by other Wikipedia articles. A much better idea would be to upload EA-relevant articles to Wikipedia. Also, there is more scope for extending EA to other languages, since Wikipedia supports articles in many other languages.

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-06T10:28:40.819Z · score: 1 (1 votes) · EA · GW

I am unable to create tables, upload images, etc in a comment. I think this would be useful. Is this a deliberate design choice, or will it be fixed later?

Comment by prabhat-soni on Bored at home? Contribute to the EA Wiki! · 2020-08-06T10:13:56.502Z · score: 1 (1 votes) · EA · GW

There's been a bunch of past discussion concerning an EA Wiki, and it took me a few hours to find it all. I'm writing the links to past discussion below so that it saves someone time if they choose to go down this rabbit hole!


Possible candidates/sources for EA Wiki:


Dead URLs:


Relevant Forum articles:

Comment by prabhat-soni on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2020-08-04T03:26:03.089Z · score: 1 (1 votes) · EA · GW

Thanks for this post -- it was very insightful. Do you have any ideas on the talent/funding gap situation for other EA cause areas like global priorities research (I believe this doesn't come under meta-EA), biosecurity, nuclear security, improving institutional decision making, etc.?

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-02T02:34:43.162Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by prabhat-soni on EA Forum update: New editor! (And more) · 2020-08-01T12:15:50.429Z · score: 1 (1 votes) · EA · GW

How do you create tables?

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-29T22:32:16.329Z · score: 5 (3 votes) · EA · GW

Yes, I completely agree. In fact, most wars would probably require local-level knowledge and need to be prioritized by local altruists.

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-26T20:52:32.162Z · score: 6 (3 votes) · EA · GW

I think your question is: Is there some high-impact problem/intervention that EA has missed because it is specific to my country, and so nobody has thought of it?


Let's go through which countries are good for specific causes:

  • Artificial General Intelligence: USA, China, UK
  • Engineered Pandemics: USA, China
  • Earning-to-give: rich countries like USA, Qatar, Singapore, Norway, UAE, Luxembourg, Saudi Arabia, Switzerland
  • Nuclear Security: Russia, USA, North Korea
  • Climate Change: rapidly developing countries like Brazil and India, and countries that currently emit a lot of greenhouse gases, like the USA, UK, etc.
  • Improving Institutional Decision Making: corrupt countries like Colombia, Brazil, India, Mexico, Ghana, and Bolivia, and influential countries like the USA and UK
  • Malaria Interventions: A lot of the countries in Sub-Saharan Africa
  • Influencing long-term future: Potential superpowers like Russia, China, India, Brazil
  • Alternative meats: Brazil, China, USA, Israel, India
  • Food/Water Fortification: India, West African countries

The countries that are good for specific problems/interventions are good because they exhibit certain "structural" properties. For example, countries good for earning to give are rich; countries good for factory farming interventions have high meat consumption; countries good for improving institutional decision making are corrupt or influential; countries good for influencing the long-term future are potential superpowers; and so on.

These "structural" properties are present in multiple countries (on average, around 5), and thus there are around 5 countries that are high-impact for a specific cause area/intervention. Also, these countries are generally geographically and culturally dispersed -- often belonging to different continents.

Coming back to the original question: Is there some problem/intervention that is high-impact that EA has missed out because it is specific to my country, and so nobody has thought of it?

If what I have argued above is correct, the premise that "a problem/intervention is specific to my country" is generally false. Going by the trend that the top ~10 problems/interventions today are not region-specific, I see no reason why a very promising region-specific problem/intervention would be found. And so I argue that region-level cause prioritization research is not particularly valuable.


EDIT: I'm proposing that a majority of the promising problems are not restricted to a particular region. Of course, there are some exceptions to this, like war, US immigration, and (maybe) health development in Sub-Saharan Africa.

Comment by prabhat-soni on Is region-level cause prioritization research valuable to spot promising long-term priority causes worldwide? · 2020-07-26T19:22:39.340Z · score: 8 (4 votes) · EA · GW

I think this is a very relevant point. I think (correct me if I'm wrong) the effectiveness of the best intervention in the world >>> the effectiveness of the best intervention in a random country X. So it would be more beneficial to have 100 donors for effective global issues than 500 donors for effective national issues.
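To make this concrete with purely illustrative numbers (the 10x factor below is my assumption, not a figure from the post or the survey): if the best global intervention is 10x as effective per donor as the best national one, then

$$100 \text{ donors} \times 10 = 1000 > 500 = 500 \text{ donors} \times 1$$

so the smaller globally-aligned group still does roughly twice as much good.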

A caveat, however, is value promotion. This is difficult to measure or quantify. There is a chance of large spillover effects due to more people having an "effective giving" mindset: these people may further spread the idea of effective giving, or may become globally aligned in the future. Off the top of my head, I think the spillover effects would be rather modest, but we'd probably need more "hard evidence" for this argument.

Comment by prabhat-soni on Should we think more about EA dating? · 2020-07-26T19:09:44.567Z · score: 10 (3 votes) · EA · GW

I don't think this idea is very practical -- at least for the next few years. EA is a very global and spread-out community. Quoting directly from the EA Survey 2019 Series: Geographic Distribution of EAs:

In the figure below it is clear that the number of EAs in the top “major hubs” is dwarfed by the number of EAs in “Other” cities which are not named in the figure below due to having fewer than 10 EAs.

Link to the figure they were talking about.

Let's take an extreme case, where you happen to live in the city with the highest number of EAs (i.e. the San Francisco Bay Area). Even that is only ~150 EAs, divided into ~100 males and ~50 females -- a "barely enough" selection pool, due to the low number of people.

Of course, if you're fine with long-distance/virtual dating, then that's a different story.

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-07-22T11:02:00.298Z · score: 1 (1 votes) · EA · GW

Should More EAs Focus on Entrepreneurship?

My argument for this is:

1. EAs want to solve problems in areas that are neglected/unpopular.

=> 2. Fewer jobs, etc. in those fields, and a lot of competition for jobs among the existing EA orgs (e.g. GPI, FHI, OpenPhil, DeepMind, OpenAI, MIRI, 80K). I'm not sure, but I think there's an unnecessarily high amount of competition at the moment -- i.e. sufficiently qualified candidates are being rejected.

=> 3. It would be immensely beneficial to create new EA orgs that can absorb people.


Other questions:

  • Should we instead make existing orgs larger? Does the quality of orgs go down when you create a lot of orgs?
  • What about an oligopoly over the market when there are very few orgs? (E.g., if GPI starts messing up consistently for whatever reason, it is very bad for EA, since they are one of the very few orgs doing global priorities research.)

Comment by prabhat-soni on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T10:46:37.288Z · score: 4 (2 votes) · EA · GW

This was a very enjoyable post! You frequently analysed yourself from a third-person viewpoint and were very skeptical of your own claims -- which is very healthy :)

Related to poverty eradication / systemic change

1. How exactly do you think we should measure the poverty line? Relative poverty? Absolute poverty? Enough money to buy x bottles of water a day? Enough money to produce x units of happiness?

2. Neo-colonialism has expanded beyond Europe and the US. Apparently, China is also doing this: China gives loans to poorer countries for the development of ports, and when those countries default on their debt, China seizes control of the ports. Also, what are your opinions on neo-colonialism between different parts of the same country?

3. Would de-growth result in better income equality but also lower total economic growth? If so, could you elaborate on what this tradeoff looks like (preferably in a quantitative sense)?

4. Has the amount of colonialism/neo-colonialism increased, decreased, or stayed the same over the past ~100 years?

5. You mentioned using GPI instead of GDP as a national performance index. What do you think are the chances of GPI gaining widespread acceptance?

Related to personal career plans

1. You expressed a LOT of interest in Economics, and some interest in Law. What are your thoughts on a Master's in Public Policy?

2. Are entrepreneurial skills a rare asset within EA? What does the supply/demand of entrepreneurial skills in EA look like?

3. You mentioned that even big tech companies aren't able to achieve large amounts of change. I would be a little skeptical of this. One counter-example: American English is slowly replacing British English, even in countries that historically spoke British English. I think one of the biggest reasons for this is popular software like MS Word, Google Docs, and Google Search having American English as the default language. However, I have a feeling that large changes like this generally happen only when a company is REALLY successful/popular (I'm not sure, though).

Comment by prabhat-soni on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T09:08:43.437Z · score: 2 (2 votes) · EA · GW

Thoughts on modifications/improvements to The Windfall Clause?

Comment by prabhat-soni on Does generality pay? GPT-3 can provide preliminary evidence. · 2020-07-13T10:22:32.178Z · score: 2 (2 votes) · EA · GW

It seems like the hyperlink to the arXiv webpage is invalid (i.e. it fails when you click on the arXiv link).

Comment by prabhat-soni on KR's Shortform · 2020-07-08T00:44:19.816Z · score: 6 (2 votes) · EA · GW

I may have misunderstood your question, so there's a chance that this is a tangential answer.

I think one mistake humans make is overconfidence in specific long-term predictions. Specific would mean like predicting when a particular technology will arrive, when we will hit 3 degrees of warming, when we will hit 11 billion population, etc.

I think the capacity of even smart humans to predict reasonably well (e.g. with >50% accuracy) when a specific event will occur is somewhat low; I would estimate a horizon of around 20-40 years from their own time.

You ask: "if you were alive in 1920 trying to have the largest possible impact today", what would you do? I would acknowledge that I cannot (with reasonable accuracy) predict the thing that will have "the largest possible impact in 2020" (which is a very specific thing to predict), and go with broad-based interventions (which are a more sure-shot answer), like improving international relations, promoting moral values, promoting education, promoting democracy, promoting economic growth, etc. (These are sub-optimal answers, but they're probably the best I could do.)

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-06-30T22:43:18.808Z · score: 2 (2 votes) · EA · GW

Hi Ramiro, thanks for your comment. Based on this post, we can think of two techniques to promote longtermism. The first is what I mentioned: exploiting biases to get people inclined toward longtermism. The second is what you [might have] mentioned: a more rationality-driven approach where people are made aware of their biases with respect to longtermism. I think your idea is better, since it is a more permanent solution (there is security against future events that may bias an individual towards neartermism), has spillover effects into other aspects of rationality, and carries lower risk with respect to moral uncertainty (correct me if I'm wrong).

I agree with the several biases/decision-making flaws that you mentioned! Perhaps a sufficient level of rationality is a prerequisite to one's acceptance of longtermism. Maybe a promising EA cause area could be promoting rationality (I'd guess such a cause area probably already exists).

Comment by prabhat-soni on Prabhat Soni's Shortform · 2020-06-30T10:19:36.980Z · score: 3 (2 votes) · EA · GW

Changing people's behaviour to make them more longtermist

Can we use standard behavioral economics techniques like loss aversion (e.g. humanity will be lost forever), scarcity bias, framing bias and nudging to influence people to make longtermist decisions instead of neartermist ones? Is this even ethical, given moral uncertainty?

It would be awesome if you could direct me to any existing research on this!

Comment by prabhat-soni on kbog's Shortform · 2020-06-28T18:44:08.654Z · score: 1 (1 votes) · EA · GW

Why couldn't a manual of organizational best practices from non-EA organizations suffice (I'm guessing there are many such manuals or other ways of communicating best practices)? Which areas would it be unable to cover when applied directly to EA organizations? Are those areas particularly important to cover?

Comment by prabhat-soni on Problem areas beyond 80,000 Hours' current priorities · 2020-06-23T20:55:27.551Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification, Brendon!

Comment by prabhat-soni on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T15:24:55.884Z · score: 7 (5 votes) · EA · GW

We may need to invest more to tackle future problems

Which types of "investments" are you talking about? Are they specifically financial investments, or a broader range of investments?

In case you mean a broader range of investments, these could include: building the EA movement, making good moral values a social norm, developing better technologies that could help us tackle unforeseen problems in the future, and improving the biological intelligence level of humans. This definition could get problematic, since many of these investments are separate cause areas themselves.

Comment by prabhat-soni on [Cross Post] Why China could be a very important country. · 2020-06-14T08:18:34.714Z · score: 2 (2 votes) · EA · GW

I've also heard that countries like India and Russia also have a large amount of potential; they may get their own posts.

I think an interesting question is: how does the importance of China, Russia, India (and a few other countries) compare? If we could get a quantitative answer to this question, it would help guide how we spend our resources in these high-profile, emerging-EA locations.

Comment by prabhat-soni on X-risks to all life v. to humans · 2020-06-10T15:26:03.585Z · score: 1 (1 votes) · EA · GW

Oh sorry, I must've misread! So the issue seems to be with the number 0.095%. The chance of a true existential event in B) would be 0.01% * 95% = 0.0095% (not 0.095%). And this leads us to 0.7 / 0.0095 ≈ 73.68.
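Spelling out the arithmetic with the numbers from this thread:

$$0.01\% \times 95\% = 0.0095\%, \qquad \frac{0.7\%}{0.0095\%} \approx 73.7$$

so A) matters ~70 times more than B), not ~7 times.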

Comment by prabhat-soni on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-10T08:27:47.824Z · score: 1 (1 votes) · EA · GW

Hey, the hyperlinks for the 'homepage' and 'GitHub' URLs are wrong.

Comment by prabhat-soni on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-09T08:03:14.297Z · score: 5 (4 votes) · EA · GW

This looks exciting! Since there's only a limited time for which someone may want to listen to us, it's important to prioritize concepts. Perhaps we could use a {neglectedness - importance - ease of explaining} [or similar] framework to rank EA concepts?

Some similar ideas are discussed by Will MacAskill in https://www.youtube.com/watch?v=vCpFsvYI-7Y [30:40].

Comment by prabhat-soni on X-risks to all life v. to humans · 2020-06-09T04:27:26.886Z · score: 1 (1 votes) · EA · GW

Then for question 1, with A), we get a 0.7% chance of a true existential event, and with B) we get a 0.095% chance of a true existential event. So we still care more about A), but now by only ~7 times more.

I believe this should be ~70 times more.

Comment by prabhat-soni on Audience Targeting for EA communities · 2020-05-04T08:01:37.846Z · score: 2 (2 votes) · EA · GW

Hi Asaf,

I agree with the points you make. Additionally, I think it is easier to find EAs among altruism-related communities (e.g. climate change, factory farming) than among effectiveness/logic-related communities (e.g. philosophers, engineers, scientists). This is because people willing to devote their career to altruistic causes are rare, while quite a lot of people think and reason logically.

Also, I'd love to know of any surveys or research that tries to find correlations between what EAs were doing pre- and post-knowledge of EA, or that examines opinions on EA among people from different industries or fields of study.

Comment by prabhat-soni on Audience Targeting for EA communities · 2020-05-03T00:44:29.889Z · score: 2 (2 votes) · EA · GW

Note: I have been involved with EA for only 2-3 months, so my ideas may not be accurate.

One approach is to target people involved in social issues who already believe in some of the more popular EA concepts.

Climate Change
Out of all the EA priority areas, climate change is arguably the most popular one among non-EAs/the general population.
Quite a few people outside EA work on climate change because they think it's the most pressing problem. They believe in some of the more well-known EA concepts, like:

1. Using the supply/demand concept to choose a social issue (aka neglectedness).

2. The fact that climate change needs to be fixed quickly, while other social issues can also be solved later (similar to the idea of existential risk).


On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and longtermist causes more. ~ EA 2019 Survey

However, I must point out that Global Poverty is ranked the most popular EA cause area, followed by Climate Change. I suspect this is due to a lot of people in the EA movement having joined recently, since it takes some time to understand EA's ideas on cause prioritization.

Similarly, it may be efficient to target people who are involved in nuclear security (they share the ideas that existential risk matters and that sudden catastrophes are more important than catastrophes that build up over time).

Essentially, we are looking for people who are working on a particular social cause for logical reasons. This greatly increases their chance of being a fit with the ideas of EA, since this approach captures both the "effective" and the "altruism" aspects of EA.