Posts

Factors other than ITN? 2020-09-26T04:34:08.244Z
A List of EA Donation Pledges (GWWC, etc) 2020-08-08T15:26:50.884Z
Prabhat Soni's Shortform 2020-06-30T10:19:36.684Z

Comments

Comment by Prabhat Soni on Is the current definition of EA not representative of hits-based giving? · 2021-04-29T06:00:35.726Z · EA · GW

Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P

Comment by Prabhat Soni on Is the current definition of EA not representative of hits-based giving? · 2021-04-28T05:11:54.857Z · EA · GW

I think this excerpt from the 80,000 Hours podcast episode Ben Todd on the core of effective altruism sort of answers your question:

Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.

Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?

Ben Todd: Yeah.

Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.

Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.

I don't think this is an "official definition" (for example, one endorsed by CEA), but I think (or at least hope!) that CEA is working out a more complete definition for EA.

Comment by Prabhat Soni on Can the EA community copy Teach for America? (Looking for Task Y) · 2021-04-14T03:11:01.932Z · EA · GW

Task Y candidate: Fellowship facilitator for EA Virtual Programs

EA Virtual Programs runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).

EA intro fellowships (and similar programs) have been successful at minting engaged EAs. Selectivity has sharply diminishing returns: even applicants with a not-so-strong application are worth accepting, since the application process does not predict future engagement well (see this and this). Thus, if a fellowship/reading group has to reject people, that's significant value lost. Rejected applicants generally re-apply at low rates (despite being encouraged to!).

Uncertainties:

  • Is EA Virtual Programs short on facilitators? I don't know. The answer to this question would presumably change post-COVID (IMO the answer could shift in either direction), and so in the interest of future-proofing this answer, I will not bother to find the current demand for facilitators.
  • Will EA Virtual Programs exist post-COVID? An organizer at EA Virtual Programs informally said that nothing concrete has been decided yet, but the project was probably leaning towards continuing in some capacity. It is not clear to me whether there will even be significantly fewer applicants post-COVID (since most(?) university groups are running their fellowships independently right now).

I know of at least a few non-student working professionals who are facilitators for EA Virtual Programs, which I will take as evidence that this can be a Task Y.

Comment by Prabhat Soni on Rationality as an EA Cause Area · 2021-04-09T10:00:23.969Z · EA · GW

Thanks for explaining your views further! This seems about right to me, and I think this is an interesting direction that should be explored further.

Comment by Prabhat Soni on Rationality as an EA Cause Area · 2021-04-07T23:53:26.718Z · EA · GW

I think rationality should not be considered a separate cause area, but perhaps deserves to be a sub-cause area of EA movement building and AI safety.

  1. It seems very unlikely that promoting rationality (and hoping some of those folks would be attracted to EA) is more effective than promoting EA in the first place.
  2. I am unsure whether it is more effective to grow the number of people interested in AI safety by promoting rationality or by directly reaching out to AI researchers (or other things one might do to grow the AI safety community).

Also, the post title is misleading: one interpretation of it is that making people more rational is intrinsically valuable (or that increased rationality would make people live happier lives). While this is likely true, it would probably be an ineffective intervention.

Comment by Prabhat Soni on "Hinge of History" Refuted (April Fools' Day) · 2021-04-01T16:18:57.005Z · EA · GW

Strong upvote. This post caused me to deprioritize longtermism and shift my focus to presently alive beings.

Comment by Prabhat Soni on Local priorities research: what is it, who should consider doing it, and why · 2021-03-29T14:36:39.775Z · EA · GW

The hyperlink is incorrect :P

Comment by Prabhat Soni on Contact us · 2021-02-27T23:03:03.766Z · EA · GW

Do you have a preference on whether to contact you or contact JP Addison (the programmer of the EA Forum) for technical bugs?

Comment by Prabhat Soni on Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community) · 2021-02-27T22:33:53.499Z · EA · GW

What is the minimum threshold of expected attendees required for GWWC/OFTW to be interested in collaborating?

Comment by Prabhat Soni on Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community) · 2021-02-27T22:33:37.498Z · EA · GW

What, if anything, changes in this mechanism/strategy post-COVID?

Comment by Prabhat Soni on A ranked list of all EA-relevant (audio)books I've read · 2021-02-27T18:56:39.422Z · EA · GW

I was looking for books on rationality. My top 4 shortlist was:

  • Rationality: From AI to Zombies by Eliezer Yudkowsky
  • Predictably Irrational by Dan Ariely
  • Decisive by Chip Heath and Dan Heath (This covers a lot of concepts EAs are familiar with such as confirmation bias and overconfidence, so I didn't feel it would add much to my knowledge base)
  • Thinking, Fast and Slow by Daniel Kahneman (More focused on cognitive biases than on rationality in general.)

I ended up going with Rationality: From AI to Zombies.

Comment by Prabhat Soni on List of Introductory EA Presentations · 2021-02-27T18:47:16.870Z · EA · GW

Hey, I know this post is very old, but in case someone stumbles across it, the best presentation for introducing EA, in my opinion, is:

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2021-02-15T18:08:44.297Z · EA · GW

Yep, that's what comes to my mind at least :P

Comment by Prabhat Soni on Can the EA community copy Teach for America? (Looking for Task Y) · 2021-02-15T08:32:17.861Z · EA · GW

Task Y candidate: Writing (from scratch) and editing Wikipedia articles.

Comment by Prabhat Soni on Needed EA-related Articles on the English Wikipedia · 2021-02-08T16:46:33.303Z · EA · GW

Apparently existential risk does not have its own Wikipedia article.

Some related concepts like human extinction, global catastrophic risk, existential risk from AGI, and biotechnology risk do have their own Wikipedia articles. On closer inspection, hyperlinks for "existential risk" on Wikipedia redirect to the global catastrophic risk article. A lot of Wikipedia articles have started using the term "existential risk". Should there be a separate article for existential risk?

Comment by Prabhat Soni on Stanford EA has Grown During the Pandemic; Your Group Can Too · 2021-02-04T10:45:10.981Z · EA · GW

Another awesome (and low-effort for organizers) way to socialise is the EA Fellowship Weekend (which probably didn't exist when Kuhan wrote this post).

Comment by Prabhat Soni on Evidence on correlation between making less than parents and welfare/happiness? · 2021-01-25T17:56:46.200Z · EA · GW

BTW Jessica, the $75K figure from Kahneman's paper that you mentioned is from 2010. After adjusting for inflation, that's ~$90K in 2021 dollars (the exact number depends on which inflation calculator you use).
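
(A quick sketch of the adjustment, in case anyone wants to redo it for another year -- the CPI values below are approximate US annual averages that I'm assuming for illustration, so check your preferred index:)

```python
# Rough CPI-based inflation adjustment; the CPI values are approximate
# US CPI-U annual averages (assumed here for illustration).
CPI_2010 = 218.1
CPI_2021 = 271.0

amount_2021 = 75_000 * CPI_2021 / CPI_2010
print(f"${amount_2021:,.0f}")  # roughly $93,000, i.e. ~$90K
```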

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2021-01-22T19:55:26.837Z · EA · GW

Socrates' case against democracy

https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it

Socrates makes the following argument:

  1. Just as we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, and trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
  2. "The best argument against democracy is a five minute conversation with the average voter". Half of American adults don’t know that each state gets two senators and two thirds don’t know what the FDA does.
  3. (Whether a voter is informed can be evaluated by a short test on the basics of elections, for example.)

Pros: better quality of candidates elected; would give uninformed voters a strong incentive to learn about elections.

Cons: would be crazy unpopular; possibility of the small group of informed voters acting in self-interest, which would worsen inequality.

(I did a shallow search and couldn't find something like this on the EA Forum or Center for Election Science.)

Comment by Prabhat Soni on Big List of Cause Candidates · 2021-01-10T15:56:36.780Z · EA · GW

A cause candidate suggestion: atomically precise manufacturing / molecular nanotechnology. Relevant EA Forum posts on this topic:

Comment by Prabhat Soni on Big List of Cause Candidates · 2021-01-06T19:26:49.259Z · EA · GW

Sorry, you're right; the link I provided earlier isn't very relevant (it was the only EA Forum article on WBE I could find). I was thinking of something along the lines of what Hanson wrote, especially the economic and legal issues (this and the last 3 paragraphs of this; there are other issues raised in the same Wiki article as well). Also, Bostrom raised significant concerns in Superintelligence, Ch. 2 that if WBE were the path to the first AGI, there would be a significant risk of unfriendly AGI being created (see the last set of bullet points in this).

Comment by Prabhat Soni on BrianTan's Shortform · 2021-01-05T03:05:32.342Z · EA · GW

Hey Brian, this might be of relevance to you!

https://forum.effectivealtruism.org/posts/NoQubcRa4aMb2zB3Y/prabhat-soni-s-shortform?commentId=w9dR5BiGMQqYhZ3NL

Comment by Prabhat Soni on Big List of Cause Candidates · 2021-01-04T06:41:21.300Z · EA · GW

A cause candidate: risks from whole brain emulation

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2021-01-04T05:45:12.564Z · EA · GW

A film titled "Superintelligence" was released in November 2020. Could it raise risks?

Epistemic status: There's a good chance I'm overthinking this, and overestimating the risk.

Superintelligence [Wiki] [Movie trailer]. When you Google "Superintelligence", the top results are no longer those relating to Nick Bostrom's book but rather this movie. A summary of the movie:

When a powerful superintelligence chooses to study Carol, the most average person on Earth, the fate of the world hangs in the balance. As the AI decides whether to enslave, save or destroy humanity, it's up to Carol to prove people are worth saving.

~ HBO Max.

I haven't watched the movie; I've only watched the trailer. A possible risk this might raise:

  • The general public gets a tangential understanding of superintelligence. In particular, people might tend to associate superintelligence with evil intentions/consequences, while ignoring the possibility of good intentions (aka aligned AI) and good consequences. It might be that, based on this line of thought, people are not welcoming of machine superintelligence.

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2021-01-03T07:08:02.594Z · EA · GW

Thanks, this was helpful!

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-12-27T12:41:17.726Z · EA · GW

Insightful thoughts!

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-12-26T16:58:26.235Z · EA · GW

Kurzgesagt – In a Nutshell is a popular YouTube channel. A lot of its content is EA-adjacent. The most viewed videos in a bunch of EA topics are ones posted by Kurzgesagt. The videos are also of very high quality. Has anyone tried collaborating with them or supporting them? I think it could be high impact (although careful evaluation is probably required).


Most of their EA-adjacent videos:

Comment by Prabhat Soni on List of EA-related email newsletters · 2020-12-26T14:39:03.711Z · EA · GW

Thanks for this! RationalNewsletter is the only rationality-related newsletter I could find.

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-12-26T13:59:37.502Z · EA · GW

Cause prioritisation for negative utilitarians and other downside-focused value systems:

https://longtermrisk.org/cause-prioritization-downside-focused-value-systems/

It is interesting to note that reduction of extinction risk is not very high-impact in downside-focused value systems.

Comment by Prabhat Soni on Rationality as an EA Cause Area · 2020-12-22T14:52:30.477Z · EA · GW

Promoting effective altruism promotes rationality in certain domains. And improving institutional decision-making is related to improving rationality. But yeah, these don't cover everything in improving rationality.

Comment by Prabhat Soni on How much does a vote matter? · 2020-11-06T17:16:17.902Z · EA · GW

Thanks Nathan, this was helpful!

Comment by Prabhat Soni on Are we neglecting education? Philosophy in schools as a longtermist area · 2020-10-31T16:45:23.402Z · EA · GW

Hi Jack, thanks for writing this. I read this post when it was published a few months ago, so I may not remember everything written in it.

I have another related proposal: moral science (~ ethics) education for primary and middle school students. Moral science is often taught to students up to 8th grade (at least it was taught in my school). So, moral science education in schools is already tractable.

I would classify this under broadly promoting positive moral values. The current set of moral values is far from ideal, and EAs could have an impact by changing the curriculum used in moral science education in primary and middle school. In particular, some moral values like concern for animals, consequentialism, caring for future generations, cosmopolitanism and liberalism seem particularly neglected and important (source: 33:33 of this video by Will MacAskill).

One of the biggest objections to working on "broadly promoting positive moral values" is that it isn't very tractable to influence society's moral values (the other being that it might be undesirable). But, as I've argued above, this intervention seems somewhat tractable.

For an idea of what 8th-grade moral science looks like, see this, this and this.

Comment by Prabhat Soni on Some thoughts on EA outreach to high schoolers · 2020-10-31T16:19:44.424Z · EA · GW

Hey Jack, thanks for the reply. Yeah, I agree that it's not obvious which of the two is more promising.

Comment by Prabhat Soni on Practical ethics given moral uncertainty · 2020-10-30T07:45:49.199Z · EA · GW

Thanks! After so long, I finally understood moral uncertainty :P

Comment by Prabhat Soni on Institutions for Future Generations · 2020-10-27T16:27:25.758Z · EA · GW

Hey, thanks for writing this. You have mentioned some age/time-related reforms: Longer Election Cycles, Legislative Youth Quotas, Age Limits on Electorate, Age-weighted Voting, Enfranchisement of the Young, and Guardianship Voting for the Very Young.

These reforms would only promote "short longtermism" (i.e. the next 50-100 years), while what we actually care about is "cosmic longtermism" (i.e. the next ~1 billion years). What are your thoughts on this?

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-21T13:17:36.142Z · EA · GW

Hey, thanks for your reply. By the Pareto Principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If this is easy to misinterpret (as you did), then it might not be a great idea :P Maybe the idea of a fat-tailed distribution of the impact of interventions would be a better alternative?
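
(If it helps, here's a toy simulation of what I mean by a fat-tailed distribution of impact -- the lognormal and its sigma are arbitrary choices of mine, purely for illustration:)

```python
import numpy as np

# Toy illustration: if intervention impact is lognormally distributed
# (one common stand-in for "fat-tailed"), the top 20% of interventions
# account for far more than 20% of the total impact.
rng = np.random.default_rng(0)
impacts = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=100_000))

top_20_share = impacts[-20_000:].sum() / impacts.sum()
print(f"Top 20% of interventions produce {top_20_share:.0%} of total impact")
```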

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-20T16:39:01.168Z · EA · GW

I've never seen anyone explain EA using the Pareto Principle (80/20 rule). The cause prioritisation / effectiveness part of EA is basically the Pareto principle applied to doing good. I'd guess 25-50% of the public knows of the Pareto principle. So, I think this might be a good approach. Thoughts?

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-13T03:43:59.041Z · EA · GW

Thanks! This was helpful!

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-12T16:41:29.681Z · EA · GW

Does a vaccine/treatment for malaria exist? If yes, why are bednets more cost-effective than providing the vaccine/treatment?

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-04T23:31:39.223Z · EA · GW

Is it high impact to work in AI policy roles at Google, Facebook, etc? If so, why is it discussed so rarely in EA?

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-10-01T07:54:35.770Z · EA · GW

Wonderful to learn more about you!

Yeah, I completely agree with you that there is massive potential for EA in India. EA India is pretty small as of now: ~50 people (and ~35 people if you don't count foreigners doing projects in India).

Also, regarding introductions: I'll make the e-mail introductions, so could you send me your e-mail?

Your ideas are indeed interesting. I'm far from an expert on this topic, so I'll just send all the literature I know of on it.


Recommended:

  • Future Perfect is an EA group/organisation at Vox that writes about EA-related stuff for mass media. You can see their website here and see a video about them here.
  • Regarding the "top 20 utilitarian profiles list": See https://80000hours.org/problem-profiles/#overall-list. They have ranked what they think are the top 9 problems. In fact, if you go to any of the problem profiles for individual problems, you will notice they have given a quantitative score for scale, tractability and negelectedness. 80,000 Hours uses a quantitative framework to rank problems, which you can read about here.
  • Regarding a "wiki-type editable problems list", Rethink Priorities has launched a Priority Wiki, which you can check out here and here. The second link wasn't working when I tried but maybe you'll be luckier!
  • The Fidelity model.
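
(As promised above, a minimal sketch of the log-scale scoring idea. The problems and scores here are made up by me, and I'm assuming a convention where the factor scores are on log scales, so adding them is like multiplying scale x tractability x neglectedness -- see 80,000 Hours' own write-up for their exact conventions:)

```python
# Toy version of log-scale problem ranking (hypothetical scores, not
# 80,000 Hours' actual numbers). Because each score is logarithmic,
# summing the three is equivalent to multiplying the underlying factors.
problems = {
    # name: (scale, tractability, neglectedness)
    "Problem A": (14, 4, 8),
    "Problem B": (12, 6, 6),
    "Problem C": (10, 5, 9),
}

ranked = sorted(problems.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: total score {sum(scores)}")
```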


Might be helpful:

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-09-28T04:05:42.145Z · EA · GW

Hmm, interesting ideas. I have one disagreement though: my best guess is that there are more rationalist people than altruistic people.

I think around 50% of people who study a quantitative/tech subject and have a high IQ qualify as rationalist (is this an okay proxy for rationalist people?). And my definition of an altruistic person is someone who makes career decisions primarily for altruistic reasons.

Based on these definitions, I think there are more rationalist people than altruistic people. Though, this might be biased since I study at a tech college (i.e. more rationalists) and live in India (i.e. fewer altruistic people, presumably because people tend to become altruistic only once their basic needs are met).

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-09-26T04:26:07.878Z · EA · GW

On average, who is more likely to be attracted to effective altruism: rationalist people or altruistic people?

This has practical uses. If one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs).

I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-09-22T23:25:32.602Z · EA · GW

Hmm, this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-09-18T20:44:17.060Z · EA · GW

Thanks Ryan for your comment!

It seems like we've identified a crux here: what will be the total number of people living in Greenland in 2100 / world with 4 degrees warming?


I have disagreements with some of your estimates.

"The total drylands population is 35% of the world population"

Large populations currently reside in places like India, China and Brazil. These regions, which are currently not drylands, could become drylands in the future (and possibly even be desertified). Thus, the 35% figure could increase in the future.

"So less than 10% of those from drylands have left."

Drylands are categorised into {desert, arid, semi-arid, dry sub-humid}. It's only when a place is in the desert category that people seriously consider moving out (for reference, all of California falls under the arid or semi-arid categories). In the future, deserts could form a larger share of drylands, and less arid regions a smaller share. So, you could have more than 10% of people from places called "drylands" leaving in the future.

"The total number of migrants, however, is 3.5% of world population."

Yes, that is correct. But that is also a figure from 2019. A more relevant question would be: how many migrants will there be in 2100? I think it's quite obvious that as the Earth warms, the number of climate migrants will increase.

"So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants."

I don't really agree with the 5% estimate. Specifically for desertified lands, I would guess the percentage of people migrating to be significantly higher.

"Of the world's 300M migrants, Greenland currently has only ~10k."

This is a figure from 2020, and I don't think you can simply extrapolate it.


After revising my estimates to something more sensible, I'm coming up with ~50M people in Greenland. So, Greenland would be far from being a superpower. I'm hesitant to share my calculations because my confidence in them is low -- I wouldn't be surprised if the actual number was up to 2 orders of magnitude smaller or greater.
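
(For transparency, the shape of the back-of-envelope calculation is below -- every parameter is a rough personal guess, which is why the answer could easily be off by up to 2 orders of magnitude:)

```python
# Back-of-envelope estimate of Greenland's population in a 4-degrees-warmer
# world. All three parameters are rough personal guesses, not sourced data.
newly_desertified = 1_000_000_000  # people whose regions become desert by 2100
migration_rate = 0.15              # fraction who emigrate (I'd guess above 5%)
greenland_share = 1 / 3            # fraction of migrants choosing Greenland

greenland_population = newly_desertified * migration_rate * greenland_share
print(f"~{greenland_population / 1e6:.0f}M people")  # ~50M
```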

A key uncertainty: does desertification of large regions imply that in-country / local migration is useless?


[Image: The world, 4 degrees warmer -- a map from Parag Khanna's book Connectography]

Comment by Prabhat Soni on Prabhat Soni's Shortform · 2020-09-16T11:25:02.798Z · EA · GW

High impact career for Danish people: Influencing what will happen with Greenland

EDIT: The comments give a good counter-argument against my views!

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably have fewer decisions to make, since they already have large populations and will likely see a smooth transition into billion+ people countries. Antarctica could be promising to influence, but it will be difficult for a single effective altruist since multiple large countries lay claim to it (i.e. more competition). Greenland, however, is much more interesting.


It's kinda easy for Danes to influence Greenland

Denmark is a small-ish country with a population of ~5.7 million people. There's really not much competition if one wants to enter politics (if you're a Dane, you might correct me on this). The level of competition is much lower than in conventional EA careers, since you only need to compete with people within Denmark.


There are unsolved questions wrt Greenland

  1. There's a good chance Denmark will sell Greenland, because they could get absurd amounts of money for it. Moreover, Greenland is not of much value to them, since Denmark will mostly remain habitable and they don't have a large population to resettle. Do you sell Greenland to a peaceful/neutral country? To the highest bidder? Is it okay to sell it to a historically aggressive country? Are there some countries you want to avoid selling it to because they would gain too much influence? The USA, China and Russia have shown interest in buying Greenland.
  2. Should Denmark just keep Greenland, allow mass immigration and become the next superpower?
  3. Should Greenland remain autonomous?


Importance

  1. Greenland, with a billion+ people living in it, could be the next superpower. Just as most emerging technologies (e.g. AI, biotechnology, nanotechnology) are developed in current superpowers like the USA and China, future technologies could be developed in Greenland.
  2. In a world of extreme climate change, it is possible that 1-2 billion people could live in Greenland. That's a lot of lives you could influence.
  3. Greenland has a strategic geographic location. If a country with bad intentions buys Greenland, that could be catastrophic for world peace.