Go apply for 80K Advising - (Yes, right now) 2022-03-28T05:18:11.895Z


Comment by devanshpandey on [deleted post] 2022-09-13T17:42:06.400Z

His pronouns, listed on his website, are he/they.

Comment by devansh (devanshpandey) on EA is about maximization, and maximization is perilous · 2022-09-02T17:41:19.267Z · EA · GW

>>And I’m nervous about what I perceive as dynamics in some circles where people seem to “show off” how little moderation they accept - how self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals. I think this dynamic is positive at times and fine in moderation, but I do think it risks spiraling into a problem.

There seems to be an important trade-off here: this is a valuable signal that the person "showing off" is aligned with your values, and it's actually pretty useful to know that (especially since current gradients often push people who aren't aligned toward paying lip service to EA ideas in order to gain money/status/power for themselves).

How much we should ask or expect of this kind of sacrifice seems like a question we should, as a community, put a lot of time into thinking about, especially while we're trying to grow quickly and are unusually willing to provide resources to people.

Comment by devansh (devanshpandey) on How to Talk to Lefties in Your Intro Fellowship · 2022-08-13T21:30:23.791Z · EA · GW

It seems incredibly important that EA, as a community, maintains extremely high epistemic standards and is a place where we can generally assume that people, while not necessarily sharing our worldviews or beliefs, communicate openly and honestly about the reasons why they're doing things. A primary reason for this is just the scale and difficulty of the things we're doing.

That's what makes me quite uncomfortable with saying global health and development work is reparation for harms that imperialist countries have caused poor countries! We should work on aid to poor countries because it's effective, because we have a chance to use a relatively small-to-us amount of money to save lives and wildly improve the conditions of people in poor countries—not because aid represents reparations from formerly imperial countries to formerly subjugated ones.

I think many people who identify with social justice and leftist ideologies are worth recruiting and retaining. But I care more about our community having good epistemics in general—about being able to notice when we are doing things correctly and when we are not—and conveying our message honestly seems really important for this. This objection is not "leftists have bad epistemics," as you mentioned at the start of this post: you should increase recruitment and retention, but not lower your own epistemic standards as a communicator to do so.

I think parts of this post are quite good, and doing low-cost things that don't lower your epistemic standards seems great (like using social-justice-focused examples, supporting increased diversity in the movement, or making arguments you actually believe in ways that connect with people). But as it stands, I think this post needs a clear statement that outreach shouldn't come at the cost of lowered epistemic standards in order to be advice that helps overall.

Comment by devansh (devanshpandey) on [link post] The Case for Longtermism in The New York Times · 2022-08-06T10:13:52.662Z · EA · GW

Here's a non-paywalled link available for the next 14 days.

Comment by devansh (devanshpandey) on What I mean by "funding overhang" (and why it doesn't mean money isn't helpful) · 2022-07-13T04:04:22.467Z · EA · GW

"It's worth noting that the scale of the funding overhang isn't absolute; there are"

Is this a typo?

Comment by devansh (devanshpandey) on "Tech company singularities", and steering them to reduce x-risk · 2022-05-13T20:07:14.234Z · EA · GW

>>after a tech company singularity, such as if the tech company develops safe AGI

I think this should be "after AGI"?

Comment by devansh (devanshpandey) on 'Dropping out' isn't a Plan · 2022-04-28T22:05:23.508Z · EA · GW

Ah, I see. I guess I kind of buy this, but I don't think it's nearly as cut-and-dried as you argue. Not sure how much this generalizes, but for me "staying in school" has been an option that conceals approximately as many major sub-options as "leaving school." I'd argue this is approximately true for many people: they have an idea of where they'd want to work or what they'd want to do if they left school, but "staying in school" could mean anything from keeping ~exactly the status quo to transferring to a university in a different country, taking a gap year, etc.

Comment by devansh (devanshpandey) on 'Dropping out' isn't a Plan · 2022-04-28T20:55:34.830Z · EA · GW

I don't really see how the world is different whether you use the first or the second representation here. "Drop out and go work at a job" seems like a plan at a higher level of abstraction than "drop out and work in {area}," which is itself at a higher level of abstraction than "drop out and work in {area|position}," which in turn is at a higher level of abstraction than "drop out and work at ORG1."

What's the bright line between the first and the second?

Comment by devansh (devanshpandey) on Go apply for 80K Advising - (Yes, right now) · 2022-03-29T14:44:55.277Z · EA · GW

Agreed (and I'm only 20% kidding). Having an 80k post pinned seems wonderful.

Comment by devansh (devanshpandey) on Go apply for 80K Advising - (Yes, right now) · 2022-03-29T14:44:04.546Z · EA · GW

Yep, this is my understanding as well!

Comment by devansh (devanshpandey) on Go apply for 80K Advising - (Yes, right now) · 2022-03-28T16:15:52.565Z · EA · GW

Ah, good point. Done!

Comment by devansh (devanshpandey) on Legal support for EA orgs - useful? · 2022-03-17T14:42:56.742Z · EA · GW

Yeah, this is a good consideration; if something like this ended up happening, it would be wonderful if Tyrone could get two or three lawyers to cover the major EA hubs (the US, especially CA; the UK; and maybe the Bahamas), either in physical location or in legal knowledge.

Comment by devansh (devanshpandey) on Legal support for EA orgs - useful? · 2022-03-17T04:56:23.150Z · EA · GW

FWIW, I am currently running an EA org, and legal help would have been valuable to me in the past and would be valuable now. My impression is that scaling an EA law firm would only require a few EAs at the top; the rest could perfectly well be normal, non-aligned lawyers. This would capture a lot of the benefits of an EA law firm: primarily, I think, a general understanding of what EA's goals are, and a cost-benefit-analysis style of thinking that avoids being overly conservative and is comfortable advising clients to do things that are, e.g., in a legal grey area but not enforced.

So I'd say my answer is that this seems, at least to me personally, like a potentially very high-impact thing that would be incredibly helpful to me and to other people starting small and medium-sized organizations who need legal advice and support. That said, I'm probably not in the best position to answer the question (I don't have a bird's-eye view of the EA ecosystem like some others do), so I'm very interested in other perspectives on this.

Comment by devanshpandey on [deleted post] 2022-03-14T19:04:55.131Z

It seems like this depends very heavily on the particular person. I think the range of "when would it be best for a particular person to learn about EA" could be anywhere from, like, age 10 (I imagine Eliezer) all the way up to old age for a small number of people, although the distribution seems to peak somewhere in the 14–25 range, because that's when people have both critical thinking abilities and the willingness to change their minds about important things.

That being said, if your goal is to find people who are going to make major contributions to the world, it seems to me that that subgroup correlates pretty heavily with "people who would have been interested in EA in high school, if only they had known about it." This post seems to agree. Younger than that, and I think only a very tiny portion of the population can keep up with the ideas (and I have some concerns about trying to influence the minds of the vast majority of 12-year-olds); older, and people have lost idea-flexibility and are more set in their ways.

The age group you're targeting seems fine if slightly young, although the way you describe your specific outreach method pattern-matches to a concerning one in my mind, and I'd encourage you to consider the asymmetric arguments for why the type of outreach you're currently doing is or isn't fine. If you can cheaply take steps to target a slightly older audience, that seems good.

Comment by devansh (devanshpandey) on Organizations Encouraging Russian Desertion? · 2022-03-04T20:13:21.657Z · EA · GW

The immediate human suffering is almost certainly outweighed in moral weight by the larger geopolitical effects. Weakening the Russian war effort likely points toward a lower chance of nuclear war, for example.

Comment by devansh (devanshpandey) on Is Combatting Ageism The Most Potentially Impactful Form of Social Activism? · 2022-02-11T05:06:44.785Z · EA · GW

As a sixteen-year-old, while I appreciate that this is being talked about and am a massive proponent of teens having more rights, I think your central point is fundamentally wrong: while we're forced to go to school every day, we certainly don't face death squads for minding our own business. To my knowledge, teens being sent to juvenile detention for habitual truancy is extremely rare, and it seems like a massive stretch to argue that teens are physically restrained and forced to go to school, let alone under threat of death. With parental permission, for example, I can unenroll from school. Society giving parents ultimate authority over their kids is a different thing, and I agree that it is harmful; but I think you can combat that directly instead of railing against the state's supposed willingness to visit murder and destruction on teenagers.

While your other claims seem somewhat valid, the exceptional hyperbole here is driving people away from your argument, and I think you'd find potentially orders of magnitude more people willing to hear you out if you focused on your argument that teen mental capacity is near-fully developed. (For example, it would be wonderful if you campaigned for no-fault emancipation without parental consent and for teens' right to unenroll from school.) Your book seems, at a glance, better on this front than this post.

Comment by devansh (devanshpandey) on Bounty for your best 2 minute answer to an EA 'frequently asked question' · 2022-02-02T15:43:20.930Z · EA · GW

(Note: this comment will probably draw heavily from The Precipice, because that's by far the best argument I've heard against temporal discounting. I don't have a copy of the book with me, so if this is close enough to that explanation you can just disqualify me :P)

In normal situations, it works well to discount things like money based on the time it takes to get them. After all, money is worth less as time goes on, due to inflation; something might happen to you, so you can't collect the money later on; and there's inherent uncertainty about whether you'll actually get the reward you were promised. Human lives aren't subject to inflation—pain and suffering are pain and suffering across time, whether or not there are more people. Something might happen to the world, and I agree that it's important to discount based on that, but that discounting works out to be relatively small in the grand scheme of things. People in the long-term future are still inherently valuable because they're people, and their collective value is very important—and thus it should be a major consideration for people living now.

There's one thing I've been ignoring: "pure time preference," the inherent preference for having something earlier rather than later just because of its position in time. Pure time preference shouldn't be applied to the long-term future for one simple reason—if you tried to apply a "reasonable" discount rate based on it *back* to Ancient Rome, the consuls would conclude that one moment of suffering for one of their subjects was worth as much as the entire human race today suffering for their entire lives.

Basically, we should discount the moral value of future people based on catastrophe risk: the chance that the world ends between now and then, so the gains we strove for won't mean anything. (That's a relatively small discount, all things considered, keeping substantial amounts of value in the long-term future—and it gets directly reduced by working on existential risk.) But it's not fair to people in the future to discount based on anything else, like pure time preference or inflation, because conditional on no catastrophe before then, their lives, joys, pains, and suffering are worth just as much as those of people today, or of people living in Ancient Rome.
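The compounding argument above can be made concrete with a quick back-of-the-envelope calculation. All the numbers here are illustrative assumptions of mine (a 1%/year pure time preference, ~2,000 years since consul-era Rome, and a hypothetical catastrophe rate), not figures from The Precipice:

```python
# How a "modest" pure time preference compounds over historical timescales,
# versus discounting only for catastrophe risk. All rates are illustrative.
years_since_rome = 2000          # roughly consul-era Rome to today (assumption)
pure_rate = 0.01                 # 1%/year pure time preference (assumption)

# Seen from Ancient Rome, one unit of welfare today would be divided by:
discount_factor = (1 + pure_rate) ** years_since_rome
print(f"Pure time preference: 1 unit today ~ 1/{discount_factor:,.0f} units to a Roman")

# Contrast: discounting only for extinction risk, say 0.001%/year (assumption)
catastrophe_rate = 0.00001
survival_factor = (1 - catastrophe_rate) ** years_since_rome
print(f"Catastrophe-only discount over the same span: ~{survival_factor:.1%} of value kept")
```

Even a 1%/year pure rate shrinks present-day welfare by a factor of hundreds of millions when viewed from Rome, while a small catastrophe-risk discount leaves almost all of the value intact—which is the asymmetry the comment is pointing at.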

Comment by devansh (devanshpandey) on Earn To Volunteer: An Underutilised Path to Impact · 2022-01-29T04:34:12.518Z · EA · GW

This seems interesting, but I'm confused about what the point of this is over "work at an EA org." It seems like most EA orgs are bottlenecked a lot more on talent than on money, and if you're doing high-talent work for an EA organization, then your marginal hour is likely worth more than $60/hr. I wonder what subset of the population would benefit substantially from this advice—it seems like earning to give and direct work cover most of the space that earning to volunteer might.

What kind of person is this advice targeted at, and why do you think that this is better than direct work for those people?

Comment by devansh (devanshpandey) on Momentum 2022 updates (we're hiring) · 2022-01-14T00:54:18.920Z · EA · GW

Medium article throws a 404, FWIW.

Comment by devansh (devanshpandey) on A huge opportunity for impact: movement building at top universities · 2021-12-17T16:27:14.666Z · EA · GW

I mean, sure, but what's important here isn't really the absolute number of intelligent/ambitious people but their relative concentration. One third of Nobel prizes going to people who didn't complete their undergrad at a top-100 global university means that two thirds did. Out of ~30K universities worldwide, 2/3 of Nobels are concentrated in the top 100. The talent exists outside top universities, but with limited resources, focusing on them seems more tractable than spreading thin over a pool with lower average intelligence/ambition.
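To put a number on that concentration claim, here's the arithmetic using the figures from the comment (~30,000 universities, 2/3 of Nobels from the top 100 — the exact university count is the comment's rough estimate, not a verified statistic):

```python
# Back-of-the-envelope: per-university Nobel "density" in the top 100
# versus everywhere else, using the comment's rough numbers.
total_universities = 30_000   # comment's estimate of global universities
top = 100
share_top, share_rest = 2 / 3, 1 / 3  # share of Nobel laureates by undergrad

density_top = share_top / top                         # Nobels per top university
density_rest = share_rest / (total_universities - top)  # Nobels per other university
print(f"Per-university Nobel density is ~{density_top / density_rest:.0f}x "
      f"higher in the top 100")
```

On these assumptions the ratio comes out to roughly 600x, which is the tractability argument in numerical form: even though a third of the talent is elsewhere, it's spread across ~300x as many institutions.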

Comment by devansh (devanshpandey) on A huge opportunity for impact: movement building at top universities · 2021-12-14T22:37:30.880Z · EA · GW

For what it's worth, the US higher education system is pretty stratified by intelligence. The best universities are maybe a standard deviation above the 50th-best university in SAT scores, and the gap would probably be even larger if the SAT weren't capped at 1600; plus, a lot of the most ambitious and potentially successful students go to them. Moreover, top universities generally attract those students from every field: while, for example, UIUC is probably better than most Ivies at CS, the Ivies will still poach many of those students, largely because of prestige/reputational effects. These factors combine to make it pretty likely that the kind of people who can have the most impact in these fields are disproportionately concentrated at top universities.

Comment by devansh (devanshpandey) on [Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th) · 2021-11-19T19:05:18.364Z · EA · GW

How much would it cost to influence the film to make this happen?

Comment by devansh (devanshpandey) on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-11-15T19:31:43.581Z · EA · GW

On reading just the summary, the immediate consideration I had was that the EMH would imply that in the counterfactual where I don't invest in Mind Ease, someone else will, and if I do invest in Mind Ease, someone else will not. After reading the post, it looks like you have two important points here against this—first, early-stage venture markets are not necessarily as subject to the EMH, and second, it's different in this case because EA-aligned investors would be willing to take a lower financial return than they could get with the same risk otherwise in order to do good. Do you agree that impact investing in the broader financial market into established companies has very little counterfactual impact, or is there something I'm missing there? I'm interested in further research on this concept, and I'm not sure how much EA-aligned for-profits are already working on this.