Posts

Why SoGive is publishing an independent evaluation of StrongMinds 2023-03-17T22:46:35.480Z
Update On Six New Charities Incubated By Charity Entrepreneurship 2020-02-27T05:20:18.346Z

Comments

Comment by ishaan on StrongMinds should not be a top-rated charity (yet) · 2023-01-10T19:03:04.694Z · EA · GW

i.e. SoGive would think depression is worse than death. Maybe this isn't quite a "sanity check" but I doubt many people have that moral view.

I replied in the moral weights post w.r.t. the "worse than death" thing. (I think that's a fair, but fundamentally different, point from what I meant re: sanity checks - i.e. not crossing hard lower bounds on the empirical effects of cash on well-being vs. the empirical effects of mental health interventions on well-being.)

Comment by ishaan on Moral Weights according to EA Orgs · 2023-01-10T17:19:07.696Z · EA · GW

My response to this post overall is that I think some of what is going on here is that different people and different organizations mean very different things when we say "depression". Since depression is not really a binary, the value of averting "1 case of severe depression" can change a lot depending on how you define severity, such that reasonable definitions of "sufficiently bad depression" can plausibly differ by 1-3x once you break it down into "how many SD counts as curing depression" terms.

However, the in-progress nature of SoGive's mental health work makes pinning down what we do mean somewhat tricky. What exactly did the participants in the SoGive Delphi Process mean when they said "severe depression"? How should I, as an analyst who isn't aiming to set the moral weights but is attempting to advise people using them, interpret that? These things are currently in flux, in the sense that I'm in the process of making various judgement calls about them right now, which I'll describe below.

You commented:


I'm not sure 2-5 SD-years is plausible for severe depression. 3 SDs would saturate the entire scale 0-24.

It's true that a PHQ-9 score of 27 points maxes out at around 2-4 SD. How many SD it is exactly depends on the spread of your population, of course (for example, if 1 SD = 6.1 points, then the 27-point scale spans about 4.4 SD), and for some population spreads it would be 3 SD.
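
To make that conversion concrete, here is a minimal sketch of the points-to-SD arithmetic (the 6.1-points-per-SD spread is just the illustrative figure from this comment, not a SoGive parameter):

```python
# Convert a questionnaire's point range into standard-deviation units.
# The population spread (points per 1 SD) is the key input: the same
# 27-point PHQ-9 scale spans more or fewer SDs depending on that spread.

def scale_span_in_sd(scale_max_points: float, points_per_sd: float) -> float:
    """Number of SDs spanned by a 0-to-scale_max_points scale."""
    return scale_max_points / points_per_sd

print(scale_span_in_sd(27, 6.1))  # ~4.43 SD (the illustrative spread above)
print(scale_span_in_sd(27, 9.0))  # 3.0 SD (a wider population spread)
```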
 

i.e. SoGive would think depression is worse than death. Maybe this isn't quite a "sanity check" but I doubt many people have that moral view.

These two things are related, actually! I think the trouble is that the phrase "severe depression" is ambiguous as to how bad it is, so different people can mean different things by it.

One might argue that the following was an awkward workaround which should have been done differently, but to make my internal thought process transparent (in terms of what I thought after joining SoGive, starting this analysis, and encountering these weights), it went roughly as follows:

-> "hm, this implies we're willing to trade averting 25 years of depression against one (mostly neonatal) death. Is this unusual?" 

-> "Maybe we are thinking about the type of severe, suicidal depression that is an extremely net negative experience, a state which is worse than death." 

-> "Every questionnaire creator seems to have recommended cut-offs for gradients of depression such as "mild" and "moderate"  (e.g. the creators of the PHQ-9 scale are recommending 20 points as the cut-off for "severe" depression) but these aren't consistent between scales and are ultimately arbitrary choices."

-> "extrapolating linearly from the time-trade-off literature people seemed to think that a year of depression breaks even with dying a year earlier around 5.5sd. Maybe less if it's not linear."

-> "But maybe it should be more because what's really happening here is that we're seeing multiple patients improve by 0.5-0.8 sd. The people surveyed in that paper think that the difference between 2sd->3sd is bigger than 1sd->2sd.  People might disagree on the correct way to sum these up." 

-> concluding with me thinking that various reasonable people might set the standard for "averting severe depression" at anywhere between 2-6 SD, depending on whether they wanted ordinary severity or worse-than-death severity.
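
To see how much that threshold choice matters, here is a hedged back-of-the-envelope sketch (the total SD-years figure is hypothetical; only the 2-6 SD threshold range comes from the reasoning above):

```python
# Divide a fixed pool of SD-years of improvement by the SD-years
# threshold adopted for "averting one case of severe depression".

def cases_averted(total_sd_years: float, threshold_sd_years: float) -> float:
    return total_sd_years / threshold_sd_years

TOTAL = 12.0  # hypothetical SD-years produced by some program
for threshold in (2, 3, 4, 5, 6):
    print(f"{threshold} SD-years/case -> {cases_averted(TOTAL, threshold):.1f} cases averted")
# Moving the bar from 2 to 6 SD-years changes the headline count by 3x -
# the same 1-3x spread between reasonable definitions noted above.
```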

So, hopefully that answers your question as to why I wrote to you that 2-5 SD is reasonable for severe depression. I'm going to try to justify this further in subsequent posts. Some additional thoughts I had were:

-> I notice that this still weights depression more heavily than the people surveyed in the time-trade-off literature did, but if we set it at the higher range of 3-6 SD it still feels like a morally plausible view (especially considering that some people might have assigned lower moral weight to neonates).

-> My role is to tell people what the effect is, not to tell them what moral weights to use. However, I'm noticing that all the wiggle room to interpret what "severe" means is on me, and I notice that I keep wanting to nudge the SD-years I accept higher in order to make the view match what I think is morally plausible.

-> I'll just provisionally use something between 3-5 SD-years for the purpose of completing the analysis, because my main aim is to figure out what therapy does in terms of SD.

-> But I should probably publish a tool that allows people to think about moral weights in terms of standard deviations, and maybe in the future we can survey people for moral weights in a manner that lets them talk about standard deviations rather than whatever connotations they attached to "severe depression". Then we can figure out what people really think about various grades of depression, and how much income and life they're willing to trade off against them.

In fact, the next thing I'm scheduled to publish is a write-up that talks in detail about how to translate SD into something more morally intuitive, so hopefully that will help us make some progress on the moral weights issue.

So, to summarize: I think (assuming your calculations w.r.t. everyone else's weights are correct) what's going on here is that it looks like SoGive is weighing depression 4x more heavily than everyone else. But those moral weights were set in the absence of a concrete recommendation, and arguably the gap is an artifact of me choosing, after the fact, to set a really high SD threshold for "severity" as a reaction to the weights. What really needs to happen is that we go through the process I described of polling people again in a way that breaks down "severity" differently. In the final analysis, once a concrete recommendation comes out, it probably won't be that different. (Though you've added two items, SD <-> DALY/WELLBY and cash <-> SD, to my list of things to check for robustness, and if either ends up being notable I'm definitely going to flag it, so thank you for that.) I do think that this story will ultimately end with some revisiting of moral weights: how they should be set, what they mean, and how to communicate them.

(There's another point that came up in the other thread, regarding "does it pass the sanity check w.r.t. cash transfer effects on well being", which this doesn't address. Although it falls outside the scope of my current work, I have been wanting to get a firmer sense of the empirical cash <-> WELLBY <-> SD-of-depression correlations, and apropos of your comments, perhaps this should be made more explicit in moral weights agendas.)

Comment by ishaan on StrongMinds should not be a top-rated charity (yet) · 2023-01-10T02:02:38.023Z · EA · GW

To expand a little on "this seems implausible": I feel like there is probably a mistake somewhere in the notion that anyone involved thinks that <doubling income has a 1.3 WELLBY effect and severe depression has a 1.3 WELLBY effect>.

The mistake might be in your interpretation of HLI's document (it does look like the 1.3 figure is a small part of some more complicated calculation regarding the economic impacts of AMF and their effect on well-being, rather than a headline finding about the cash-to-well-being conversion rate). Or it could be that HLI has an error, or inconsistencies between reports. Or it could be that it's not valid to apply that 1.3 number to the "income doubling" SoGive weights, because it doesn't actually refer to the WELLBY value of doubling.

I'm not sure exactly where the mistake is, so it's quite possible that you're right, or that we are both missing something about how the math behind this works, but I'm suspicious, because it doesn't really fit together with various other pieces of information that I know. For instance, it doesn't really square with how HLI reported psychotherapy as 9x GiveDirectly when the cost of treating one person with therapy is around $80, or with how they estimated that it took $1,000 worth of cash transfers to produce 0.92 SD-years of subjective-well-being improvement ("totally curing just one case of severe depression for a year" should correspond to something more like 2-5 SD-years).
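
As a rough consistency check, here is a minimal sketch using only the figures quoted in this paragraph (all three inputs are quoted estimates, not verified HLI outputs):

```python
# Back-of-the-envelope consistency check on the quoted figures.
cash_sd_years_per_1000 = 0.92    # SD-years per $1,000 of cash transfers
therapy_cost_per_person = 80.0   # approximate cost of therapy per person
therapy_vs_cash_multiple = 9.0   # "psychotherapy is 9x GiveDirectly"

sd_years_per_dollar_cash = cash_sd_years_per_1000 / 1000
sd_years_per_person_therapy = (
    therapy_cost_per_person * sd_years_per_dollar_cash * therapy_vs_cash_multiple
)
print(f"{sd_years_per_person_therapy:.2f} SD-years per person treated")  # ~0.66

# ~0.66 SD-years per person is far below the 2-5 SD-years that "totally
# curing one case of severe depression for a year" would require, which
# is why reading 1.3 WELLBYs as a per-case figure doesn't seem to fit.
```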

I wish I could give you a clearer "ah, here is where I think the mistake is", or perhaps an "oh, you're right after all", but I too am finding the linked analysis a little hard to follow, and I am a bit short on time (ironically, because I'm trying to publish a different piece of StrongMinds analysis before a deadline). Maybe one of the things we can talk about once we schedule a call is how you calculated this and whether it works? Or maybe HLI will comment and clear things up regarding the 1.3 figure you pulled out and what it really means.

Comment by ishaan on StrongMinds should not be a top-rated charity (yet) · 2023-01-10T01:59:46.365Z · EA · GW

Good stuff. I haven't spent that much time looking at HLI's moral weights work, but I think the answer is "something is wrong with how you've constructed the weights; HLI is in fact weighing mental health harder than SoGive". A complete answer to this question requires me to check your calculations carefully, which I haven't done yet, so it's possible that you're right.

If it were true that HLI found that roughly doubling someone's consumption improved well-being as much as averting one case of depression, that would be very important, as it would mean that SoGive's moral weights fail some basic sanity checks. It would imply that we should raise our moral weight on cash-doubling to at least match the cost of therapy, even under a purely subjective-well-being-oriented framework for weighting (why not pay 200 to double income, if it's as good as averting depression and you would pay 200 to avert depression?). This seems implausible.

I haven't actually been directly researching the comparative moral weights aspect personally - I've been focusing primarily on <what's the impact of therapy on depression in terms of effect size> rather than on the "what should the moral weights be" question (though I have paid some attention to the "how to translate effect sizes into subjective intuitions" question, which isn't quite the same thing). That said, when I have more time I will look more deeply into this and check whether our moral weights are failing a sanity check of this order, but I don't think that they are.

Regarding the more general question of "where would we stand if we altered our moral weights to be something else": ask me again in a month or so when all the spreadsheets are finalized; moral weights should be relatively easy to adjust once the analysis is done.

(As Sanjay alludes to in the other thread, I do think all this is a somewhat separate discussion from the GWWC list - my main point about the GWWC list was that StrongMinds is not, in the big picture, actually super out of place with the others in terms of how evidence-backed it is, especially when you consider the background academic literature on the intervention rather than their internal data. But I wanted to address the moral weights issue directly, as it does seem like an important and separate point.)

Comment by ishaan on StrongMinds should not be a top-rated charity (yet) · 2023-01-06T08:05:39.451Z · EA · GW

I'm a researcher at SoGive conducting an independent evaluation of StrongMinds which will be published soon. I think the factual contents of your post here are correct. However, I suspect that after completing the research, I would be willing to defend the inclusion of StrongMinds on the GWWC list, and that the SoGive write-up will probably have a more optimistic tone than your post. Most of our credence comes from the wider academic literature on psychotherapy, rather than from direct evidence from StrongMinds (which we agree suffers from problems, as you have outlined).

Regarding HLI's analysis, I think it's a bit confusing to talk about this without going into the details, because there are both "estimating the impact" and "reframing how we think about moral weights" aspects to the research. Ascertaining the cost and magnitude of therapy's effects must be considered separately from the "therapy will score well when you use subjective well-being as the standard by which therapy, cash transfers, and malaria nets are graded" issue. As of now, I do roughly think that HLI's numbers for the costs and effect sizes of therapy on patients are in the right ballpark. We are borrowing the same basic methodology for our own analysis. You mentioned being confused by the methodology - there are a few points that still confuse me as well, but we'll soon be publishing a spreadsheet model with a step-by-step explainer on the aspects of the model that we are borrowing, which may help.

If you (@Simon_M, or anyone else wishing to work at a similar level of analysis) are planning on diving into these topics in depth, I'd love to get in touch on the Forum and exchange notes.

Regarding the level of evidence: SoGive's analysis framework outlines a "gold standard" for high impact, with "silver" and "bronze" ratings assigned to charities with lower-but-still-impressive cost-effectiveness. However, we also distinguish between "tentative" and "firm" ratings, to acknowledge that some high-impact opportunities are based on more speculative estimates which may be revised as more evidence comes in. I don't want to pre-empt our final conclusions on StrongMinds, but I wouldn't be surprised if "Silver (rather than Gold)" and/or "Tentative (rather than Firm)" ended up featuring in our final rating. Such a conclusion would still be a positive one, on the basis of which donation and grant recommendations could be made.

There is precedent for effective altruists recommending donations to charities for which the evidence is still tentative. Consider that GiveWell recommends "top charities", but also recommends less proven, potentially cost-effective and scalable programs (formerly incubation grants). Identifying these opportunities allows the community to explore new interventions, and can unlock donations that counterfactually would not have been made, as different donors may make different subjective judgement calls about some interventions, or may be constrained as to what they can donate to.

Having established that there are different criteria one might look at to determine when an organization should be included in a list, and that more than one set of standards may be applied, the question arises: what sort of standards does the GWWC top charities list follow, and is StrongMinds really out of place with the others?

Speaking now personally and informally, not on behalf of any current or former employer: I would actually say that StrongMinds has much more evidence backing it than many of the other charities on this list (such as THL, Faunalytics, GFI, and WAI, which by their nature don't easily lend themselves to RCT data). Even if we restrict our scope to direct global health interventions (excluding e.g. pandemic research orgs), I wouldn't be surprised if bright and promising potential stars such as Suvita and LEEP are actually at a somewhat similar stage to StrongMinds - they are generally evidence-based enough to deserve their endorsement on this list, but I'm not sure they've been as thoroughly vetted by external evaluators the way more established organizations such as Malaria Consortium have been. Because of all this, I don't think StrongMinds seems particularly out of place next to the other GWWC recommendations. (Bearing in mind again that I am speaking casually as an individual for this last paragraph, and am not claiming special knowledge of all the orgs mentioned.)

Finally, it's great to see posts like this on the EA forum, thanks for writing it!

Comment by ishaan on Announcing the EA Merch Store! · 2022-12-30T19:42:17.952Z · EA · GW

Cool project! I suggest that the shrimp heart should be a different color: most shrimp are not pink, and only turn pink after cooking (although there are some exceptions, so maybe this is too nitpicky and it's fine?). I'm also not sure whether a living shrimp would typically have a curled-up pose. Alternatively, if you'd rather not do a full image redesign, or if there is a concern that people won't realize it's a shrimp if it looks too different from what they're used to seeing, it might help to instead add "go vegan!" text or something to clarify that it isn't that the sticker-bearer likes eating shrimp.

Comment by ishaan on What should CEEALAR be called? · 2021-06-16T02:03:56.140Z · EA · GW

I thought "EA hotel" was pretty great as a straightforward description, good substitutes might have a word for "ea" and a word for "hotel". So like:

Bentham's Base
Helpers' House

Swap with Lodge, Hollow, or Den if alliteration is too cute - e.g. "Bentham's House" and "Bentham's Lodge" both sound pretty serious.

Or just forget precedent and brand something new, e.g. Runway (or Runway Athena).

Some "just kidding" alliterative options that I couldn't resist:
Crypto crib, Prioritization Place, Utilitarian's Union, Consequentialist Club, Greg's iGloo

Comment by ishaan on EA is a Career Endpoint · 2021-05-20T01:40:15.777Z · EA · GW

What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it?

I guess the "easy" answer is "do a poll with select interviews" but otherwise I'm not sure. I guess it would depends on which specific types of information you mean? To some degree organizations will state what they want and need in outreach. If you're referring to advice like what I said re: "indicate that you know what EA is in your application", a compilation of advice posts like this one about getting a job in EA might help. Or you could try to research/interview to find more concrete aspects of what the "criteria +bar to clear on those criteria" is for different funders if you see a scenario where the answer isn't clearly legible. (If it's a bar at all. For some stuff it's probably a matter of networking and knowing the right person.)

Another general point on collecting advice: I think it's easy to accidentally conflate "in EA" (or even "in the world") with "in the speaker's particular organization, in that particular year, within that specific cause area" when listening to advice. The same goes for what both you and I have said above. For example, my perspective on early-career hiring is informed by my particular colleagues, while your impression that "funders have more money than they can spend" or that the work is all within "a small movement", etc., is not so applicable to someone who wants to work in global health. Getting into specifics is super important.

Comment by ishaan on EA is a Career Endpoint · 2021-05-20T00:44:04.528Z · EA · GW

Heh, I was wondering if I'd get called out on that. You're totally right, everything that happens in the world constitutes evidence of something! 

What I should have said is that humans are prone to the fundamental attribution error, and it is bad to privilege the hypothesis that a rejection is evidence about real skill, experience, resume signalling, degrees, etc., because then you risk working on the wrong things. Rejections are evidence, but they're mostly evidence of a low baseline acceptance rate, and only slightly evidence of other things.

I can imagine someone concluding things like "I'd better get a PhD in the subject so I can signal as qualified and then try again" in a scenario where the thing that actually would have shifted their chances was rewording a cover letter, spending a single day researching examples of well-designed CEAs before the work task, or applying in a different year.

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:33:02.837Z · EA · GW

Another factor which may play a role in the seeming arbitrariness of it all is that orgs are often looking for a very specific thing, or have specific values or ideas that they emphasize, or are sensitive to specific keywords, which aren't always obvious and legible from the outside - leading to communication gaps. To give the most extreme example I've encountered of this: sometimes people don't indicate that they know what EA is about in their initial application, perhaps not realizing that they're being considered alongside non-EA applicants or that it might matter. For specific orgs, communication gaps might get more specific. If you're super interested in joining an org, getting a bit of intel on this can really help (and is a lot easier than trying to get experience somewhere else before re-applying!).

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:11:22.218Z · EA · GW

Also, don't worry about repeated rejections. Even if you are rejected, your application had expected value: it increased the probability that a strong hire was made and that more impact was achieved. The strength of the applicant pool matters. Rejection of strong applicants is a sign of a thriving and competitive movement. It means that the job you thought was important enough to apply to is more likely to be done well by whoever does it.

Rejection should not be taken as evidence that your talent or current level of experience is insufficient. I think that (for most people reading this forum) it's often less a trust/vetting issue and more a bit of randomness. I've applied to lots of places. In some I did not even make it into the first round - totally rejected. In others I was a top candidate, or accepted. I don't think this variance is because of meaningfully differing fit or competitiveness; I think it's because recruiting, grantmaking, or any process where you have to decide between a bunch of applications is idiosyncratic. I'm sure anyone who has screened applications knows what I'm talking about - it's not an exact science. There are a lot of applicants and little time; sometimes snap judgements must be made in a few seconds. At the end we pick a hopefully suitable candidate, but we also miss lots of suitable candidates, sometimes overlooking several "best" candidates. And then there are semi-arbitrary differences in what qualities different screeners emphasize (the interview? a work task? EA engagement? academic degrees?). When there's a strong applicant pool, things are a bit more likely to go well.

(All that said, EA is big enough that all this stuff differs a lot by specific org as well as by broader cause area.)

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:05:32.012Z · EA · GW

Counter-point: if you are interested in an EA job or grant, please do apply, even if you haven't finished school. If you're reading the EA Forum, you are likely in the demographic of people from whom (some) EA orgs and grantmakers want applications.

I just imagined the world where none of my early-career colleagues had applied to EA things. I think that world is plausibly counterfactually worse - possibly a world with fewer EA-adjacent orgs, smaller EA-adjacent orgs, or fewer high-impact EA jobs. I think the dynamic where we have a thriving community of EAs who apply for EA jobs and grants is a major strength of the movement. EA orgs benefit greatly from having strong applicants relative to the wider hiring market. I hope everyone keeps erring on the side of applying!

But also, yes, definitely do look outside of EA - try your best to actually evaluate impact, and don't be biased by whether or not something is labeled "EA".
 

Comment by ishaan on EA Debate Championship & Lecture Series · 2021-04-05T18:01:22.201Z · EA · GW

Thanks for hosting this event! It was a pleasure to participate. 

Comment by ishaan on The Intellectual and Moral Decline in Academic Research · 2020-09-28T17:23:09.089Z · EA · GW

Without making claims about the conclusions, I think this argument is of very poor quality and shouldn't update anyone in any direction.

"As taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent"

Taking all claims at face value, you should not be persuaded that more money causes retractions just because retractions increased roughly in proportion to the overall growth of the industry. I checked the cited work to see if there were any mitigating factors which justified the claim (since maybe I didn't understand it, and since sometimes people make bad arguments for good conclusions), and it actually got worse: they ignored the low base rate of retraction (it's 0.2%), they compared US-only grants with global retractions, they did not account for increased oversight and standards, and so on.

The low quality of the claim, in combination with the fact that the central mission of this think tank is lobbying for reduced government spending on universities and increased political conservatism on campuses in North Carolina, suggests that the logical errors and mishandling of statistics we are seeing here are partisan motivated reasoning in action.

Comment by ishaan on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T04:48:42.979Z · EA · GW

This matches my understanding; however, I think it is normal for non-profits at the current budget size of the EA ecosystem to have this structure.

Bridgespan identified 144 nonprofits that have gone from founding to at least $50 million in revenue since 1970...[up to 2003]...we identified three important practices common among nonprofits that succeeded in building large-scale funding models: (1) They developed funding in one concentrated source rather than across diverse sources; (2) they found a funding source that was a natural match to their mission and beneficiaries; and (3) they built a professional organization and structure around this funding model.

- How Non-Profits Get Really Big

Some common alternatives are outlined here: Ten Non-Profit Funding Models.

Within this framework, I would describe the EA community as currently using a hybrid between "Member Motivator" (cultivating many individual donors who feel personally involved with the community - such as the GWWC model) and "Big Bettor" (such as the relationship between Good Ventures and the ecosystem of EA organizations).

Comment by ishaan on How have you become more (or less) engaged with EA in the last year? · 2020-09-10T18:29:26.090Z · EA · GW

This time last year, I started working at Charity Entrepreneurship after attending the 2019 incubation program (more about my experience here). I applied to the 2019 incubation program after meeting CE staff at EAG London 2018. Prior to that, my initial introduction to EA was in 2011 via LessWrong, and the biggest factor in retaining my practical interest sufficiently to go to a conference was being impressed by the work of GiveWell. The regular production of interesting content by the community also helped remind me about it over the years. 80k's career advice also introduced me to some concepts (for example, replaceability) which may have made a difference.

Going forward, I anticipate more engagement with both EA specifically and the concept of social impact more generally, because through working at CE I have acquired a better practical understanding of how to maximize impact than I had before, as well as more insight into how to leverage the EA community specifically towards achieving impact (whereas my prior involvement consisted mostly of reading and occasionally commenting).

Comment by ishaan on Are there any other pro athlete aspiring EAs? · 2020-09-08T19:19:05.103Z · EA · GW

It's a cool idea! Athletes do seem to have a lot of very flexible and general-purpose fundraising potential, and I think it makes a lot of sense to try to direct it effectively. Charity Entrepreneurship (an incubation program for founding effective non-profits) works with Player's Philanthropy Fund (a service which helps athletes and other entities create dedicated funds that can accept tax-deductible contributions in support of any qualified charitable mission) to help our new charities, which have not yet completed the fairly complex process of formally registering as non-profits, get off the ground. You can actually see us on the roster, alongside various athletes. This doesn't mean we are actually working with athletes - we are just using some of the same operations infrastructure - but it might be a useful thing to know. In general, I've noticed that there is quite a bit of infrastructure similar to PPF aimed at helping athletes do charitable fundraising, which I think is a good sign that this idea is promising.

Comment by ishaan on The community's conception of value drifting is sometimes too narrow · 2020-09-04T21:12:27.320Z · EA · GW

I think what is causing some confusion here is that "value drift" is (probably?) a loanword from AI alignment, which (I assume?) originally referred to very fundamental changes in goals that would unintentionally occur within iterative versions of self-improving intelligences... which isn't really something that humans do. The EA community borrowed this sort of scary alien term and is using it to describe a normal human thing that most people would ordinarily just call "changing priorities".

A common-sense way to say this is that you might start out with great intentions, your priorities end up changing, and then your best intentions never come to life. It's not that different from when you meant to go to the gym every morning... but then a phone call came, and then you had to go to work, and now you're tired and sitting on the couch watching television instead.

Logistically, it might make sense to do the phone call now and the gym later. The question is: will you actually go to the gym later? If your plan involves going later, are you actually going to go? And if not, maybe you should reschedule the call and just go to the gym now. I don't see it as a micro-death that you were hoping to go to the gym but did not; it's that over the day other priorities took precedence, and then you became too tired. You're still the same person who wanted to go... you just... didn't go. Being the person who goes to the gym requires building a habit and reinforcing the commitment, so if you want to go then you should keep track of which behaviors cause you to actually go and which behaviors break the habit and lead to not going.

Similarly, you should track "did you actually help others?" And if your plan involves waiting for a decade... are you actually going to do it then? Or is life going to have other plans? That's why the research on this does (and ought to) focus on things like "are donations happening", "is direct work getting done", and so on, because that's what is practically important if your goal is to help others. You might argue for yourself "it's really ok, I really will help others later in life", or "what if I care about some stuff more than helping others", and so on. But I think someone who is attempting to effectively help others partly through the work of other people (whether through donations or career or otherwise) over the course of decades should, to some degree, consider what usually happens to people's priorities in aggregate when modeling courses of action.

Comment by ishaan on Book Review: Deontology by Jeremy Bentham · 2020-08-18T00:11:11.010Z · EA · GW

Cool write up!

Before I did research for this essay, I envisioned Bentham as a time traveller from today to the past: he shared all my present-day moral beliefs, but he just happened to live in a different time period. But that’s not strictly true. Bentham was wrong about a few things, like when he castigated the Declaration of Independence

Heh, I would not be so sure that Bentham was wrong about this! It seems like quite a morally complex issue to me, and Bentham makes some good points.

what was their original, their only original grievance? That they were actually taxed more than they could bear? No; but that they were liable to be so taxed...

This line of thought is all quite true. Americans (at least, the free landholders whose interests were being furthered by the Declaration) were at the time among the wealthiest people in the world, and paid among the lowest taxes - less taxed than English subjects. They weren't oppressed by any means; British rule had done them well.

But rather surprising it must certainly appear, that they should advance maxims so incompatible with their own present conduct. If the right of enjoying life be unalienable, whence came their invasion of his Majesty’s province of Canada? Whence the unprovoked destruction of so many lives of the inhabitants of that province?

This too remains pertinent to modern discourse. In response to Pontiac's Rebellion, a revolt of Native Americans led by Pontiac, an Ottawa chief, King George III declared all lands west of the Appalachian Divide off-limits to colonial settlers in the Proclamation of 1763.

Americans did not like that. The Declaration of Independence ends with the following words:

“He (King George III) has excited domestic insurrections amongst us, and has endeavored to bring on the inhabitants of our frontiers, the merciless Indian savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes, and conditions.”

The Declaration of Independence voided the Proclamation of 1763, which contributed to the destruction of the Native Americans - a fact which is not hindsight but was understood at the time. Notice how indigenous communities still thrive in Canada, where the proclamation was not voided. There is also an argument that slavery was prolonged as a result, and that this too is not hindsight but was understood at the time.

Of course, I doubt the British were truly motivated by humanitarian concern, and it's not clear to me from this piece that even Bentham is particularly motivated to worry about the indigenous peoples (vs. just using their suffering as a rhetorical tool to point out the hypocrisy of the out-group where it fits his politics) - you can tell he focuses more on the first, economic point than the second, humanitarian one. But his critiques would all be relevant had this event occurred today.

Really, I think with the hindsight of history, that entire situation was less a moral issue and more a shift in the balance of power between two equally amoral forces - both of which employed moral arguments in their own favor, but only one of which won and was subsequently held up as morally correct.

I think the lesson to be learned here is less that Bentham was ahead of his time, and more that we are not as "ahead" in our time as we might imagine - e.g. we continue to teach everyone that things which were bad were good, and we continue to justify our violence in similar terms. One thing I've noticed in reading old writings is that many people knew that what was going on was bad and that history would frown upon it, but they continued to do it anyway (e.g. Jefferson's and many others' writings on slavery largely condemn it, but they kept practicing it more or less because that was the way things were done - which is also not unlike today).

Comment by ishaan on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-07T00:29:28.890Z · EA · GW

Idk, but in theory they shouldn't, as pitch is sensed by the hairs on the section of the cochlea that resonates at the relevant frequency.

Comment by ishaan on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T19:34:23.851Z · EA · GW

A forum resource on ToC in research which I found insightful: Are you working on a research agenda? A guide to increasing the impact of your research by involving decision-makers

Should they

Yes, but ToCs don't improve impact in isolation (you can imagine a perfectly good ToC for an intervention which doesn't do much). Also, if you draw a nice diagram but it doesn't actually inform any of your decisions or change your behavior in any way, then it hasn't really done anything. A ToC is ideally combined with cost-benefit analyses, comparison of multiple avenues of action, etc., and it should pay you back in the form of concrete, informative actions - e.g. consulting stakeholders to check your research questions, and generally creating checkpoints at which you try to get measurements, indicators, and opinions from relevant people.

For more foundational and theoretical questions where the direct impact isn't obvious, there may be a higher risk of drawing a diagram which doesn't do anything. I think there are ways to avoid this: understand the relevance of your research to other (ideally more practical) researchers you've spoken to about it, such as through a peer review process; make a conceptual map of where your work fits in among other ideas which then lead to impact; and try to get as close to the practical level as you realistically can. If it's really hard to tie your work to the practical level, that is sometimes a sign that you might need to re-evaluate the activity.

Do they

Back in academia, I didn't even know what a "theory of change" was, so I think not. But one is frequently asked to state the practical and theoretical value of one's research, and the peer review and grant writing process implicitly incorporates elements of stakeholder relevance. However, as an academic, if you fail to make your own analyses separately from this larger infrastructure, you may end up following institutional priorities (of grant makers, of academic journals, etc.) which differ from "doing the most good" as you conceptualize it.

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T02:24:33.503Z · EA · GW

The tricky part of social enterprise, from my perspective, is that high-impact activities are hard to find, and I figure they would be even harder to find under the additional constraint that they must be self-sustaining. Which is not to say that you might not find one (see here and here), just that finding an idea that works is arguably the trickiest part.

for-profit social enterprises may be more sustainable because of a lack of reliance on grants that may not materialise;

This is true, but keep in mind that impact via social enterprise may be "free" in terms of funding (so very cost-effective), but it comes with opportunity costs in terms of your time. When you generate impact via social enterprise, you are essentially your own funder. Therefore, for a social enterprise to beat your earning-to-give baseline, its net impact must exceed the good you would have done by donating to a GiveWell top charity whatever you would have earned on a high-earning path. (This is of course also true for non-profit and other direct work paths.) Basically, social enterprises aren't "free" (since your time isn't free), so it's a question of finding the right idea, and then also deciding whether the restrictions inherent in being self-sustaining are easier than the restrictions (and funding counterfactuals) inherent in getting external funding.

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T03:24:12.062Z · EA · GW
However, I'm sceptical of charity entrepreneurship's ability to achieve systemic change - I'd probably (correct me if I'm wrong) need a graduate degree in economics to tackle the global economic system.

It might plausibly be helpful to hire staff who have graduate degrees in economics, but I think you would not necessarily need a graduate degree in economics yourself in order to start an organization focused on improving economic policy. Of course, it's hard to say for sure until it's tried - but there's a lot that goes into running an organization, and it takes many different skills and types of people to make it come together. Domain expertise is only one part of it. A lot of great charities (e.g. GiveWell, AMF) were started by people who didn't enter with domain expertise or related degrees. (None of which is to say that economics isn't a strong option for a variety of paths; only that you shouldn't put the path of starting an organization in the "I need a degree first" box.)

(As for my opinion more generally, I do think that social entrepreneurship would under-perform relative to pure EtG (if you give to the right place), and also under-perform relative to focused non-profit or policy work (if you work on the right thing), because it has to simultaneously turn a profit and achieve impact, which really limits the flexibility to work on the highest-impact things. But it primarily depends on what specifically you're working on, in every case.)

Comment by ishaan on Where is it most effective to found a charity? · 2020-07-06T16:49:45.036Z · EA · GW

I've never done this myself, but here are bits of info I've absorbed through osmosis by working with people who have.
-Budget about 50-100 hours of work for registration. I'm not sure which countries require more work in this regard.
-If you're working with a lot of international partners, some countries have processes that are more recognized than others. The most internationally well-known registration type is America's 501(c)(3) - which means that even if you were to work somewhere like India, for example, people are accustomed to working with 501(c)(3)s and know the system. This is less important if you aren't working with partners.
-If you are planning to get donations mostly from individuals, consider where those individuals are likely to live and what the laws regarding tax deductibility are. Large grantmakers are more likely to be location-agnostic.
-You don't need to live where you register, but if you want to grant a work visa to fly an employee to a location, you will generally need to be registered in that location.

If you're interested in starting a charity, you should consider auditing Charity Entrepreneurship's incubation program, and applying for the full course next year. The audit course will have information about how to pick locations for the actual intervention (which usually matters more for your impact than where you register). The full course for admitted students additionally provides guidance and support for operations/registration-type work.

Comment by ishaan on EA Forum feature suggestion thread · 2020-06-28T13:02:17.988Z · EA · GW

I posted some things in this comment, and then realized the feature I wanted already existed and I just hadn't noticed it - which brings to mind another issue: how come one can retract and overwrite, but not delete, a comment?

Comment by ishaan on Dignity as alternative EA priority - request for feedback · 2020-06-26T14:00:48.236Z · EA · GW
What evidence would you value to help resolve what weight an EA should place on dignity?

Many EAs tend to think that most interventions fail, so if you can't measure how well something works, chances are high that it doesn't work at all. To convince people who think that way, it helps to have a strong justification for incorporating a metric which is harder to measure over well-established and easier-to-measure metrics such as mortality and morbidity.

In the post on happiness you linked by Michael, you'll notice that he has a section comparing subjective well-being to traditional health metrics. A case is made that improving health does not necessarily improve happiness. This is important, because death and disability are easier to measure than things like happiness and dignity, so if an easier metric is a good proxy, it should be used. If it turned out that the best way to improve dignity is e.g. to prevent disability, then in light of how much easier disability prevention is to measure, it would not be productive to switch focus. (Well, maybe. You might also take a close association between metrics as a positive sign that you're measuring something real.)

To get the EA community excited about a new metric, if that seems realistically possible, then I'd recommend following Michael's example in this respect: after establishing a metric for dignity, try to determine how well existing top GiveWell interventions do on it, see what the relationship is with other metrics, and then see if there are any interventions that plausibly do better.

I think this could plausibly be done. There are a lot of people who favor donations to GiveDirectly because of the dignity/autonomy angle (cash performs well on quite a few metrics and perspectives, of course) - I wouldn't be surprised if there are donors who would be interested in whether you can do better than cash from that perspective.

Comment by ishaan on EA considerations regarding increasing political polarization · 2020-06-25T14:42:10.619Z · EA · GW
Why effective altruists should care

Opposing view: I don't think these are real concerns. The Future of Animal Consciousness Research citation boils down to "what if research in animal cognition is one day suppressed due to being labeled speciesist" - that's not a realistic worry. The Vox thinkpiece emphasizes that we are in fact efficiently saving lives - I see no critiques there that we haven't also internally voiced to ourselves as a community, and I don't think it's realistic to expect coverage of us not to include such critiques, regardless of political climate. According to a Google search, the only folks even discussing that paper are long-termist EAs. I don't think AI alignment is politically polarized either, except as a special case of vague resentment towards Silicon Valley elites in general.

Sensible people on every part of the political spectrum will agree that animal and human EA interventions are good, or at least neutral. The most controversial it gets is that people will disagree with the implication that these are the best ways to do good... and why not? We internally often disagree on that too. Most people won't understand AI alignment well enough to have an opinion beyond vague ideas about tech and tech people. Polarization is occurring, but none of this constitutes evidence regarding political polarization's potential effect on EA.

Comment by ishaan on EA and tackling racism · 2020-06-16T20:09:14.154Z · EA · GW

a) Well, I think the "most work is low-quality" aspect is true, but it's also fully general to almost everything (even EA). Engagement requires doing that filtering process.

b) I think seeking not to be "divisive" here isn't possible - issues of inequality on global scales and ethnic tension on local scales are in part caused by some groups of humans using violence to lock other groups of humans out of access to resources. Even for me to point that out is inherently divisive. Those who feel aligned with the higher-power group will tend to feel defensive and will wish not to discuss the topic, while those who feel aligned with lower-power groups, as well as those who have fully internalized that all people matter equally, will tend to feel resentful about the state of affairs and will keep bringing up the topic. The process of mind-changing is slow, but I think if one tries to let go of in-group biases (especially by recognizing that the biases exist) and internalizes that everyone matters equally, one will tend to shift in attitude.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:59:58.533Z · EA · GW
I've seen a lot of discussion of criminal justice reform

Well, I do think discussion of it is good, but if you're referring to resources directed to the cause area: it's not that I want EAs to redirect resources away from low-income countries towards solving disparities in high-income countries, and I don't necessarily consider this related to the self-criticism-as-a-community issue. I haven't really looked into this issue, but on prior intuition I'd be surprised if American criminal justice reform compares very favorably in terms of cost-effectiveness to e.g. GiveWell top charities, reforms in low-income countries, or reforms regarding other issues. (Of course, prior intuitions aren't a good way to make these judgements, so right now that's just a "strong opinion, weakly held".)

My stance is basically no on redirecting resources away from basic interventions in low-income countries towards other stuff, but yes on advocating that each individual try to become more self-reflective and knowledgeable about these issues.

I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias

I agree - that's not an example of bias. This is one of those situations where a word has gotten too big to be useful: "supportive of capitalism" has come to stand for a uselessly large range of concepts. The same person might be critical of private property, or think it has sinister/exploitative roots, and also support sensible growth-focused economic policies which improve outcomes via market forces.

I think the fact that EA has common-sense appeal to a wide variety of people with various ideas is a great feature. If you are actually focused on doing the most good, you will start becoming less abstractly ideological and more practical, and I think that is the right way to be. (Although I think a lot of EAs unfortunately stay abstract and end up supporting anything that's labeled "EA", which is also wrong.)

My main point is that if someone is serious about doing the most good, and is working on a topic that requires a broad knowledge base, then a reasonable understanding of the structural roots of inequality (including how gender, race, class, and geopolitics play into it) should be one part of their practical toolkit. In my personal opinion, while a good understanding of this sort of thing generally does lead to a certain political outlook, it's really more about adding things to your conceptual toolbox than it is about which -ism you rally around.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:51:34.269Z · EA · GW
What are some of the biases you're thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?

The longer answer to this question: I am not sure how to give a productive answer. In the classic "cognitive bias" literature, people tend to immediately accept that the biases exist once they learn about them (...as long as you don't point them out right at the moment they are engaged in them). That is not the case for these issues.

I had to think carefully about how to answer, because (when speaking to the aforementioned "randomly selected people who went to prestigious universities", as well as when speaking to EAs) such issues can be controversial and trigger defensiveness. These topics are political and cannot be de-politicized; I don't think there is any bias I can simply state that isn't going to be upvoted by those who already agree and dismissed as a controversial political opinion by those who don't, which isn't helpful.

It's analogous to walking into a random town hall and proclaiming "There's a lot of anthropomorphic bias going on in this community, for example look at all the religiosity" or "There's a lot of speciesism going on in this community, look at all the meat eating". You would not necessarily make any progress on getting people to understand. The only people who would understand are those who know exactly what you mean and already agree with you. In some circles, the level of understanding would be such that people would get it; in others, such statements would produce minor defensiveness and hostility. The level of understanding vs. defensiveness and hostility in the EA community regarding these issues is similar to that of randomly selected prestigious university students (that is, much more understanding than the population average, but less than ideal). As with anthropomorphic bias and speciesism, there are some communities where certain concepts are implicitly understood by most people and need no explanation, and some where they aren't. It comes down to what someone's point of view is.

Acquiring an accurate point of view, and moving a community towards an accurate point of view, is a long process of truth-seeking. It is a process of un-learning a lot of things that you very implicitly hold true. It wouldn't work to just list biases. If I start listing out things like (unfortunately poorly named) "privilege-blindness" and (unfortunately poorly named) "white fragility", I doubt it's going to have any positive effect other than making people who already agree nod to themselves, while other people roll their eyes, and still others google the terms and then roll their eyes. Criticizing things such that something actually goes through is pretty hard.

The productive process involves talking to individual people, hearing their stories, having first-hand exposure to things, and reading and evaluating a variety of writings on the topic. I think a lot of people think of these issues as "identity-political topics", or "topics that affect those less fortunate", or "poorly formed arguments to be dismissed". I think progress occurs when we frame-shift towards thinking of them as "practical everyday issues that affect our lives", "how can I better articulate these real issues to myself and others", and "these issues are important factors in generating global inequality and suffering, an issue which affects us all".

Comment by ishaan on EA and tackling racism · 2020-06-14T19:49:49.161Z · EA · GW
What are some of the biases you're thinking of here?

This is a tough question to answer properly, both because it is complicated and because I think not everyone will like the answer. There is a short answer and a long answer.

Here is the short answer. I'll put the long answer in a different comment.

Refer to Sanjay's statement above

There are some who would argue that you can't tackle such a structural issue without looking at yourselves too, and understanding your own perspectives, biases and privileges...But I worried that tackling the topic of racism without even mentioning the risk that this might be a problem risked seeming over-confident.

At the time of writing, this is sitting at negative 5 karma. Maybe it won't stay there, but this innocuous comment was sufficiently controversial that it's there now. Why is that? Is anything written there wrong? I think it's a very mild comment pointing out an obviously true fact - that a community should also be self-reflective and self-critical when discussing structural racism. Normally EAs love self-critical, skeptical behavior. What is different here? Even people who believe that "all people matter equally" and "racism is bad" are still very resistant to having self-critical discussions about it.

I think that understanding the psychology of defensiveness surrounding the response to comments such as this one is the key to understanding the sorts of biases I'm talking about here. (And to be clear - I don't think this push-back against this line of criticism is specific to the EA community; I think the EA community is responding as any demographically similar group would. Meaning, this is general civilizational inadequacy at work, not something about EA in particular.)

Comment by ishaan on EA and tackling racism · 2020-06-10T20:27:07.521Z · EA · GW

I broadly agree, but in my view the important part to emphasize is what you said in your final thoughts (about seeking to ask more questions about this of ourselves and as a community), and less the intervention recommendations.

Is EA really all about taking every question and twisting it back to malaria nets ...?... we want is to tackle systemic racism at a national level (e.g. in the US, or the UK).

I bite this bullet. I think you do ultimately need to circle back to the malaria nets (especially if you are talking more about directing money than about directing labor). I say this as someone who considers myself as much a part of the social justice movement as I do part of the EA movement.

Realistically, I don't think it's plausible that tackling stuff in high-income countries is going to be more morally important than malaria-net-type activities, at least when it comes to fungible resources such as donations (the picture gets more complex with respect to direct work, of course). It's good to think about what the cost-effective ways to improve matters in high-income countries might be, but realistically I bet once you start crunching numbers you will find that malaria-net-type activities should still be the top priority by a wide margin if you are dealing with fungible resources. I think the logical conclusions of anti-racist/anti-colonialist thought converge on this as well. In my view, the things that social justice activists are fighting for ultimately do come down to the basics of food, shelter, and medical care, and the scale of that fight has always been global, even if the more visible portion generally plays out in one's more local circles.

However, I still think putting thought into how one would design such interventions should be encouraged, because:

our doubts about the malign influence of institutional prejudice...should reach ourselves as well.

I agree with this, and would encourage more emphasis on it. The EA community (especially the rationality/LessWrong part of the community) puts a lot of effort into getting rid of cognitive biases. But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society built upon exploitation, I don't think the EA community does better than any other randomly selected group of people from a similar demographic (let's say, randomly selected people who went to prestigious universities). And that's kind of weird. We're a group of people who are trying to achieve social impact. We're often people who wield considerable resources and have to work with power structures all the time. It's a bit concerning that the community's level of knowledge of the bodies of work that deal with these issues is just average.

I don't really mean this as a call to action (realistically, given the low current state of awareness, attempting action would probably result in misguided or heavy-handed solutions). What I do suggest is this: many of you spend some of your spare time reading and thinking about cognitive biases, trying to better understand yourself and the world, and consider this a worthwhile activity. I think it would be worth applying a similar spirit to spending time really understanding these issues as well.

Comment by ishaan on Effective Animal Advocacy Resources · 2020-05-25T04:33:25.479Z · EA · GW

Super helpful, I'm about to cite this in the CE curriculum :)

Comment by ishaan on Why I'm Not Vegan · 2020-04-10T17:40:04.006Z · EA · GW
I get much more than $0.43 of enjoyment out of a year's worth of eating animal products

I think we would likely not justify a moral offset for harming humans at (by the numbers you posted) $100/year, or eating children at $20/pound ($100/year × 15 years / 75 pounds). This isn't due to sentimentality, deontology, taboo, or biting the bullet - I think a committed consequentialist, one grounded in practicality, would agree that no good consequences would likely come from allowing that sort of thing, and I think this logic probably applies to meat as well.

I think overall it's better to look first at the direct harm vs direct benefit, and how much you weigh the changes to your own experience against the suffering caused. The offset aspect is not unimportant, but I think it's a bit misleading when not applied evenly in the other direction.

I am sympathetic to morally weighing different animals orders of magnitude differently. We have to do that in order to decide how to prioritize between different interventions.

That said, I don't think human moral instincts for these sorts of cross-species trolley problems are well equipped for numbers bigger than 3-5. Your moral instincts can (I would say, accurately) inform you that you would rather avert harm to a person than to 5 chickens, but when you get into the 1000s you're pretty firmly in torture vs dust specks territory and should not necessarily just trust your instincts. That doesn't mean orders of magnitude differences are wrong, but it does mean they're potentially subject to a lot of bias and inconsistency if not accompanied by some methodology.

Comment by ishaan on Help in choosing good charities in specific domains · 2020-02-20T19:07:53.955Z · EA · GW

Charity Entrepreneurship is incubating new family planning and animal welfare organizations, which will aim to operate via principles of effective altruism - potentially relevant to your interests.

Comment by ishaan on Who should give sperm/eggs? · 2020-02-12T23:37:53.893Z · EA · GW

Since you are asking "who" should do it (rather than whether more or fewer people in general should do it, which seems the more relevant question, since it would carry the bulk of the effect): I would wish to replace any anonymous donors with people who are willing to take a degree of responsibility for, and engagement with, the resulting child and their feelings about it. Looking at opinion polls of donor-conceived people has made me think there's a reasonable chance they experience negative emotions about the whole thing at non-negligible rates, and it is possible that this might be mitigated by having a social relationship with the donor.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2020-01-17T06:44:51.687Z · EA · GW

Spend some time brainstorming and compare multiple alternative courses of action and potential hurdles before embarking on one. Consider using a spreadsheet to augment your working memory when you evaluate actions by various criteria. Get a sense of expected value per unit of time for a given task so you can decide how long it's worth spending on it; enforce this via time capping / time boxing, and if you find yourself working much longer on a task than you estimated, re-evaluate what you are doing. Time-track which task you spend your working hours on to become more aware of time in general. Personally, I don't think I fully appreciated how valuable time was, and how much of it I was sometimes wasting unintentionally, before tracking it (although I could see some people finding this stressful).
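To make the expected-value-per-time idea concrete, here's a minimal sketch of the kind of triage a spreadsheet (or a few lines of code) can do - the task names and numbers are entirely made up for illustration:

```python
# Illustrative sketch of "expected value per time" triage; all numbers invented.
tasks = [
    # (task, estimated value, estimated hours)
    ("write grant application", 90, 12),
    ("polish website copy",     10,  8),
    ("answer routine emails",    6,  2),
]

# Rank tasks by expected value per hour, then time-cap each one accordingly.
for name, value, hours in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
    print(f"{name}: {value / hours:.1f} value/hour -> cap at ~{hours}h, re-evaluate if exceeded")
```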

Of course, this is all sort of easier said than done, haha. I think to some degree watching other people actually do the things one is supposed to do helps reinforce the habit.


Comment by ishaan on Growth and the case against randomista development · 2020-01-17T06:28:24.021Z · EA · GW

Any discussion of how much it might cost to change a given economic policy / the limiting factor that has kept it from changing thus far?

(I think this is also the big question with health policy)

Comment by ishaan on Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? · 2020-01-13T00:21:50.493Z · EA · GW

"Rejecting" would be a bit unusual, but of course you should honestly advise a well-qualified candidate if you think their other career option is higher impact. I think it would be ideal if everyone gave others their honest advice about how to do the most good, roughly regardless of circumstance.

I've only seen a small slice of things, but my general sense is that people in the EA community do in fact live up to this ideal, regularly turning down and redirecting talent as well as funding and other resources towards the thing that they believe does the most good.

Also, although it might ultimately add up to the same thing, I think it brings more clarity to think along the lines of "counterfactual impact" (estimating how much unilateral impact an individual's alternative career choices have) rather than "comparative advantage", which is difficult to assess without detailed awareness of the multiple other actors you are comparing against.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2019-12-16T17:14:34.956Z · EA · GW

I went to the program, was quite impressed with what I saw there, and decided to work at Charity Entrepreneurship.

Before attending the program, I was considering academia, earning to give, direct work in the global poverty space, and a few other more offbeat career paths. After the program, I'd estimate that I've significantly increased the expected value of my own career in terms of impact (perhaps by 3x-12x or more), thanks to:

1) the direct impact of CE itself and associated organizations. I can say that, in terms of what I've directly witnessed, there's a formidable level of productive work occurring at this organization. My own level of raw productivity has risen quite a bit by being in proximity and picking up good habits. I'm pretty convinced that this productivity translates into impact (although on that count, you can evaluate the key assumptions and claims yourself by looking at the cost-effectiveness models and historical track record).

2) practical meta-skills I've picked up regarding how to think about personal impact. Not only did I change my mind and update on quite a few important considerations, but there were also quite a few things that I didn't even realize were considerations before attending the program. I think my decision making going forward will be better now.

3) connections and network to other effective altruists, and general knowledge about the effective altruism movement. Prior to attending the program my engagement with the community was on a rather abstract level. Now, if I wanted to harness the EA community to accomplish a concrete action in the global poverty or animal space, I'd know roughly what to do and who to talk to and how to get started.

4) the career capital from program related activities.

Also, I had a good time. If you enjoy skill building and like interacting with other effective altruists, the program is quite fun.

Happy to answer any questions.

Comment by ishaan on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-11-20T13:11:34.932Z · EA · GW

I'm sure there's a better document somewhere addressing these, but I'll just quickly say that people tend to regret taking up smoking and often want to stop, that tobacco smoking reduces quality of life, and that smokers often support raising tobacco taxes if the money goes to addressing the (very expensive!) health problems caused by smoking (e.g. this sample, and I don't think this pattern is unique). So I think bringing tobacco taxes in line with recommendations is good under most moral systems, even those which strongly prioritize autonomy - this is a situation where smokers seem to be straightforwardly stating that they'd rather not behave this way.

Eric Garner died because the police approached him on suspicion of selling illegal cigarettes and then killed him - I don't think that's realistically attributable to tobacco taxation.

Comment by ishaan on List of EA-related email newsletters · 2019-10-10T08:42:43.054Z · EA · GW

For global health, don't forget GiveWell's newsletter!

For meta, Charity Entrepreneurship has one as well (scroll to the middle of the page for the newsletter).

Comment by ishaan on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-18T19:00:29.858Z · EA · GW
Do you have any opinions that you would be reluctant to express in front of a group of your peers? If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Not necessarily! You might just be less averse to disagreement. Or perhaps you (rightly or wrongly) feel less personally vulnerable to the potential consequences of stating unpopular opinions and criticism.

Or, maybe you did quite a lot of independent thinking that differed dramatically from what you were "told", and then gravitated towards one or more social circles that happen to have greater tolerance for the things you believe, which perhaps one or more of your communities of origin did not.

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-18T17:32:43.036Z · EA · GW

I agree that more people trying to do cost-effectiveness analyses is good! I regret that the tone seemed otherwise and will consider it more in the future. I engaged with it primarily because I, too, often wonder about how one might improve impact outside of impact-focused environments, and I generally find it an interesting direction to explore. I also applaud that you made the core claim clearly and boldly, and I would like to see more of that as well - all models suffer these flaws to some degree, and it's a great virtue to make clear claims designed such that any mistakes will be caught (as described here). Thanks for doing the piece, and I hope you can use these comments to continue to create models of this and other courses of action :)

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-17T20:23:03.360Z · EA · GW

I think the biggest improvement would be correcting the fact that this model (accidentally, I think) assumes that improving any arbitrary high-budget charity by 5% is equally as impactful as improving a GiveWell-equivalent charity by 5%. Most charities' impact is an order of magnitude smaller, or more.

You could solve this with a multiplier for the charity's impact at baseline.

If I understand correctly, you figure that if you become a trustee of a charity with a £419,668/year budget and improve its cost-effectiveness by 5%, you can divide that gain by your 42 hours a year to get £419,668 × 5% / 42 hours ≈ £500/hour as the value of your donated time. (A style tip - it would be helpful to put the key equation describing roughly what you've done in the description, to make it all legible without having to go into the spreadsheet.)

I think it is fair to say that, were you to successfully perform this feat, you would indeed have done something roughly as impactful as providing £500/hour of value to the charity you are trustee-ing for. So, if you improved a GiveWell-top-charity-equivalent's cost-effectiveness by 5% for a year, then maybe you could fairly take 5% of that charity's yearly budget and divide it by your hours for that year, as you've done, to calculate your GiveWell-top-charity-equivalent impact in terms of how it compares to donated money.

But if you improve a £419,668/yr-budget charity which is only 1% as cost-effective as a GiveWell top charity by 5%, then your hourly impact is 1% × £419,668 × 5% / 42 hours ≈ £5/hour of GiveWell-top-charity-equivalent impact - you'd be better served working a bit extra and donating the £5 to GiveWell.
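To make the proposed adjustment concrete, here's a minimal sketch of the model with the baseline-effectiveness multiplier added (the function and parameter names are mine, not from the original spreadsheet):

```python
# Hourly value of trustee time, in GiveWell-top-charity-equivalent terms.
# A sketch of the adjusted model; names and structure are illustrative.
def trustee_value_per_hour(annual_budget, improvement, hours_per_year,
                           relative_effectiveness=1.0):
    # relative_effectiveness: the charity's baseline cost-effectiveness as a
    # fraction of a GiveWell top charity (the proposed multiplier).
    return annual_budget * improvement * relative_effectiveness / hours_per_year

print(trustee_value_per_hour(419_668, 0.05, 42))        # ~£500/hour (original model)
print(trustee_value_per_hour(419_668, 0.05, 42, 0.01))  # ~£5/hour (1%-as-effective charity)
```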

I don't find this model credible even after these adjustments, as I'm skeptical of the structure, but you did make those assumptions explicit, which is good. If you think the effect takes ~42 hours/year, then this hypothesis is potentially cheap to just test in practice, after which you can revise your model with more information. Have you joined any boards and tried this in practice - and if so, how did it go?

edit - ah, you're using the term "5% increase" very differently.

Instead it assumes a 5% increase, perhaps from £0 of impact to 5% of the annual income or perhaps from 100% of annual income to 105%

So, just to be clear, this implies that producing impact worth 100% of your annual income would make you the most cost-effective charity in the world (or whatever other benchmark you want to set at "100%"). Used in this sense, "5% increase" doesn't mean "the shelter saves 5% more kittens" but that the charity as a whole has gone from the long tail of negligible impact to being 1/20th as cost-effective as the most cost-effective charity in the world. This isn't the way percentages are usually expressed, and it seems like a confusing way to express the concept, since the 100% benchmark is arbitrary/unknown - it would be better to express it on an absolute scale rather than as a percentage.
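To spell out the ambiguity, here are the two readings side by side (the numbers are invented for illustration):

```python
# Two readings of "a 5% increase" in a charity's impact; numbers are invented.
benchmark = 1.00   # impact per £ of the most cost-effective charity ("100%")
baseline  = 0.01   # this charity's impact per £ before the trustee's improvement

multiplicative  = baseline * 1.05              # "the shelter saves 5% more kittens"
benchmark_shift = baseline + 0.05 * benchmark  # jumps from 1% to 6% of the benchmark

print(f"{multiplicative:.4f} vs {benchmark_shift:.4f}")  # 0.0105 vs 0.0600 - nearly 6x apart
```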

Comment by ishaan on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-21T23:32:44.550Z · EA · GW

brainstorming / regurgitating some random additional ideas -

Goodhart's law - a charity may from the outset design itself, or modify itself, around effective altruist metrics, thereby pandering to the biases of the metrics and succeeding on them despite being less good than a charity which scored well on the same metrics with no prior knowledge of them. (Think of the difference between someone who aced a standardized test through intentional practice and "teaching to the test" vs. someone who aced it with no prior exposure to standardized tests - the latter person may possess more of the quality the test is designed to measure.) This is related to the "influencing charities" issue, but focuses on the potential for defeating the metric itself, rather than the direct effects of the influence.

Counterfactuals of donations (other than the matching thing) - a highly cost-effective charity which can only pull from an effective altruist donor pool might have less impact than a slightly less cost-effective charity which successfully redirects donations from people who wouldn't otherwise have donated to a cost-effective charity (this is more of an issue for the person who controls talent, direction, and other factors than for the person who controls money).

Model inconsistency - Two very different interventions will naturally be evaluated by two very different models, and some models may inherently be harsher or more lenient on the intervention than others. This will be true even if all the models involved are as good and certain as they can realistically be.

Regression to the mean - The expected value of standout candidates will generally regress towards the mean of the pool from which they are drawn, since at least some of the factors which caused them to rise to the top will be temporary (including legitimate factors that have nothing to do with mistaken evaluations).
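For the regression-to-the-mean point, a toy simulation may help: pick the top performer out of a pool using a noisy evaluation, and its true quality will typically sit well below its measured score. All numbers here are invented:

```python
import random

# Toy model: 1000 charities with true cost-effectiveness, evaluated with noise.
random.seed(0)
true_values = [random.gauss(0, 1) for _ in range(1000)]
measured    = [v + random.gauss(0, 1) for v in true_values]  # evaluation noise

best = max(range(1000), key=lambda i: measured[i])  # the "standout" candidate
print(f"measured: {measured[best]:.2f}  true: {true_values[best]:.2f}")
# Part of what made the standout stand out was noise, which won't persist -
# so its expected true value regresses towards the pool mean.
```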


Comment by ishaan on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-19T05:08:22.452Z · EA · GW

I think this description generally falls in line with what I've experienced and heard secondhand, and is broadly true. However, there are some differences between my impression and yours. (But it sounds like you've collected more accounts, more systematically - and I've only gone up to the M.A. level in grad school - so I'm leaning towards trusting your aggregate.)

Peer review is a disaster

I think we can get at better ways than peer review, but also, don't forget that people will sort of inevitably have Feelings about getting peer reviewed, especially if the review is unfavorable, and this might bias them towards saying it's unfair or broken. I wouldn't expect peer review to be particularly better or worse than what you'd expect from what is basically a group of people with some knowledge of a topic and some personal investment in the matter having a discussion - it can certainly be a space for pettiness, both from the reviewer and from the reviewed, as well as a space for legitimate discussion.

PIs mostly manage people -- all the real work is done by grad students and postdocs

I think this is sometimes true, but I would not consider it the default state of affairs. I think some, but not all, grad students and postdocs can conceive of and execute a good project from start to finish (more so at top universities). However, I think most successful PIs are constantly running projects of their own as well. Moreover, a lot of grad students and postdocs are running projects that either the PI came up with, or that are independently created but ultimately small permutations within a larger framework the PI came up with. I do think it sometimes happens that some people believe they are doing all the work, sort of forgetting the degree of training involved and underestimating how much the PI is doing behind the scenes.

management and fundraising (and endless administrative responsibilities bestowed on any tenure-track professor) and can 100% focus on doing science and publishing papers, while getting mentoring from your senior PI and while being helped by all the infrastructure established labs

My impression was actually that grant writing, management, and setting up infrastructure are the bulk of Doing Science, properly understood. (Whereas I get the impression that this write-up frames them as a sideshow to the Real Work of Doing Science.) With "fundraising", the writer of the grant is the one who has to engage in the big-picture thinking, make the pitch, and plan the details to a level of rigor sufficient to satisfy an external body. With "infrastructure", one must set up the lab protocols so that they're actually measuring what they are meant to measure. It's easy to do this wrong - and worse, to do it wrong without realizing it, and have those mistakes make it all the way into a nonsensical and wrong publication. I think there is a level of fairly deep expertise involved in setting up protocols. And "management" in this context also involves a lot of teaching people skills and concepts, including sometimes a fair bit of hand-holding during the process of publishing papers (students' first drafts aren't always great, even if the student is very good).

People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor's experiments and is only possible with a perfect undergrad GPA, not realizing that neither of these is true if you're truly capable

Very true in one sense - I agree that academia is very forgiving about credentials and GPA relative to other forms of post-graduate education, and people are definitely excited and responsive to being cold-contacted by motivated students who will do their own projects. However, keep in mind that if you're planning to work on whatever you want, rather than your adviser's experiments, you will have more trouble fully utilizing the adviser's management, infrastructure, and expertise - and, to a lesser extent, grants.

For a unique and individual project, you might have to build some of your infrastructure on your own. This means things may take much longer and are more likely not to work the first few times - all of which is a wonderful learning experience, but it does not always align with the incentive of publishing papers and graduating quickly. I think some fields (especially the ones closer to math) have the sort of "pure researcher" track you have in mind, but it's rare in the social and biological sciences, in part because the most needed people are in fact those with scientific expertise who can train and manage a team and build infrastructure/protocols, as well as fundraise and set an agenda - I think it would be tough to realistically delegate this to anyone who doesn't know the science.

(But - again, this is only my impression from doing a master's and from conversations I've had with other people. Getting a sense of a whole field isn't really easy, and I imagine different regions and so on are very different.)

Comment by ishaan on 'Longtermism' · 2019-08-19T03:34:22.142Z · EA · GW

I think it's worth pointing out that "longtermism" as minimally defined here is not pointing at the same concept that "people interested in x-risk reduction" was probably pointing at. I think the word that most accurately captures what it was pointing at is "futurism" (examples [1], [2]).

This could be a feature or a bug, depending on use case.

  • It could be a feature if you want a word that captures a moral underpinning common to many futurists' intuitions while, as you said, remaining "compatible with any empirical view about the best way of improving the long-run future", or that forms a coalition among people with diverse views about the best ways to improve the long-run future.
  • It could be a bug if people started informally using "longtermism" interchangeably with "far futurism", especially if it created a motte-and-bailey style of argument in which the easily defensible minimal-definition claim that "future people matter equally" was used to respond to skepticism about claims that any specific category of efforts aiming to influence the far future is necessarily more impactful.

If you want to retain the feature of being "compatible with any empirical view about the best way of improving the long-run future", you might prefer the no-definition approach, because criterion ii is not philosophical, but an empirical view about what society currently wrongly privileges.

From the perspective of addressing the "bug" aspect, however, I think criteria ii and iii are good calls. They make some progress in narrowing down who is a "longtermist", and they specify that it is ultimately a call to a specific action (so, e.g., someone who thinks influencing the future would be awesome in theory but is intractable in practice can fairly be said not to meet criterion iii). In general, I think that in practice people are going to use "longtermist" and "far futurist" interchangeably regardless of what definition is laid out at this point. I therefore favor the second approach, with a minimal definition, as it gives a nod to the fact that longtermism is not just a moral stance but also advocates some sort of practical response.


Comment by ishaan on How do you, personally, experience "EA motivation"? · 2019-08-16T21:18:17.015Z · EA · GW

The way I feel when the concept of a person in the abstract is invoked feels like a fainter version of the love I would feel towards a partner, a parent, a sibling, a child, a close friend, and towards myself. The feeling drives me to act in the direction of making them happy, growing their capabilities, furthering their ambitions, fulfilling their values, and so on. In addition to feeling happy when my loved ones are happy, there is also an element of pride when my loved ones grow or accomplish something, as well as fulfillment when our shared values are achieved. When engaging with the concept of abstract people, I can very easily imagine real people - each with a rich life history, unique ways of thinking, a web of connection, and so on...people who I would love if I were to know them. This motivates me to work hard to provide for their well being and growth, to undergo risks and dangers and sacrifices to protect them from harm, to empower and facilitate them in their undertakings, and to secure a future in which they may flourish - in the same ordinary sense that I imagine many other people do for themselves, their children and families, their tribes and nations, all people, all beings, and so on. I feel a sense of being united with all people as we work together to steer the universe towards our shared purpose.


You've italicized "effectively" as part of the question, but I don't think I feel any real distinction between "wanting to help people" and "wanting to help people effectively" - when I'm doing a task, doing it effectively is rather straightforwardly better than doing it ineffectively. "Effective altruism" does imply a level of impartiality regarding who benefits which I don't possess (since I care about myself, my friends, my family, and so on more than strangers), but it is otherwise the same. Even if I were only to help people who I directly knew and personally loved in a non-abstract sense, I would still seek to do so effectively.


Comment by ishaan on What posts you are planning on writing? · 2019-07-26T07:57:32.301Z · EA · GW

That very EA survey data, combined with Florida et al.'s The Rise of the Megaregion data characterizing the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence and population size don't seem to matter much.