Update On Six New Charities Incubated By Charity Entrepreneurship 2020-02-27T05:20:18.346Z · score: 50 (23 votes)


Comment by ishaan on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-07T00:29:28.890Z · score: 4 (3 votes) · EA · GW

Idk, but in theory they shouldn't, as pitch is sensed by the hairs on the section of the cochlea that resonates at the relevant frequency.

Comment by ishaan on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T19:34:23.851Z · score: 5 (3 votes) · EA · GW

A forum resource on ToC in research which I found insightful: Are you working on a research agenda? A guide to increasing the impact of your research by involving decision-makers

Should they?

Yes, but a ToC doesn't improve impact in isolation (you can imagine a perfectly good ToC for an intervention which doesn't do much). Also, if you draw a nice diagram but it doesn't actually inform any of your decisions or change your behavior in any way, then it hasn't really done anything. A ToC is ideally combined with cost-benefit analyses, the comparison of multiple avenues of action, etc., and it should pay you back in the form of concrete, informative actions - e.g. consulting stakeholders to check your research questions, and generally creating checkpoints at which you try to get measurements, indicators, and opinions from relevant people.

For more foundational and theoretical questions where the direct impact isn't obvious, there may be a higher risk of drawing a diagram which doesn't do anything. I think there are ways to avoid this: understand the relevance of your research to other (ideally more practical) researchers you've spoken to about it, such as through a peer review process; make a conceptual map of where your work fits in among other ideas which then lead to impact; and try to get as close to the practical level as you realistically can. If it's really hard to tie your work to the practical level, that is sometimes a sign that you might need to re-evaluate the activity.

Do they?

Back in academia, I didn't even know what a "theory of change" was, so I think not. But one is frequently asked to state the practical and theoretical value of one's research, and the peer review and grant writing process implicitly incorporates elements of stakeholder relevance. However, as an academic, if you fail to make your own analyses separately from this larger infrastructure, you may end up following institutional priorities (of grant makers, of academic journals, etc.) which differ from "doing the most good" as you conceptualize it.

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T02:24:33.503Z · score: 3 (2 votes) · EA · GW

The tricky part of social enterprise, from my perspective, is that high impact activities are hard to find, and I figure they would be even harder to find under the additional constraint that they must be self-sustaining. Which is not to say that you might not find one (see here and here), just that finding an idea that works is arguably the trickiest part.

for-profit social enterprises may be more sustainable because of a lack of reliance on grants that may not materialise;

This is true, but keep in mind that impact via social enterprise may be "free" in terms of funding (so very cost-effective), but it comes with opportunity costs in terms of your time. When you generate impact via social enterprise, you are essentially your own funder. Therefore, for a social enterprise to beat your earning-to-give baseline, its net impact must exceed the good your donations to a GiveWell top charity would have done had you instead taken a high-earning path. (This is of course also true for non-profit and other direct work paths.) Basically, social enterprises aren't "free" (since your time isn't free), so it's a question of finding the right idea and then also deciding whether the restrictions inherent in trying to be self-sustaining are easier than the restrictions (and funding counterfactuals) inherent in getting external funding.
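The baseline comparison above can be made concrete with toy numbers. Everything in this sketch is hypothetical (the salary donated, the cost per life-equivalent, the enterprise's impact are all invented for illustration, not figures from the comment); the point is only the shape of the comparison.

```python
# All numbers below are hypothetical, purely to illustrate the baseline logic.
etg_donation = 50_000        # assumed annual donation on a high-earning path ($)
cost_per_life_equiv = 5_000  # assumed cost per life-equivalent at a top charity ($)

# The earning-to-give baseline: what your counterfactual donations would buy.
etg_baseline = etg_donation / cost_per_life_equiv  # life-equivalents per year

# Assumed annual impact of the social enterprise, in the same units.
enterprise_impact = 8

# The enterprise beats the baseline only if its net impact exceeds
# what the counterfactual donations would have achieved.
beats_baseline = enterprise_impact > etg_baseline
print(etg_baseline, beats_baseline)
```

With these toy numbers the enterprise falls short of the baseline, which is the comment's point: "free" funding does not make the enterprise free, because your time carries the counterfactual.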

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T03:24:12.062Z · score: 5 (3 votes) · EA · GW
However, I'm sceptical of charity entrepreneurship's ability to achieve systemic change - I'd probably (correct me if I'm wrong) need a graduate degree in economics to tackle the global economic system.

It might plausibly be helpful to hire staff who have graduate degrees in economics, but I think you would not necessarily need a graduate degree in economics yourself in order to start an organization focused on improving economic policy. Of course, it's hard to say for sure until it's tried - but there's a lot that goes into running an organization, and it takes many different skills and types of people to make it come together. Domain expertise is only one part of it. A lot of great charities (e.g. GiveWell, AMF) were started by people who didn't enter with domain expertise or related degrees. (None of which is to say that economics isn't a strong option for a variety of paths - only that you shouldn't put the path of starting an organization in the "I need a degree first" box.)

(As for my opinion more generally: I do think social entrepreneurship would under-perform relative to pure EtG (if you give to the right place), and also under-perform relative to focused non-profit or policy work (if you work on the right thing), because it has to simultaneously turn a profit and achieve impact, which really limits the flexibility to work on the highest impact things. But in every case it primarily depends on what specifically you're working on.)

Comment by ishaan on Where is it most effective to found a charity? · 2020-07-06T16:49:45.036Z · score: 4 (3 votes) · EA · GW

I've never done this myself, but here are bits of info I've absorbed through osmosis by working with people who have.
-Budget about 50-100 hours of work for registration. Not sure which countries require more work in this regard.
-If you're working with a lot of international partners, some countries have processes that are more recognized than others. The most internationally well-known registration type is America's 501(c)(3) - which means that even if you were to work somewhere like India, for example, people are accustomed to working with 501(c)(3)s and know the system. This is less important if you aren't working with partners.
-If you are planning to get donations mostly from individuals, consider where those individuals are likely to live and what the laws regarding tax deductibility are. Large grantmakers are more likely to be location agnostic.
-You don't need to live where you register, but if you want to sponsor a work visa to bring an employee into a location, you will generally need to be registered in that location.

If you're interested in starting a charity, you should consider auditing Charity Entrepreneurship's incubation program and applying for the full course next year. The audit course will have information about how to pick locations for the actual intervention (which usually matters more for your impact than where you register). The full course for admitted students additionally provides guidance and support for operations/registration type stuff.

Comment by ishaan on EA Forum feature suggestion thread · 2020-06-28T13:02:17.988Z · score: 1 (1 votes) · EA · GW

I posted some things in this comment, and then realized the feature I wanted already existed and I just hadn't noticed it - which brings to mind another issue: how come one can retract or overwrite a comment, but not delete it?

Comment by ishaan on Dignity as alternative EA priority - request for feedback · 2020-06-26T14:00:48.236Z · score: 2 (2 votes) · EA · GW
What evidence would you value to help resolve what weight an EA should place on dignity?

Many EAs tend to think that most interventions fail, so if you can't measure how well something works, chances are high that it doesn't work at all. To convince people who think that way, it helps to have a strong justification for incorporating a harder-to-measure metric over well-established, easier-to-measure metrics such as mortality and morbidity.

In the post on happiness by Michael that you linked, you'll notice that he has a section comparing subjective well-being to traditional health metrics. A case is made that improving health does not necessarily improve happiness. This is important, because death and disability are easier to measure than things like happiness and dignity, so if the easier metric is a good proxy it should be used. If it turned out that the best way to improve dignity were, e.g., to prevent disability, then in light of how much easier disability prevention is to measure, it would not be productive to switch focus. (Well, maybe. You might also take a close association between metrics as a positive sign that you're measuring something real.)

To get the EA community excited about a new metric, if it seems realistically possible, I'd recommend following Michael's example in this respect. After establishing a metric for dignity, try to determine how well existing top GiveWell interventions do on it, see what the relationship is with other metrics, and then see if there are any interventions that plausibly do better.

I think this could plausibly be done. I think there's a lot of people who favor donations to GiveDirectly because of the dignity/autonomy angle (cash performs well on quite a few metrics and perspectives, of course) - I wouldn't be surprised if there are donors who would be interested in whether you can do better than cash from that perspective.

Comment by ishaan on EA considerations regarding increasing political polarization · 2020-06-25T14:42:10.619Z · score: 19 (8 votes) · EA · GW
Why effective altruists should care

Opposing view: I don't think these are real concerns. The Future of Animal Consciousness Research citation boils down to "what if research in animal cognition is one day suppressed due to being labeled speciesist" - that's not a realistic worry. The Vox thinkpiece emphasizes that we are in fact efficiently saving lives - I see no critiques there that we haven't also internally voiced to ourselves as a community. I don't think it's realistic to expect coverage of us not to include these critiques, regardless of political climate. According to a Google search, the only folks even discussing that paper are long-termist EAs. I don't think AI alignment is any more politically polarized, except as a special case of "vague resentment towards Silicon Valley elites" in general.

Sensible people on every part of the political spectrum will agree that animal and human EA interventions are good, or at least neutral. The most controversial it gets is that people will disagree with the implication that they are the best ways to do good... and why not? We internally often disagree on that too. Most people won't understand AI alignment well enough to have an opinion beyond vague ideas about tech and tech people. Polarization is occurring, but none of this constitutes evidence regarding political polarization's potential effect on EA.

Comment by ishaan on EA and tackling racism · 2020-06-16T20:09:14.154Z · score: 2 (4 votes) · EA · GW

a) Well, I think the "most work is low-quality aspect" is true, but also fully-general to almost everything (even EA). Engagement requires doing that filtering process.

b) I think seeking not to be "divisive" here isn't possible - issues of inequality on global scales and ethnic tension on local scales are in part caused by some groups of humans using violence to lock another group of humans out of access to resources. Even for me to point that out is inherently divisive. Those who feel aligned with the higher-power group will tend to feel defensive and will wish not to discuss the topic, while those who feel aligned with lower-power groups as well as those who have fully internalized that all people matter equally will tend to feel resentful about the state of affairs and will keep bringing up the topic. The process of mind changing is slow, but I think if one tries to let go of in-group biases (especially, recognizing that the biases exist) and internalizes that everyone matters equally, one will tend to shift in attitude.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:59:58.533Z · score: 3 (3 votes) · EA · GW
I've seen a lot of discussion of criminal justice reform

Well, I do think discussion of it is good, but if you're referring to resources directed to the cause - it's not that I want EAs to redirect resources away from low-income countries to instead solve disparities in high-income countries, and I don't necessarily consider this related to the self-criticism-as-a-community issue. I haven't really looked into this issue, but on prior intuition I'd be surprised if American criminal justice reform compared very favorably in cost-effectiveness to, e.g., GiveWell top charities, reforms in low-income countries, or reforms regarding other issues. (Of course, prior intuitions aren't a good way to make these judgements, so right now that's just a "strong opinion, weakly held".)

My stance is basically no on redirecting resources away from basic interventions in low income countries and towards other stuff, but yes on advocating that each individual tries to become more self-reflective and knowledgeable about these issues.

I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias

I agree, that's not an example of bias. This is one of those situations where a word gets too big to be useful - "supportive of capitalism" has come to stand for a uselessly large range of concepts. The same person might be critical about private property, or think it has sinister/exploitative roots, and also support sensible growth focused economic policies which improve outcomes via market forces.

I think the fact that EA has common sense appeal to a wide variety of people with various ideas is a great feature. If you are actually focused on doing the most good you will start becoming less abstractly ideological and more practical and I think that is the right way to be. (Although I think a lot of EAs unfortunately stay abstract and end up supporting anything that's labeled "EA", which is also wrong).

My main point is that if someone is serious about doing the most good, and is working on a topic that requires a broad knowledge base, then a reasonable understanding of the structural roots of inequality (including how gender, race, class, and geopolitics play into it) should be one part of their practical toolkit. In my personal opinion, while a good understanding of this sort of thing generally does lead to a certain political outlook, it's really more about adding things to your conceptual toolbox than it is about which -ism you rally around.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:51:34.269Z · score: 11 (9 votes) · EA · GW
What are some of the biases you're thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?

The longer answer to this question: I am not sure how to give a productive answer to this question. In the classic "cognitive bias" literature, people tend to immediately accept that the biases exist once they learn about them (…as long as you don't point them out right at the moment they are engaged in them). That is not the case for these issues.

I had to think carefully about how to answer because (when speaking to the aforementioned "randomly selected people who went to prestigious universities", as well as when speaking to EAs) such issues can be controversial and trigger defensiveness. These topics are political and cannot be de-politicized, I don't think there is any bias I can simply state that isn't going to be upvoted by those who agree and dismissed as a controversial political opinion by those who don't already agree, which isn't helpful.

It's analogous to if you walked into a random town hall and proclaimed "There's a lot of anthropomorphic bias going on in this community, for example look at all the religiosity" or "There's a lot of species-ism going on in this community, look at all the meat eating". You would not necessarily make any progress on getting people to understand. The only people who would understand are those who know exactly what you mean and already agree with you. In some circles, the level of understanding would be such that people would get it. In others, such statements would produce minor defensiveness and hostility. The level of "understanding" vs "defensiveness and hostility" in the EA community regarding these issues is similar to that of randomly selected prestigious university students (that is, much more understanding than the population average, but less than ideal). As with "anthropomorphic bias" and as with "speciesism", there are some communities where certain concepts are implicitly understood by most people and need no explanation, and some communities where they aren't. It comes down to what someone's point of view is.

Acquiring an accurate point of view, and moving a community towards an accurate point of view, is a long process of truth seeking. It is a process of un-learning a lot of things that you very implicitly hold true. It wouldn't work to just list biases. If I started listing out things like (unfortunately poorly named) "privilege-blindness" and (unfortunately poorly named) "white-fragility", I doubt it would have any positive effect other than to make people who already agree nod to themselves, while other people roll their eyes, and still others google the terms and then roll their eyes. Criticizing things such that something actually goes through is pretty hard.

The productive process involves talking to individual people, hearing their stories, having first-hand exposure to things, reading a variety of writings on the topic and evaluating them. I think a lot of people think of these issues as "identity political topics" or "topics that affect those less fortunate" or "poorly formed arguments to be dismissed". I think progress occurs when we frame-shift towards thinking of them as "practical every day issues that affect our lives", and "how can I better articulate these real issues to myself and others" and "these issues are important factors in generating global inequality and suffering, an issue which affects us all".

Comment by ishaan on EA and tackling racism · 2020-06-14T19:49:49.161Z · score: 3 (5 votes) · EA · GW
What are some of the biases you're thinking of here?

This is a tough question to answer properly, both because it is complicated and because I think not everyone will like the answer. There is a short answer and a long answer.

Here is the short answer. I'll put the long answer in a different comment.

Refer to Sanjay's statement above

There are some who would argue that you can't tackle such a structural issue without looking at yourselves too, and understanding your own perspectives, biases and privileges...But I worried that tackling the topic of racism without even mentioning the risk that this might be a problem risked seeming over-confident.

At the time of writing, this is sitting at negative-5 karma. Maybe it won't stay there, but this innocuous comment was sufficiently controversial that it's there now. Why is that? Is anything written there wrong? I think it's a very mild comment pointing out an obviously true fact - that a community should also be self-reflective and self-critical when discussing structural racism. Normally EAs love self-critical, skeptical behavior. What is different here? Even people who believe that "all people matter equally" and "racism is bad" are still very resistant to having self-critical discussions about it.

I think that understanding the psychology of defensiveness surrounding the response to comments such as this one is the key to understanding the sorts of biases I'm talking about here. (And to be clear - I don't think this push back against this line of criticism is specific to the EA community, I think the EA community is responding as any demographically similar group would...meaning, this is general civilizational inadequacy at work, not something about EA in particular)

Comment by ishaan on EA and tackling racism · 2020-06-10T20:27:07.521Z · score: 22 (14 votes) · EA · GW

I broadly agree, but in my view the important part to emphasize is what you said on the final thoughts (about seeking to ask more questions about this to ourselves and as a community) and less on intervention recommendations.

Is EA really all about taking every question and twisting it back to malaria nets ...?... we want is to tackle systemic racism at a national level (e.g. in the US, or the UK).

I bite this bullet. I think you do ultimately need to circle back to the malaria nets (especially if you are talking more about directing money than about directing labor). I say this as someone who considers myself as much a part of the social justice movement as I do part of the EA movement. Realistically, I don't think it's plausible that tackling stuff in high-income countries is going to be more morally important than malaria-net-type activities, at least when it comes to fungible resources such as donations (the picture gets more complex with respect to direct work, of course). It's good to think about what the cost-effective ways to improve matters in high-income countries might be, but realistically I bet once you start crunching numbers you will probably find that malaria-net-type activities should still be the top priority by a wide margin if you are dealing with fungible resources. I think the logical conclusions of anti-racist/anti-colonialist thought converge upon this as well. In my view, the things that social justice activists are fighting for ultimately do come down to the basics of food, shelter, and medical care, and the scale of that fight has always been global, even if the more visible portion generally plays out in one's more local circles.

However, I still think putting thought into how one would design such interventions should be encouraged, because:

our doubts about the malign influence of institutional prejudice...should reach ourselves as well.

I agree with this, and would encourage more emphasis on this. The EA community (especially the rationality/lesswrong part of the community) puts a lot of effort into getting rid of cognitive biases. But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society built upon exploitation, I don't really think the EA community does better than any other randomly selected group of people from a similar demographic (let's say, randomly selected people who went to prestigious universities). And that's kind of weird. We're a group of people who are trying to achieve social impact. We're often people who wield considerable resources and have to work with power structures all the time. It's a bit concerning that the community's level of knowledge of the bodies of work that deal with these issues is just average. I don't really mean this as a call to action (realistically, given the low current state of awareness, it seems probable that attempting action would result in misguided or heavy-handed solutions). What I do suggest is this: a lot of you spend some of your spare time reading and thinking about cognitive biases, trying to better understand yourself and the world, and consider this a worthwhile activity. I think it would be worth applying a similar spirit to spending time really understanding these issues as well.

Comment by ishaan on Effective Animal Advocacy Resources · 2020-05-25T04:33:25.479Z · score: 4 (3 votes) · EA · GW

Super helpful, I'm about to cite this in the CE curriculum :)

Comment by ishaan on Why I'm Not Vegan · 2020-04-10T17:40:04.006Z · score: 16 (7 votes) · EA · GW
I get much more than $0.43 of enjoyment out of a year's worth of eating animal products

I think we would likely not justify a moral offset for harming humans at $100/year (by the numbers you posted), or for eating children at $20/pound ($100/year × 15 years / 75 pounds). This isn't due to sentimentality, deontology, taboo, or biting the bullet - I think a committed consequentialist, one grounded in practicality, would agree that no good consequences would likely come from allowing that sort of thing, and I think that this probably logically applies to meat.
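The $20/pound figure follows directly from the numbers quoted above - the $100/year offset from the post being replied to, with 15 years and 75 pounds as the comment's stated assumptions:

```python
# Reproducing the comment's arithmetic (figures taken from the thread).
offset_per_human_year = 100  # $ per human-year, the number quoted from the post
years = 15                   # assumed years of life affected
pounds = 75                  # assumed body weight in pounds

price_per_pound = offset_per_human_year * years / pounds
print(price_per_pound)  # 20.0
```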

I think overall it's better to look first at the direct harm vs direct benefit, and how much you weigh the changes to your own experience against the suffering caused. The offset aspect is not unimportant, but I think it's a bit misleading when not applied evenly in the other direction.

I am sympathetic to morally weighing different animals orders of magnitude differently. We have to do that in order to decide how to prioritize between different interventions.

That said, I don't think human moral instincts for these sorts of cross-species trolley problems are well equipped for numbers bigger than 3-5. Your moral instincts can (I would say, accurately) inform you that you would rather avert harm to a person than to 5 chickens, but when you get into the 1000s you're pretty firmly in torture vs dust specks territory and should not necessarily just trust your instincts. That doesn't mean orders of magnitude differences are wrong, but it does mean they're potentially subject to a lot of bias and inconsistency if not accompanied by some methodology.

Comment by ishaan on Help in choosing good charities in specific domains · 2020-02-20T19:07:53.955Z · score: 3 (3 votes) · EA · GW

Charity Entrepreneurship is incubating new family planning and animal welfare organizations, which will aim to operate via principles of effective altruism - potentially relevant to your interests.

Comment by ishaan on Who should give sperm/eggs? · 2020-02-12T23:37:53.893Z · score: 4 (3 votes) · EA · GW

Since you are asking "who" should do it (rather than whether more or fewer people in general should do it, which seems the more relevant question, since it would carry the bulk of the effect): I would wish to replace any anonymous donors with people who are willing to take a degree of responsibility for, and engagement with, the resulting child and their feelings about it. Opinion polls of donor-conceived people have made me think there's a reasonable chance they experience negative emotions about the whole thing at non-negligible rates, and it is possible that this might be mitigated by having a social relationship with the donor.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2020-01-17T06:44:51.687Z · score: 5 (3 votes) · EA · GW

Spend some time brainstorming, and compare multiple alternative courses of action and potential hurdles before embarking on one. Some specific habits:
-Consider using a spreadsheet to augment your working memory when you evaluate actions by various criteria.
-Get a sense of expected value per unit of time on a given task, so you can decide how long it's worth spending on it.
-Enforce this via time capping / time boxing; if you are working much longer on a task than you estimated, re-evaluate what you are doing.
-Time-track which tasks you spend your working hours on, to become more aware of time in general.
Personally, I don't think I fully appreciated how valuable time was, and how much of it I was sometimes wasting unintentionally, before tracking it (although I could see some people finding this stressful).

Of course, this is all sort of easier said than done, haha. I think to some degree watching other people actually do the things one is supposed to do helps reinforce the habit.

Comment by ishaan on Growth and the case against randomista development · 2020-01-17T06:28:24.021Z · score: 3 (2 votes) · EA · GW

Is there any discussion of how much it might cost to change a given economic policy, or of the limiting factor that has kept it from changing thus far?

(I think this is also the big question with health policy)

Comment by ishaan on Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? · 2020-01-13T00:21:50.493Z · score: 3 (3 votes) · EA · GW

"Rejecting" would be a bit unusual, but of course you should honestly advise a well qualified candidate if you think their other career option is higher impact. I think it would be ideal if everyone gives others their honest advice about how to do the most good, roughly regardless of circumstance.

I've only seen a small slice of things, but my general sense is that people in the EA community do in fact live up to this ideal, regularly turning down and redirecting talent as well as funding and other resources towards the thing that they believe does the most good.

Also, although it might ultimately add up to the same thing, I think it brings more clarity to think along the lines of "counterfactual impact" (estimating how much unilateral impact an individual's alternative career choices have) rather than "comparative advantage" which is difficult to assess without detailed awareness of the multiple other actors you are comparing to.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2019-12-16T17:14:34.956Z · score: 13 (8 votes) · EA · GW

I went through the program, was quite impressed with what I saw there, and decided to work at Charity Entrepreneurship.

Before attending, I was considering academia, earning to give, direct work in the global poverty space, and a few other more offbeat options as career paths. Afterward, I'd estimate that attending significantly increased the expected value of my own career in terms of impact (perhaps by 3x-12x or more), thanks to

1) the direct impact of CE itself and associated organizations. In terms of what I've directly witnessed, there's a formidable level of productive work occurring at this organization. My own level of raw productivity has risen quite a bit by being in proximity and picking up good habits. I'm pretty convinced that this productivity translates into impact (although on that count, you can evaluate the key assumptions and claims yourself by looking at the cost-effectiveness models and historical track record).

2) practical meta-skills I've picked up regarding how to think about personal impact. Not only did I change my mind and update on quite a few important considerations, but there were also quite a few things that I didn't even realize were considerations before attending the program. I think my decision making going forward will be better now.

3) connections and a network of other effective altruists, and general knowledge about the effective altruism movement. Prior to attending the program, my engagement with the community was on a rather abstract level. Now, if I wanted to harness the EA community to accomplish a concrete action in the global poverty or animal space, I'd know roughly what to do, who to talk to, and how to get started.

4) the career capital from program related activities.

Also, I had a good time. If you enjoy skill building and like interacting with other effective altruists, the program is quite fun.

Happy to answer any questions.

Comment by ishaan on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-11-20T13:11:34.932Z · score: 11 (9 votes) · EA · GW

I'm sure there's a better document somewhere addressing these, but I'll just quickly say that people tend to regret starting smoking tobacco and often want to stop, tobacco smoking reduces quality of life, and that smokers often support raising tobacco taxes if the money goes to addressing the (very expensive!) health problems caused by smoking (e.g. this sample, and I don't think this pattern is unique). So I think bringing tobacco taxes in line with recommendations is good under most moral systems, even those which strongly prioritize autonomy - this is a situation where smokers seem to be straightforwardly stating that they'd rather not behave this way.

Eric Garner died because the police approached him on suspicion of selling illegal cigarettes and then killed him - I don't think that's realistically attributable to tobacco taxation.

Comment by ishaan on List of EA-related email newsletters · 2019-10-10T08:42:43.054Z · score: 4 (3 votes) · EA · GW

For global health, don't forget Givewell's newsletter!

For meta, Charity Entrepreneurship has one as well (scroll to the middle of the page for the newsletter)

Comment by ishaan on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-18T19:00:29.858Z · score: 20 (9 votes) · EA · GW
Do you have any opinions that you would be reluctant to express in front of a group of your peers? If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Not necessarily! You might just be less averse to disagreement. Or perhaps you (rightly or wrongly) feel less personally vulnerable to the potential consequences of stating unpopular opinions and criticism.

Or, maybe you did quite a lot of independent thinking that differed dramatically from what you were "told", and then gravitated towards one or more social circles that happen to have greater tolerance for the things you believe, which perhaps one or more of your communities of origin did not.

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-18T17:32:43.036Z · score: 3 (3 votes) · EA · GW

I agree that more people trying to do cost effectiveness analyses is good! I regret that the tone seemed otherwise and will consider it more in the future. I engaged with it primarily because I too often wonder about how one might improve impact outside of impact-focused environments, and I generally find it an interesting direction to explore. I also applaud that you made the core claim clearly and boldly and I would like to see more of that as well - all models suffer these flaws to some degree and it's a great virtue to make clear claims that are designed such that any mistakes will be caught (as described here). Thanks for doing the piece and I hope you can use these comments to continue to create models of this and other courses of action :)

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-17T20:23:03.360Z · score: 14 (6 votes) · EA · GW

I think the biggest improvement would be correcting the fact that this model (accidentally, I think) assumes that improving any arbitrary high-budget charity by 5% is exactly as impactful as improving a Givewell-equivalent charity by 5%. Most charities' impact is an order of magnitude smaller.

You could solve this with a multiplier for the charity's impact at baseline.

If I understand correctly, you figure that if you become a trustee of a charity with a £419668/year budget and improve its cost-effectiveness by 5%, you can divide that gain by 42 hours a year to get £419668*5%/42 hours = £500/hour as the value of your donated time. (A style tip - it would be helpful to put the key equation describing roughly what you've done in the description, to make it all legible without having to go into the spreadsheet.)

I think it is fair to say that, were you to successfully perform this feat, you would indeed have done something roughly as impactful as providing £500/hour of value to the charity you are trustee-ing for. So, if you improved a Givewell-top-charity-equivalent's cost-effectiveness by 5% for a year, then maybe you could fairly take 5% of that charity's yearly budget and divide it by your hours for that year, as you've done, to calculate your Givewell-top-charity-equivalent impact in terms of how it would compare to donated dollars.

But if you improve by 5% a charity with a £419668/yr budget which is only 1% as cost-effective as a Givewell top charity, then your hourly impact is 1%*£419668*5%/42 hours = £5/hour of Givewell-top-charity-equivalent impact - you'd be better served working a bit extra each hour and donating £5 to Givewell.
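To make the comparison concrete, here's a minimal sketch of the adjusted model. (The function name and the baseline-effectiveness parameter are my additions for illustration, not part of the original spreadsheet.)

```python
def hourly_impact(budget, improvement, hours, effectiveness=1.0):
    """Value of donated trustee time, in Givewell-top-charity-equivalent
    pounds per hour.

    budget        : charity's annual budget (GBP)
    improvement   : fractional improvement in cost-effectiveness (e.g. 0.05)
    hours         : trustee hours per year
    effectiveness : baseline cost-effectiveness relative to a Givewell top
                    charity (1.0 = equally effective; this is the proposed
                    extra multiplier)
    """
    return budget * improvement * effectiveness / hours

# The post's figures: £419,668 budget, 5% improvement, 42 hours/year
print(round(hourly_impact(419668, 0.05, 42)))        # ~£500/hour
# Same charity, but only 1% as effective as a Givewell top charity
print(round(hourly_impact(419668, 0.05, 42, 0.01)))  # ~£5/hour
```

The single `effectiveness` multiplier is the whole adjustment: without it, the model implicitly sets it to 1 for every charity.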

Even after these adjustments, I don't place much credence in this model, as I'm skeptical of the structure, but you did make those assumptions explicitly, which is good. If you think the effect takes ~42 hours/year, then this hypothesis is potentially cheap to just test in practice, and you could then revise your model with more information. Have you joined any boards and tried this, and if so, how did it go?

edit - ah, you're using the term "5% increase" very differently.

Instead it assumes a 5% increase, perhaps from £0 of impact to 5% of the annual income or perhaps from 100% of annual income to 105%

So just to be clear, this implies that producing impact equal to 100% of your annual income would mean that you are the most cost-effective charity in the world (or whatever other benchmark you want to set at "100%"). Used in this sense, "5% increase" doesn't mean "the shelter saves 5% more kittens" but that the charity as a whole has gone from the long tail of negligible impact to being 1/20th as cost-effective as the most cost-effective charity in the world. This isn't how percentages are usually expressed, and it seems like a confusing way to express the concept, since the 100% benchmark is arbitrary/unknown - it would be better to express it on an absolute scale rather than as a percentage.

Comment by ishaan on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-21T23:32:44.550Z · score: 11 (7 votes) · EA · GW

brainstorming / regurgitating some random additional ideas -

Goodhart's law - a charity may from the outset design itself or modify itself around effective altruist metrics, thereby pandering to the biases of the metrics and succeeding on them despite being less Good than a charity which scored well on the same metrics with no prior knowledge of them. (Think of the difference between someone who aced a standardized test through intentional practice and "teaching to the test" vs. someone who aced it with no prior exposure to standardized tests - the latter person may possess more of the quality the test is designed to measure.) This is related to the "influencing charities" issue, but focuses on the potential for the metric itself to be defeated, rather than the direct effects of the influence.

Counterfactuals of donations (other than the matching thing)- a highly cost effective charity which can only pull from an effective altruist donor pool might have less impact than a slightly less cost effective charity which successfully redirects donations from people who wouldn't have donated to a cost effective charity (this is more of an issue for the person who controls talent, direction, and other factors, not the person who controls money).

Model inconsistency - Two very different interventions will naturally be evaluated by two very different models, and some models may inherently be harsher or more lenient on the intervention than others. This will be true even if all the models involved are as good and certain as they can realistically be.

Regression to the mean - The expected value of standout candidates will generally regress to the mean of the pool from which they are drawn, since at least some of the factors which caused them to rise to the top will be temporary (including legitimate factors that have nothing to do with mistaken evaluations)

Comment by ishaan on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-19T05:08:22.452Z · score: 8 (6 votes) · EA · GW

I think this description generally falls in line with what I've experienced and heard secondhand and is broadly true. However, there are some differences between my impression of it and yours. (But it sounds like you've collected more accounts, more systematically, and I've actually only gone up to the M.A. level in grad school, so I'm leaning towards trusting your aggregate)

Peer review is a disaster

I think we can get at better ways than peer review, but also, don't forget that people will sort of inevitably have Feelings about getting peer reviewed, especially if the review is unfavorable, and this might bias them to say that it's unfair or broken. I wouldn't expect peer review to be particularly better or worse than what you'd expect from what is basically a group of people with some knowledge of a topic and some personal investment in the matter having a discussion - it can certainly be a space for pettiness, both from the reviewer and from the reviewed, as well as a space for legitimate discussion.

PIs mostly manage people -- all the real work is done by grad students and postdocs

I think this is sometimes true, but I would not consider this a default state of affairs. I think some, but not all, grad students and post docs can conceive of and execute a good project from start to finish (more, in top universities). However, I think most successful PIs are constantly running projects of their own as well. Moreover, a lot of grad students and post docs are running projects that either the PI came up with, or independently created projects that are ultimately a small permutation within a larger framework that the PI came up with. I do think it sometimes happens that some people believe they are doing all the work and sort of forget the degree of training and underestimate how much the PI is behind the scenes.

management and fundraising (and endless administrative responsibilities bestowed on any tenure-track professor) and can 100% focus on doing science and publishing papers, while getting mentoring from your senior PI and while being helped by all the infrastructure established labs

My impression was actually that grant writing, management, and setting up infrastructure is the bulk of Doing Science, properly understood. (Whereas, I get the impression that this write up sort of frames it as some sort of side show to the Real Work of Doing Science). With "fundraising", the writer of the grant is the one who has to engage in the big picture thinking, make the pitch, and plan the details to a level of rigor sufficient to satisfy an external body. With "infrastructure", one must set up the lab protocols so that they're actually measuring what they are meant to. It's easy to do this wrong, and what's worse, it's easy to do this wrong and not even realize you are doing it wrong and have those mistakes make it all the way up to a nonsensical and wrong publication. I think there is a level of fairly deep expertise involved in setting up protocols. And "management" in this context also involves a lot of teaching people skills and concepts, including sometimes a fair bit of hand-holding during the process of publishing papers (students' first drafts aren't always great, even if the student is very good).

People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor's experiments and is only possible with perfect undergrad GPA, not realizing that neither of these are true if you're truly capable

Very true in one sense - I agree that academia is very forgiving about credentials and gpa relative to other forms of post-graduate education, and people are definitely excited and responsive to being cold contacted by motivated students who will do their own projects. However, keep in mind that if you're planning to work on whatever you want, rather than your adviser's experiments, you will have more trouble fully utilizing the adviser's management/infrastructure/expertise and to a lesser extent grants.

For a unique and individual project, you might have to build some of your infrastructure on your own. This means things may take much longer and are more likely not to work the first few times - all of which is a wonderful learning experience, but it does not always align with the incentive of publishing papers and graduating quickly. I think some fields (especially the ones closer to math) have the sort of "pure researcher" track you have in mind, but it's rare in the social and biological sciences, in part because the most needed people are in fact those with scientific expertise who can train and manage a team and build infrastructure/protocol, as well as fundraise and set an agenda - I think it would be tough to realistically delegate this to anyone who doesn't know the science.

(But - again, this is only my impression from doing a masters and from conversations I've had with other people. Getting a sense of a whole field isn't really easy and I imagine different regions and so on are very different.)

Comment by ishaan on 'Longtermism' · 2019-08-19T03:34:22.142Z · score: 15 (6 votes) · EA · GW

I think it's worth pointing out that "longtermism" as minimally defined here is not pointing to the same concept that "people interested in x-risk reduction" was probably pointing at. I think the word which most accurately captures what it was pointing at is generally called "futurism" (examples [1],[2]).

This could be a feature or a bug, depending on use case.

  • It could be a feature if you want a word to capture a moral underpinning common to many futurists' intuitions while, as you said, remaining "compatible with any empirical view about the best way of improving the long-run future", or to form a coalition among people with diverse views about the best ways to improve the long-run future.
  • It could be a bug if people started informally using "longtermism" interchangeably with "far futurism", especially if it created a motte-and-bailey style of argument in which the easily defensible minimal-definition claim that "future people matter equally" was used to respond to skepticism about claims that any specific category of efforts aiming to influence the far future is necessarily more impactful.

If you want to retain the feature of being "compatible with any empirical view about the best way of improving the long-run future", you might prefer the no-definition approach, because criterion ii is not philosophical, but an empirical view about what society currently wrongly privileges.

From the perspective of addressing the "bug" aspect, however, I think criteria ii and iii are good calls. They make some progress in narrowing who is a "longtermist", and they specify that it is ultimately a call to a specific action (so e.g. someone who thinks influencing the future would be awesome in theory but is intractable in practice can fairly be said not to meet criterion iii). In general, I think that in practice people are going to use "longtermist" and "far futurist" interchangeably regardless of what definition is laid out at this point. I therefore favor the second approach, with a minimal definition, as it gives a nod to the fact that it's not just a moral stance but also advocates some sort of practical response.

Comment by ishaan on How do you, personally, experience "EA motivation"? · 2019-08-16T21:18:17.015Z · score: 9 (8 votes) · EA · GW

The way I feel when the concept of a person in the abstract is invoked feels like a fainter version of the love I would feel towards a partner, a parent, a sibling, a child, a close friend, and towards myself. The feeling drives me to act in the direction of making them happy, growing their capabilities, furthering their ambitions, fulfilling their values, and so on. In addition to feeling happy when my loved ones are happy, there is also an element of pride when my loved ones grow or accomplish something, as well as fulfillment when our shared values are achieved. When engaging with the concept of abstract people, I can very easily imagine real people - each with a rich life history, unique ways of thinking, a web of connection, and so on...people who I would love if I were to know them. This motivates me to work hard to provide for their well being and growth, to undergo risks and dangers and sacrifices to protect them from harm, to empower and facilitate them in their undertakings, and to secure a future in which they may flourish - in the same ordinary sense that I imagine many other people do for themselves, their children and families, their tribes and nations, all people, all beings, and so on. I feel a sense of being united with all people as we work together to steer the universe towards our shared purpose.

You've italicized "effectively" as part of the question, but I don't think I feel any real distinction between "wanting to help people" and "wanting to help people effectively" - when I'm doing a task, it seems like doing it effectively is rather straightforwardly better than doing it ineffectively. "Effective altruism" does imply a level of impartiality regarding who benefits which I don't possess (since I care about myself, my friends, my family, and so on more than strangers), but it is otherwise the same. Even if I were only to help people who I directly knew and personally loved in a non-abstract sense, I would still seek to do so effectively.

Comment by ishaan on What posts you are planning on writing? · 2019-07-26T07:57:32.301Z · score: 4 (3 votes) · EA · GW

That very EA survey data, combined with Florida et al.'s The Rise of the Mega-Region data characterizing the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence, population size, etc. don't seem to matter much.

Comment by ishaan on What posts you are planning on writing? · 2019-07-25T22:33:14.417Z · score: 6 (5 votes) · EA · GW

Here's some stuff which I may consider writing when I have more time. The posts are currently too low on the priorities list to work on, but if anyone thinks one of these is especially interesting or valuable, I might prioritize it higher, or work on it a little when I need a break from my current main project. For the most part I'm unlikely to prioritize writing in the near future though because I suspect my opinions are going to rapidly change on a lot of these topics soon (or my view on their usefulness / importance / relevance).

1) Where does EA take root? The characteristics of geographic regions which have unusually high numbers of effective altruists, with an eye towards guessing which areas might be fertile places to attempt more growth. (Priority 4/10, mostly because I already have the data from working on another thing, but I'm not sure to what extent growth is a priority)

2) Systemic change - what does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact-analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (Priority 3/10; may move up the list later because I anticipate more data and relevant experience becoming available soon)

3) A (as far as I know, novel) thought experiment meant to complicate utilitarianism, which has produced some very divergent responses when I pose it in conversation. The intention is to call into question what exactly we suppose ought to be maximized. (Priority 3/10)

4) How to turn philosophical intuitions about "happiness", "suffering", "preference", "hedons", and other subjective phenomenological experiences into something which can be understood within a science/math framework, at least for the purposes of making moral decisions. (Priority 3/10)

5) Applying information in posts (3) and (4) to make practical decisions about some moral "edge cases". Edge cases include things like: non-human life, computer algorithms, babies and fetuses, coma, dementia, severe brain damage and congenital abnormalities. (priority 3/10)

6) How are human moral and epistemic foundations formed? If you understand the "No Universally Compelling Arguments" set of concepts, this post is basically helping people apply that principle in practical terms referencing real human minds and cultures, integrating various cultural anthropology and post modernist works. (priority 2/10)

Comment by ishaan on Ways Frugality Increases Productivity · 2019-07-19T20:58:35.668Z · score: 8 (4 votes) · EA · GW

I super agree with the title, but I think the text actually really undersells it! Runway not only increases your flexibility to not earn, but also reduces your stress and removes all sorts of psychologically difficult power dynamics that come with having a boss or otherwise being beholden to external factors for your well being (Yes, you may still have a boss or external factors, but now you won't need their continued approval or success to pay bills, and that makes all the difference). Also, frugality enables you to really splurge without worrying when it really counts. Additionally, If you do not have any large and expensive possessions, tend to live in low cost apartments, and don't have any dependents, you can move to whatever location it is most productive for you to be in with little to no overhead - whether that be across town or across the globe. Frugality in an urban context also forces close living situations (housemates) which can dramatically increase your social network. Further, you end up building scrappy skills and habits (e.g. negotiating apartments, meal planning, knowledge of public services, biking) which can really come in handy even when you're not being frugal.

If you have the privilege to be in circumstances where you are able to make money without spending most of it, it's good to take advantage of this if you can. Don't feel bad about it if you can't - it's not always simple or possible for everyone. But if you feel like it would be pretty easy for you to be frugal and you're choosing not to because you think spending a lot more makes you more productive, I strongly suggest reconsidering.

Another point worth considering is that if you are sufficiently frugal, and if "productivity" is truly your goal here, you can "increase your productivity" by taking that money and hiring a second person to work on your project with you. Can all your time saving expenses increase your productivity more than a whole second person? (I'm sure there are some circumstances for which the answer is yes, but I imagine that is rare.)

Comment by ishaan on Considering people’s hidden motives in EA outreach · 2019-06-01T21:41:14.086Z · score: 15 (8 votes) · EA · GW

You've laid out your opinions clearly. It is well cited, and has interesting and informative accompanying sources. It's a good post. However, I disagree with some portions of the underlying attitudes, (even while not particularly objecting to some of the recommended methods)

In an ideal world where all people are rational, the ideas mentioned in this forum post would be completely useless.

The thing is, this is a purely inside view. It sort of presupposes effective altruist ideas are correct, and that the only barrier to widespread adoption is irrationality, rather than any sensible sort of skepticism.

While humans can be irrational in distributing status, there is such a thing as legitimately earned status. If we put on our idealist hats for just a moment and forget all the extremely silly things humans accord status to, status can represent the "outside view": if institutions we respect seem to respect EA, that should increase our confidence in EA ideas. Not because we're status-climbing apes, but because "capable of convincing me" shouldn't be a person's only bar for trusting an argument. One should sensibly understand the limited scope of one's own judgement regarding big topics.

Now, taking our idealist hats off, obviously we can't just trust what most people think, or consider all "high status" institutions as equally legitimate. We have to be discerning. But there are institutions (such as academia, in my opinion) whose approval matters because it functions as legitimate external validation. It's not just social currency, it's a well earned social currency. Not only that, it's an opportunity to send our good ideas elsewhere to develop and mutate, as well as an opportunity to allow our bad ideas to be culled.

Unfortunately, people often are much less rational than we’d like to admit. Acknowledging this might be a pragmatic way for EA to improve outreach effectiveness.

The other issue is that when one is forming a broad, high-level strategy for engaging with the world, it should feel good. The words one uses should make one feel warm inside, not exasperated at the irrationality of the world and the necessity of stooping to slimy-feeling methods to win. Lest anyone irrationally (/s) dismiss this as "warm fuzzy altruism", in Bosch's linked taxonomy, let me pragmatically (/s) employ an appeal to authority: Yudkowsky has made the same point. If it feels cynical and a touch Machiavellian, it usually will not ultimately produce morally wholesome results. Personally, I think if you want to really convince people, you shouldn't use methods that would make them feel tricked if they knew what you were doing.

Not to mention, it's just sort of impractical for EA to attempt "we know you are irrational and we're not above pushing your irrationality buttons" strategies. EA organizations are generally scrupulous about transparency so that we can hold each other accountable. This means that any cynical outreach attempts will be transparent as well. In general, my sense is that idealist institutions can't effectively wield these more cynical methods.

Also as a sort of aside, I don't think there's anything irrational about appealing to emotions. The key is to appeal to emotions in a way that we bring out behavior which is a true expression of people's values. Often, when someone has a "bad" ideology, it is emotions of compassion that bring them out of it. Learning to better engage people on an emotional level is not in any way opposed to presenting logical and rational cases for things.

How can EA help people increase their status? In a non-cynical way?

By acquiring well-earned legitimacy! Make real positive impacts in areas other people care about. That means you can also help individual effective altruists make real measurable impacts that they can put on their resume and thereby increase their career capital. Create arguments that other intellectuals agree with and cite. Mentor other people and give them skills. Create mechanisms for people to be public about their donations and personal sacrifices they might make to further a cause in a socially graceful way (it inspires others to do the same). These are all things that the Effective Altruist community is currently doing, and it's been working regardless of whether or not people are wearing suits.

What all these methods have in common is that they work with people's rationality (and true altruistic motives), rather than around their irrationality (and hidden selfish motives) - these methods encourage involvement with EA because people are convinced that their personal involvement will help further their (altruistic, but also other) goals. The status-raising effects of these methods are secondary to real accomplishment; they put forth honest signals of competence and skill, which the larger society recognizes because it is actually valuable. The appeals to emotion work by being connected to the reality of actually accomplishing the tasks that those emotions are oriented towards.

So, I would generally agree with your call for EAs to think about more ways to gain legitimacy. I just want to strongly prioritize well-earned legitimacy... whereas this post comes off as though it's largely about gaining less legitimate forms of status. (Perhaps due to an implicit feeling that all status is illegitimate?)

Comment by ishaan on Which scientific discovery was most ahead of its time? · 2019-05-31T01:10:03.894Z · score: 7 (2 votes) · EA · GW

I think part of the "continuity" comes from the fact that things that were "ahead of their time" tended not to be useful yet, and so got lost. Or worse: perhaps several people had to independently come up with an idea, support it, and learn about it enough to use it before it was actually adopted; otherwise it just ended up sitting in some tinkerer's basement or a dusty old tome.

So, you can flip this question: Which discoveries and inventions seem to have occurred after their time (e.g. they were technologically possible, the prerequisite ideas were pretty well known, and they would have been immensely useful practically in that time and place) and why didn't civilization get at them before?

Comment by ishaan on There's Lots More To Do · 2019-05-30T23:21:23.492Z · score: 6 (4 votes) · EA · GW

Well, firstly, how much credence should we assign the actual analysis in that post?

Before we begin talking about how we should behave "even if" the cost per life saved is much higher than 5k - is there some consensus as to whether the actual facts and analysis of that post are actually true or even somewhat credible? (separate from the conclusions, which, I agree, seem clearly wrong for all the reasons you said).

As in, if they had instead titled the post "Givewell's Cost-Per-Life-Saved Estimates are Impossibly Low" and concluded "if the cost per life saved estimate was truly that low, we could have already gone ahead and saved all the cheap lives, and the cost would be higher - so there's something deeply wrong here"... would people be agreeing with it?

(Because if so, shouldn't the relevant lower bound for cost-per-life-saved in the impact evaluations be updated if they're wrong, and shouldn't that probably be the central point of discussion?

And if not... we should probably add a note clarifying, for any reader joining the discussion late, that we're not actually sure whether the post is correct or not, before going into the implications of its conclusions. We certainly wouldn't want people to start thinking that there aren't lives that can be saved at low cost if there actually are.)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-30T14:40:06.395Z · score: 1 (1 votes) · EA · GW

I think that's a little unfair. It wasn't just that he had an "unexamined assumption"; he declared that solidarity was the best way and named some organizations he liked, with no attempt at estimating or quantifying. And he's critiquing EA, an ideology whose claim to fame is impact evaluations. Can an EA saying "okay, that's great, I agree that could be true... but how about having a quantitative impact evaluation, of any kind, at all, just to help cement the case" really be characterized as "whataboutism" / a methodology war?

(I don't think I agree with your first paragraph, but I do think it's fair to argue that "but not all readers are in high income countries" is whataboutism until I more fully expand on what I think the practical implications are on impact evaluation. I'm going to save the discussion about the practical problems that arise from being first world centric for a different post, or drop them, depending on how my opinion changes after I've put more thought into it.)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-30T01:00:32.663Z · score: 0 (5 votes) · EA · GW
This is with regards to political ideologies where either the disagreement over fundamental values, or at least basic facts that inform our moral judgements, are irreconcilable. Yet there will also be political movements with which EA can reconcile, as we would share the same fundamental values, but EA will nonetheless be responsible to criticize or challenge, on the grounds those movements are, in practice, using means or pursuing ends that put them in opposition to those of EA.

I'm going to critique Connor's article, and in doing so attempt to "lead by example" in showing how I think critiques of this type are best engaged.

The best way to show solidarity is to strike at the heart of global inequality in our own land.

There are two problems with Connor's article, and they both have to do with this sentence.

The less important problem: Who is the "our" in the phrase "our own land"? We're on the internet, yet Connor just assumes the reader's allegiances, identity, location, etc. Why is everyone who is not in some particular land implicitly excluded from the conversation? Why is "us" not everyone and "our land" not the Earth?

EA is just as guilty of this, for example when people talk about dollars going farther "overseas". This is the internet; donors and academics and direct workers and so on live in every country, so where is "local" and where is "overseas", exactly? For all EA's globalist ambitions, there is this assumption that people who are actually in a low-middle income country aren't a part of the conversation. (I agree with everything the "dollar overseas" article actually says, just to be clear. The problem is what the phrasing means about the assumptions of the writers.)

It's bad when Connor does it and it's bad when effective altruists do it. Yes, we are writing for a specific audience, but that audience is anyone who takes the time to understand EA ideas and can read the language they're written in. This is part of what I'm talking about when I say that EA makes some very harmful assumptions about who exactly the agents of change are going to be and the scope of who "effective altruists" potentially are. This problem is not limited to EAs; it is widespread.

The problem isn't the phrasing itself, of course; it's what the phrasing indicates about the writer.

The more important problem (and on this forum, this one is preaching to the choir, of course): you can't just assume that your solidarity group is the most effective way to do things. Someone still has to do an impact evaluation on your social movement and the flow of talent and resources through it, including the particular activities of any particular organization enacting that movement.

Thus far, Effective Altruists are at the forefront of actually attempting to do this in a transparent way for altruistic organizations. The expansion to policy change is still in its infancy, but I would not be surprised if impact evaluations of attempts at political movements and policy changes begin surfacing at some point.

Nor can you just assume that the best way to do things is local and that people should, for some mysterious reason, focus on things "in their own lands". Yes, it may in fact be beneficial to be local at times, but you have to actually check; you have to have some reasonable account of why this is the most effective thing for you to do.

Once you agree on certain very basic premises (that all humans are roughly equally important moral subjects, that the results of your actions are important, etc) I think all effective altruism really asks is that you attempt the process of actually estimating the effect of your use of resources and talent in a rigorous way. This applies regardless of whether your method is philanthropy or collective action.

(What would Connor say if they read my comment? I suspect they would at the very least admit that it was not ideal to implicitly assume their audience like that. But I'd like to think any shrewd supporter of collective action would eventually ask..."Well okay, how do I actually do an impact evaluation of my collective action related plans?" And the result would hopefully be more rigorous and effective collective action, which is more likely to actually accomplish what it was intended to accomplish. I think it's important that the response deconstructed the false dichotomy between "collective action" and "effective altruism". The critic should begin asking: "okay, disagreements aside, what might these effective altruist frameworks for evaluating impact do for me?" and "If I think that this other thing is more effective, how can I quantitatively prove it?")

I think the "less important problem" is related to the "more important problem". For Connor, even if we grant that collective action is the best thing, the implicitly western "us" limits his vision as to what forms collective action could take, and which social movements people like himself might direct money, talent, or other resources towards. (For EAs, I would speculate that the implicit "us" limits our vision in different, more complicated ways, having to do with under-valuing certain forms of human capital in accomplishing EA goals - Just as Connor just assumes local is better, I think EAs sometimes just assume certain things that EAs tend to assume about exactly who is well placed to make effective impact (and therefore, who needs EA oriented advice, resources, education, training, etc). it's a subject I'm still thinking about, and it's the one I hope to write about later.

Comment by ishaan on Drowning children are rare · 2019-05-29T18:14:37.029Z · score: 8 (5 votes) · EA · GW

I think examining the number of low hanging fruits is important. I'm not yet sure if this analysis is correct, but I too would like to know exactly how many low hanging fruits there are, and exactly how low hanging they are, and whether this information is consistent with EA org's actions. If your analysis is true, people should put more energy into expanding cause areas beyond health stuff.

I think it might be nice if someone attempted a per-intervention spreadsheet / graph estimating how much more expensive the "next marginal life saved / qaly / disease prevented / whatever" would get with each additional dollar spent...while sort of assuming that currently existing organizations can successfully scale, or that new organizations can be formed to handle the issue. (So, sort of like "room for more funding", but focusing instead on the scale of the problem rather than the scale of the organization that deals with the problem). Has someone already done so? I know plenty of people have looked at problem scales in general, but I haven't seen much on predicting the marginal-cost changes as we progress along the scales.
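The marginal-cost curve idea above could be sketched with a toy model. Everything here is a hypothetical assumption for illustration - the base cost, the size of the "cheap opportunity" pool, and the geometric growth shape are placeholders, not real GiveWell figures:

```python
# Toy sketch of the per-intervention estimate described above: how the
# marginal cost per life saved might rise as cumulative spending grows.
# All numbers are hypothetical placeholders, not real charity data.

def marginal_cost(cumulative_spend, base_cost=3500, cheap_pool=500_000_000, growth=2.0):
    """Marginal cost per life saved after `cumulative_spend` dollars.

    Assumes costs stay near `base_cost` while the pool of cheap
    opportunities lasts, then grow geometrically as each successive
    pool of the same size is exhausted.
    """
    pools_exhausted = cumulative_spend // cheap_pool
    return base_cost * growth ** pools_exhausted

def lives_saved(total_budget, step=1_000_000):
    """Numerically integrate lives saved over a budget, $1M at a time."""
    lives, spent = 0.0, 0
    while spent < total_budget:
        lives += step / marginal_cost(spent)
        spent += step
    return lives
```

Replacing the step-function growth with real per-intervention data, to the extent it exists, would turn this into the kind of spreadsheet the comment asks for.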

Okay, that said: this last paragraph was in the original post but not the cross-post

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

I think there's potentially a much deeper problem with this statement, which goes beyond anything in the impact analysis. Even if one forgets all moral philosophy, disregards all practical analyses, and uses nothing but concrete practical personal experience and a gut sense of right and wrong to guide one's behavior...well, for me at least, that still makes living frugally to conserve scarce resources for others seem like a correct thing to do?

I know people who live in poverty, personally - both in the "below the American poverty line" sense (I guess I'm technically below that line myself in a grad student sort of way, but I know people who are rather more permanently under it), and in the "global poor" sense. Even by blood alone, I'm only two generations removed from people who have temporarily experienced global poverty of the <$2/day magnitude. So for me at least, it remains obvious on a personal face-to-face level that among humans the global poor are the ones who can make best personal use of scarce resources. I imagine there are people whose social circles don't include people in local or global poverty, but that's not an immutable fact of life - one can change that, if one thinks social circles are essential ingredients to making impact.

I don't really agree with the framing of "Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits" as something obviously distinct from helping the global poor. I don't feel like I or my loved ones could never experience global poverty. I feel like I'm part of a community and friendly with people who might directly experience or interact with global poverty. If being a low info donor doesn't help...are there not things one can do to become a "high info donor", or a direct worker for that matter?

I think that if I believed similarly to you - and if I understand correctly, you think: that abstractions are misleading, that face-to-face community building and support of loved ones and people you actually know is the important thing here, that it's important to build your own models of the world rather than trust more knowledgeable people to do impact evaluations for you, that it's really hard to overcome deceptive marketing practices by donation seekers...then, rather than claiming that there is no imperative to live frugally and engage with global poverty, I think I'd advocate that more EAs set some time aside to get some hands-on, face-to-face involvement with the people who generate impact evaluations (or at least, actually read the impact evaluations), that donors spend more time meeting people who do direct work, and that both donors and direct workers spend more time interacting with the supposed direct beneficiaries of their work. That seems really different from saying that the "utilitarian imperative" is wrong. (And maybe you do advocate all these other things as well, I don't mean to imply you don't...but why advocate for just staying within yourself and your circle?)

If there's a lot of misinformation and misleading going on, I do think there's ways to get around that by acting to put oneself in more situations where one has more opportunities for direct experience and building one's own models of the world. Going straight to the idea that you should just take care of yourself and people you currently know seems ...a bit like giving up? And even if you don't think a global scope is appropriate, is there not enough poverty within your immediate community and social circle that there remains an urgency to be frugal and use resources to help others?

I just don't see how your analysis, even if totally correct, leads to the conclusion that the imperative to frugality and redistribution is destroyed. I mean, as long as we're calling it "living like a monk", at least some of the actual monks did it for exactly that purpose, in the absence of any explicit utilitarianism, with the people they tried to help largely on a face-to-face basis. It's not an idea that rests particularly heavily on EA foundations or impact evaluations.

(I don't want to be construed as defending frugality in particular, just claiming the general sense of the ethos of redirecting resources to people who may need it more, and the personal frugality that is sometimes motivated by that ethos, as being positive... and that the foundations of it do not rely on trusting Givewell, Effective Altruism, and so on)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-29T05:21:50.212Z · score: 19 (10 votes) · EA · GW

This is sort of an off the cuff ramble of an answer for a topic which deserves more careful thinking, so I might make some hand-wavy statements and grand sweeping claims which I will not endorse later, but:

First off, I feel that it's a little unhelpful to frame the question this way. It implicitly forces answers to conflate some fairly separate concepts: 1) The System, 2) leftists, 3) critiques of EA.

Here's a similarly sort of unhelpful way to ask a question:

What are these "cognitive biases" that effective altruist critiques of veganism are seeking to make us aware of?

How would you answer?

Most effective altruists support veganism! The central insight motivating most vegan practices is similar to the central insight of EA. Don't lose sight of that just because some branches of effective altruists think AI risk rather than veganism is the best possible way to go about doing good, and cite cognitive biases as the reason why people might not realize that AI risk is the top priority.

Cognitive biases are a highly useful but fully generalizable concept that can be used to support or critique literally anything. You should seek to understand them in their own right...not only in light of how someone has used them to form a "critique of veganism" by advocating for AI risk instead.

That's how you'd answer, right? So, in answer to your question:

What exactly is the system EA's (leftist) critics are seeking to change?

Most ideologically consistent leftists support EA, or would begin supporting it once they learn what it is. Utilitarianism / widening of the moral circle is very similar to ordinary lefty egalitarianism. Don't lose sight of that just because some branches of the left don't think some particular EA methods are the best possible way to save the world, and cite Failure to Challenge the System as the reason.

The System is a highly useful but fully generalizable concept that can be used to support or critique literally anything. You should seek to understand it in its own right...not only in light of how someone might invoke it to form a "critique of (non-systemic) effective altruism" by advocating for systemic change instead.

I hope this analogy made my point - this question implicitly exaggerates a very minor conflict, setting up an oppositional framework which does not really need to exist.

...okay, so to actually attempt to answer the question rather than subvert it. Please note that the following are not my own views, but a fairly off-the-cuff representation of my understanding of a set of views that other people hold. Some of these are "oversimplified" versions of views that I do roughly hold, while others are views that I think are false or misguided.

What is the system?: Here's one oversimplified version of the story: from the lower to upper paleolithic, egalitarian hunter gatherers gradually depleted the natural ecology. Prior to the depletion, generally most able bodied persons could easily provide for themselves and several dependents via foraging. Therefore, it was difficult for anyone to coerce anyone else, no concepts of private property were developed, and people weren't too fussy about who was related to whom.

In the neolithic, the ecology was generally getting depleted and resources were getting scarce. Hard work and farming became increasingly necessary to survive and people had incentive to violently hoard land, hoard resources, and control the labor of others. "The System" is the power structures that emerged thereby. It includes concepts of private property, slavery, marriage (which was generally a form of slavery), social control of reproduction, social control of sex, caste, class, racism, etc - all mechanisms ultimately meant to justify the power held by the powerful. Much like cognitive biases, these ideas are deeply built into the way all of us think, and distort our judgement. (E.g. do you believe "stealing" is wrong? Some might argue that this is the cultural programming of The System talking. Without conceptions of property, there can be no notion of stealing)

Despite resource scarcity declining due to tech advance, the bulk of human societies are still operating off those neolithic power hierarchies, and the attending harmful structures and concepts are still in place. "Changing the system" often implies steps to re-equalizing the distribution of power and resources, or otherwise dismantling the structures that keep power in the hands of the powerful.

By insisting that the circle of moral concern includes all of humanity (at least), and actively engaging in a process which redistributes resources to the global poor, effective altruists would generally be considered a positive contribution to the dismantling of "The System". I do think the average leftist would think Effective Altruism, properly pitched, is generally a good idea - as would the average person regardless of ideology, realistically, if you stuck to the basic premises and didn't get too far into some of the more unusual conclusions they are sometimes taken to.

So how come some common left critiques of EAs invoke "The System"?:

Again, I don't (entirely) agree with all these views, I'm explaining them.

1) Back when the public perception of EA was that it was about "earning to give" and "donating"...especially when it seemed like "earning to give" meant directing your talent to extractive corporate institutions, the critique was that donations do not actually alter the system of power. Consider that a feudal lord may "give" alms to the serf out of noblesse oblige, but the fundamentally extractive relationship between the lord and serf remains unchanged. I put "give" in quotes because, if you really want to understand The System, you have to stop implicitly thinking of the "lord's" "ownership" of the things they "nobly" "give" to the "serf" as in any way legitimate in the first place. The lord and serf may both conceptualize this exchange as the lord showing kindness towards the serf, but the reality is that the lord, or his ancestors, actually created and perpetuated the situation in the first place. Imagine the circularity of the lord calculating he had made a magnanimous "impact" by giving the serf a bit of the gold... that was won by trading the grain which the serf had toiled for in the first place. Earning to give is a little reminiscent of this...particularly in fields like finance, where you're essentially working for the "lord" in this analogy.

2) Corporate environments maximize profit. Effective altruists maximize impact. As both these things are ultimately geared towards maximizing something that ultimately boils down to a number, effective altruist language often sounds an awful lot like corporate language, and people who "succeed" in effective altruism look and sound an awful lot like people who "succeed" in corporate environments. This breeds a sense of distrust. There's a long history within leftism of groups of people "selling out" - claiming to try to change the system from inside, but then turning their backs on the powerless once they got power. To some degree, this similarity may create distasteful perceptions of a person's "value" within effective altruism that is analogous to the distasteful perception of a person's "value" in a capitalist society. (E.g. capitalist society treats people who are good at earning money as sort of morally superior. Changing "earning money" to "causing impact" can cause similarly wrong thinking)

3) EAs to some extent come off as viewing the global poor as "people to help" rather than "people to empower". The effective altruist themself is viewed as the hero and agent of change, not the people they are helping. There is not that much discussion of the people we are helping as agents of change who might play an important part in their own liberation. (This last one happens to be a critique I personally agree with fairly wholeheartedly, and plan to write more on later)

> To the extent the systemic change criticism of EA is incorrect, as EA enters the policy arena more and more, we will once again come in friction with leftist (and other political movements), unlike EA has since its inception. The difference this time is we would be asserting the systemic change we're pursuing is more effective (and/or in other ways better) than the systemic change other movements are engaging in. And if that's the case, I think EA needs to engage the communities of our critics just as critically as they have engaged us. This is something I've begun working on myself.

I would strongly recommend not creating a false dichotomy between "EA" and "Leftists", and setting up these things as somehow opposed or at odds. I'm approximately an EA. I'm approximately a leftist. While there are leftist-style critiques of EA, and EA-style critiques of leftism, I wouldn't say that there's any particular tension between these frameworks.

There is really no need to draw lines and label things according to ideology in that manner. I think the most productive reply to a "X-ist" critique of EA is an X-ist support of EA, or better yet, a re-purposing of EA to fulfill X-ist values. (Yes, there are some value systems for which this cannot work...but the egalitarian left is definitely not among those value systems)

> to the extent the systemic change criticism of EA is correct, EA should internalize this criticism, and should effectively change socioeconomic systems better than leftists ever expected from us


And to that I would add, don't needlessly frame EA as fundamentally in opposition to anyone's values. EA can be framework for figuring out strategic ways to fulfill your values regardless of what those values are. (Up to a point - but again, "leftists" are well within the pale of that point.)

> ...and perhaps better than leftist political movements themselves (lots of them don't appear to be active or at least effective in actually changing "the system" they themselves criticize EA for neglecting).

Well, I think this is an unhelpful tone. It is, again, setting up EA as something different and better than leftism, rather than a way for us to fulfill our values - even if our values aren't all exactly the same as each other's. This isn't particular to leftism. If you wanted the members of a church congregation to donate to Givewell, you should focus on shared values of charity, not "EAs could save more souls than Christianity ever could". The goal for EA is not to engage against other ideologies; the goal (to the extent that EA ideas are good and true, which obviously they may not all be) is to become part of the fabric of common sense by which other ideologies operate and try to perpetuate their goals.

Beyond the tone it's also just not true, in my opinion. Seems to me that social change does in fact occur constantly due to political movements, all the time. What's more, I'm pretty sure that the widespread acceptance of the basic building-block concepts of effective altruism (such as: all people are equally important) is largely due to these leftist social movements. I don't think it's a stretch to say that EA itself is at least in part among the products of these social movements.

Comment by ishaan on Drowning children are rare · 2019-05-29T04:09:29.169Z · score: 14 (5 votes) · EA · GW

That was my first question too, but I think I figured out the answer? Maybe? (Let me know if I got this right, BenHoffman?)

BenHoffman's central claim is not that people aren't suffering preventable diseases. It is only that "drowning children" (a metaphor for people who can be saved with a few thousand dollars) are rare.

So they're asking: if the current price of saving a life is so low, and the amount of available funding so high, why hasn't all that low hanging fruit of saving "drowning children" been funded already? And if it has been, shouldn't the marginal price be higher by now?

And the answer supposedly can't be "there are simply too many low hanging fruits, too many drowning children", because if you assume that all low hanging fruits are related to communicable, maternal, neonatal, and nutritional diseases, there's a maximum of ten million fruits (low hanging or not). The most generous assumption for the "there's just too many low hanging fruits for us to pick them all, and that's why the price remains low" position is that all possible fruits are low hanging - which is why it makes sense to assume they're all at the marginal price. The claim is that if you were truly purchasing all the low hanging lives saved, and your budget was that high, the marginal price should have gone up by now, because you should already have bought up all the cheap life-saving methods.
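The arithmetic behind this argument can be made explicit. The ten-million figure is the upper bound assumed in the post; the cost per life is a hypothetical placeholder in the range commonly quoted for top charities, not a number from the post:

```python
# Back-of-envelope version of the argument above. The ~10 million cap on
# communicable/maternal/neonatal/nutritional deaths comes from the post;
# the cost-per-life figure is a hypothetical placeholder.

annual_preventable_deaths = 10_000_000   # upper bound assumed in the post
cost_per_life = 5_000                    # hypothetical marginal price, USD

# If every one of these deaths were a "low hanging fruit" at this price,
# clearing the whole pool for one year would cost:
total_cost = annual_preventable_deaths * cost_per_life
print(f"${total_cost / 1e9:.0f}B per year")  # → $50B per year
```

If the philanthropic funding actually available were anywhere near that figure, the cheap opportunities should already be bought up and the marginal price should be rising - which is the tension the post is pointing at.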

(I'm just exploring the thought process behind this particular subsection of the analysis, which is not to be taken as being agreement with the overall argument, in whole or in part.)

Comment by ishaan on How to improve your productivity: a systematic approach to sustainably increasing work output · 2019-05-28T20:06:48.389Z · score: 1 (1 votes) · EA · GW

I haven't tried a mini-stepper! Next time I'm at the gym I'll check if they have one I can try. Even if it does not work as well, it would certainly be a lot cheaper and more portable.

Untested Speculation: People using steppers/bikes etc. might stop exerting conscious attention to move once they get sufficiently absorbed in their work. A special property of treadmills is that if you stop, you'll be carried backwards and away from your keyboard - this trains you out of stopping pretty instantly. Steppers/bikes/etc wouldn't automatically have this property - though perhaps one could mimic the training by adding a "don't stop!" signalling noise or something. Ultimately I think it's probably important that the movement not require much conscious attention.

Comment by ishaan on EA Survey 2018 Series: Cause Selection · 2019-05-23T06:06:11.405Z · score: 1 (1 votes) · EA · GW

Effective Givers.pdf?dl=0

This isn't really what I was looking for, but it's an "online national sample of Americans" polled on giving to deworming vs. Make-A-Wish and the local choir. I'm hoping to find something more focused on the diversity of causes within EA, with more well-defined and more adjacent populations.

I mentioned college professors above, but I can think of lots of different populations, e.g. "students from specific colleges", "members of adjacent online forums", "startup founders", "Doctors Without Borders people", "Teach for America people", or even "non-EA friends and relatives of EAs", which might be illustrative as points of comparison - some easier to poll than others. Generally I think the most useful data comes from those who are representative of people who are already sort of adjacent to EA, represent key institutions, and whose buy-in would be most practically useful for movement building over decades, which is why I went for "college professors" first.

Comment by ishaan on EA Survey 2018 Series: Cause Selection · 2019-05-23T04:02:24.373Z · score: 4 (3 votes) · EA · GW

If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I'm sure people have been asking whether EA engagement causes this, or whether people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?

Either way: one _might_ conclude that "climate change" and "global poverty" are more "mainstream" priorities, where "mainstream" is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?

Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more "mainstream" or more "hegemonic" in some fashion? Bearing in mind that "mainstream" / distance from EA is a continuum, and it would be useful to sample multiple points on that continuum. (For example, "College Professors" might be representative of opinions that are both more mainstream and more hegemonic within a certain group)

(I'll try to come back to this comment and link any relevant data myself if I come across it later)

Comment by ishaan on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2019-05-22T03:44:12.913Z · score: 3 (2 votes) · EA · GW

What are your thoughts on this?

In particular

> a cause supported by the community that seems more funding constrained than talent constrained – is ending factory farming. Jon Bockman of Animal Charity Evaluators, told me that vegan advocacy charities have lots of enthusiastic volunteers but not enough funds to hire them, meaning that funding is the greater bottleneck (unless you have the potential to be a leader and innovator in the movement).

Please see also my reply to Benjamin Todd's comment for a longer version of this question, which I wanted to address to both of you, but I don't think this forum has user tagging functionality.

Comment by ishaan on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2019-05-22T03:08:53.348Z · score: 5 (3 votes) · EA · GW

In 2015 you (Benjamin) wrote a post which, if I'm reading it right, aspires to answer the same question, but is in very direct contradiction with the conclusions of your (Katherine's) post regarding which causes are relatively talent constrained. I would be interested in hearing about the sources of this disagreement from both of you (Assuming it is a disagreement, and not just the fact that time has passed and things have changed, or an issue of metrics or semantics)

Here is the relevant excerpt:

...Most of the causes the effective altruism community supports are more talent constrained than funding constrained. For example (in all of the following, I’ve already taken account of replaceability): 1) International development... 2) Building the effective altruism community and priorities research... 3) AI safety research...
...The main exception to this – a cause supported by the community that seems more funding constrained than talent constrained – is ending factory farming. Jon Bockman of Animal Charity Evaluators, told me that vegan advocacy charities have lots of enthusiastic volunteers but not enough funds to hire them, meaning that funding is the greater bottleneck (unless you have the potential to be a leader and innovator in the movement). So, the more weight you put on this cause, the more funding constrained you’ll see the community. But the situation could reverse if you think developing meat substitutes is the best approach, because that could be pursued by for-profit companies or within academia.

It sounds like both of you (Katherine and Benjamin) agree that AI is "talent constrained". Pretty straightforward: it's hard to find sufficiently talented people with the specialized skills necessary.

It sounds like the two of you diverge on global poverty, for reasons that make sense to me.

Katherine's analysis, as I understand it, is straightforwardly looking at what Givewell says the current global poverty funding gap is...which means that impact via talent basically relies on doing more good with the existing money, performing better than what is currently out there. (And how was your talent gap estimated? Is it just a count of the currently open positions on the EA job board?)

Benjamin's analysis, as I understand it, is that EA's growing financial influence means that more money is going to come in pretty soon, and also that effective altruists are pretty good at redirecting outside funds to their causes (so, if you build good talent infrastructure and rigorously demonstrate impact and a funding gap, funding will come)

Is this a correct summary of your respective arguments? I understand how two people might come to different conclusions here, given the differing methods of estimating and depending on what they thought about EA's ability to increase funding over time and close well demonstrated funding gaps.

(As an aside, Benjamin's post and accompanying documents made some predictions about the next few years - can anyone link me to a retrospective regarding how those predictions have borne out?)

It sounds like you diverge on animal rights, for reasons I would like to understand

Benjamin, it sounds like you / Jon Bockman are saying that ending factory farming is exceptional among popular EA causes in having more talent than its organizations can hire and being in sore need of funding.

Whereas Katherine, it sounds like you're saying that animal rights is particularly in need of talent relative to all the other cause areas you've mentioned here.

These seem like pretty diametrically opposed claims. Is this a real disagreement, or have I misread? I'm not actually sure what the source of the disagreement is, other than Katherine and Jon having different intuitions, or bird's-eye views of different parts of the landscape. Has Jon written more on this topic? If it's just a matter of two people's intuitions, it doesn't leave much room for evaluating either claim. (I get the sense that Katherine's claim isn't based on intuition, but on the fact that EA animal organizations are currently expanding, which increases the estimated number of open job postings available in the near future. Is that correct?)

(Motivation: I'm reading this post now as part of the CE incubation program's reading list, and felt surprised because the conclusions conflicted with my intuitions, some of which I think were originally formed by reading Benjamin's posts a few years ago. As the program aims to set me on a path toward redirecting funding, redirecting talent, creating room for more talent, and/or creating room for more funding within global poverty or animal issues, the answers to these questions may be of practical value to me.)

I'd be happy if either of you could weigh in on this / explain the nature and sources of disagreement (if there is in fact a disagreement) a bit more!

(PS - can I tag two people to be notified by a comment? Or are people notified about everything that occurs within their threads?)

Comment by ishaan on How to improve your productivity: a systematic approach to sustainably increasing work output · 2019-05-21T06:30:41.457Z · score: 4 (3 votes) · EA · GW

If you're taking suggestions for things to test, personally my (unquantified) single most successful productivity intervention yet has been putting a treadmill under my desk, and then stacking a box on the table to raise my laptop to elbow height.

My productivity per hour and general willpower to work are unchanged, but I'm now able to be on the computer for much longer hours at a stretch because I don't have to deal with the postural pain of sitting too long, and I no longer have to fight off my natural tendency to fidget and avoid being still. (I just switch from walking or standing to sitting or lying down as I get tired, with no interruption in workflow.)

Perhaps more notable is the (I think not unreasonable) expectation that the additional 1.5-3.5 hours of walking per day will ultimately increase my total number of productive years and decrease my sick days. That's not so easy to test on myself, but the benefits of walking are pretty well established. (Though of course the primary motivator there is not productivity, really)

I've also noticed improvements in baseline mood immediately after a long "walk", improving general physical stamina over time (e.g. I can walk farther without discomfort, and I don't get as easily tired if I take on a task which requires being on my feet all day), and better lower body mobility and flexibility at the gym (e.g. deeper squats with better form).

Conflict of interest: it may end up being convenient for me if the CE office ends up getting a treadmill desk :P

Comment by ishaan on Overview of Capitalism and Socialism for Effective Altruism · 2019-05-17T21:53:33.613Z · score: 5 (4 votes) · EA · GW

For skimmers, page 10 of Candidate Scoring System 5 has a diagram nicely outlining the overall methodology of this analysis. My overall impression of the document is that it's not dissimilar in basic outlook from a mainstream political candidate evaluation or voter's guide, leaning towards more quantitatively driven methods and keeping an eye on issues favored by Effective Altruists.

Things I like: This provides one of the few analyses of politicians on EA-specific issues like X-risk, animal welfare, and global poverty. I think that's potentially important, as (to my knowledge) there are not currently enough political evaluations of candidates based on those criteria. I support this and other volunteer projects attempting this sort of thing.

Things I'm ambivalent about: When it comes to areas which are non-neglected in non-EA political discourse, the kind you might find in a mainstream voter's guide, I don't currently feel more inclined to trust it over non-EA evaluations. That is to say, I don't see any reason to consider it unusually trustworthy with respect to which candidates are best for specific cause areas such as climate change, education, abortion, etc., let alone broad ideological evaluations of "capitalism vs. socialism" as general philosophies. This is not meant to be discouraging - creating voting guides is a crowded field, and being the "most trustworthy" isn't necessarily easy - though I do wonder if it might be better going forward to place a greater focus on evaluating the less crowded areas.

Some critique of the scope: I think an EA framework evaluating mainstream politics should include interventions (e.g. plans of organizing, activism), not just cause areas, and an analysis of "counterfactual" / "marginal" impact of those interventions, and a sense of the "tractability" when possible...not just the gross impact and importance of the policies themselves.

Whether or not some ideology, framework, -ism, policy recommendation, transfer of power, deep systemic change, etc. can be rigorously shown to be superior to some alternative in terms of practical impact is politically interesting, and might change my vote or my ideological loyalty, but it doesn't help in terms of altruistic activity if the problem is intractable. For me to consider it effective as a form of altruism, I'd want a description of the various methods (beyond just personally voting) to influence political outcomes, and the resources / price tag of shifting the probabilities of an election outcome, in addition to estimates of the impact of doing so. (Positive side effect - this would help keep focus on political issues in proportion to their estimated practical importance.)

I don't think that's a crowded area, either - I've encountered some (but not much) mainstream work on the cost-effectiveness of political activity.

(Maybe that's not the intention/scope of this project, though, and that's okay - my main intent is to say that it would be really good if in general politically based interventions started focusing more on that part of the analysis)

Edit: Maybe this is the wrong thread, as I now realize there are other posts about the document as a whole, but I'll leave this up unless someone thinks I should move it.

Comment by ishaan on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-13T23:23:51.164Z · score: 8 (3 votes) · EA · GW

I'm arguing that more spending on psychedelic research & some advocacy work (in particular, helping the rollout of FDA approval of MDMA & psilocybin go smoothly) would be leveraged.

I guess what it boils down to is how much EA money you think would need to go into accomplishing this, and for what expected outcome. I'd like to make the distinction that if you can recruit some talent from the EA community to use money provided by, say, Clarity Health Fund (which is earmarked for psychedelics anyway) to further psychedelics related causes in a more effective way, then I am absolutely all for it and in full support. But we ultimately want high impact with fairly small amounts of EA money, or via the use of free EA talent, or EA talent that is paid in ways other than EA money, because of the high counterfactual price tag on EA money. Calculating the expected outcome of this is tough, but possible, and I would change my mind if I saw a plausible estimate that came out as being impactful.

I think that I understand why you think this will work, and hopefully the next few paragraphs demonstrate that understanding. And I think it's important to acknowledge that GiveWell (by their own admission) did not account for leverage in their early evaluations, and this may have created some undesirable anchoring / lock-in effects with respect to Effective Altruism recommended activities.

And I agree that "leverage" can mean that causes that seem "less efficient" in terms of a strict "direct impact" / "resources spent" metric may have been unjustifiably ignored by the EA community, especially if the form of "leverage" involved is more complex than a simple fundraiser. Moderately efficient causes could benefit from an EA mindset, so long as resources are being redirected from less efficient to more efficient areas and not the other way around. Most of the world's resources either can't (due to logistics) or won't (due to the priorities of power structures and individual donors) be directed towards the most high impact causes. If you can recruit those resources, with a realistic assessment of your impact being greater than what those resources would otherwise have gone to, it would still be worthy of the name effective altruism, and you would still have more direct impact at the end of the day, using resources that would otherwise have gone somewhere with less direct impact.

In fact, when you put it that way, there's a whole host of cause areas you might consider. While trying to "End Homelessness in America" doesn't beat distributing mosquito nets to low-income countries on a "direct impact" / "resources spent" metric, there is plenty of money that you might think of as effectively "earmarked" for USA purposes only, or earmarked for a certain type of intervention. If you redirect resources that would otherwise be spent on something less impactful, a high difference in impact means that you have done a good job, because of leverage.

I think many in the EA community recognize this to some extent, and GiveWell is currently investigating opportunities to influence government policy and improve government spending. The concept of leverage really broadens the scope of what "EA" could mean, and potentially does open the door to sometimes helping people in high-income countries or furthering causes that don't boast efficiency per dollar - although I would guess generally not via financial help, but rather via skills or spreading a message (e.g. influence donors who are of a less global mindset to donate to more effective causes within the local parameters they care about, or help organizations that aren't necessarily focused on doing the absolute maximum good per dollar still become more effective within the narrower scope of their goals, etc.). One could consider psychedelics legalization to be potentially a part of such activities.

Now that I've (hopefully) shown that I understand where you're coming from here, let me explain why I still don't think this will work, and what it would take to change my mind.

From the perspective of an individual, the act of recruiting EA money to your cause is also a form of "leverage". This applies to everyone and everything, not just psychedelics: if you believe that EA is generally on the right track, then the less "EA resources" you leverage to your cause, and the more otherwise inefficient resources you leverage to your cause, the better your (counterfactually informed) impact will be. Even people doing global poverty should preferentially recruit non-EA funds, if they believe that EA funds are otherwise well allocated.

I would (from my currently naive perspective) agree with you that investing in key research goals probably would be "leveraged" impact, in the sense that directing some EA money to this might lead to other resources being redirected to it down the line. If we're talking about potentially diverting funding from other EA causes, we'll need to be super stringent about impact per dollar. We can and should include "leverage" in those calculations, but said calculations must occur.

From what I understand, you're essentially suggesting just a little bit of research and advocacy, on a reasonable expectation that it will catalyze some sort of tipping point, redirecting funds from various non-EA sources towards the problem. But as long as you're working within an EA framework, it's important to quantify your estimate of the impact of that investment.

To estimate the...counterfactual-blind?... impact of your (research, advocacy, whatever) actions, you'd have to estimate the expected impact on policy outcome (how much earlier do we estimate the relevant FDA approvals, policy changes, etc happen as a result of the diverted funds) and the expected value of those policy outcomes (how many people will get better treatment as a result of those outcomes, relative to the treatment they otherwise would have gotten). In other words, how many people benefited?

And then, you have to introduce the counterfactual question of what those resources could be spent on instead. You have to first calculate the counterfactual impact of any EA resources, which (unless EAs are misguided) have a particularly heavy counterfactual impact price tag. (At least when it comes to asking for money? Judging from what I've seen posted about the EA job market, recruiting EA talent could still be a good move.) After that, you'd have to calculate the counterfactual impact of all the other resources you leveraged (though I think it would be okay to just place that at zero for now, to keep the models simple enough to use).

And… despite non-EA leverage, I just don't think these numbers will come out favorably, for all the reasons described in the previous comment. Even if you make brilliant use of leverage to mostly set aside the fact that the countries in a position to benefit from this are expensive to operate in, you'd still have to deal with the fact that the lack of research and attendant policy changes has little to do with global bottlenecks to access… which means that the numerator in the "beneficiaries / EA resources spent" equation is going to be pretty low. I don't mean all the resources you gain via leverage - you can make your own calculations of what the counterfactual impact price tag of those is, and depending on who you leverage, maybe you could even make a case for it being zero. I mean specifically the EA money.

A person using EA money for this cause would have to operate on a shoestring budget to beat the counterfactual cost. If you agree with my earlier statement that, per individual, a year of clean water is, let's say, 10x as good as reaping the unrealistically-best-case scenario of psychedelics research a decade earlier than otherwise (which seems to be really lowballing it to me), you'd have to honestly believe that every EA-derived $50k (ignoring further leverage) you spend pushes the timeline forward by a year just to "break even". I admit I don't fully understand this issue or the plan, but that seems really optimistic when I compare it to the aforementioned $19m figure required to push un-stigmatized drugs through FDA approval.
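To make the break-even arithmetic explicit, here's a minimal sketch. The variable names and unit choices are placeholder assumptions (benefit measured in arbitrary units, with the numbers picked so the output reproduces the illustrative $50k figure above), not estimates derived from any source:

```python
# Break-even sketch: how much timeline acceleration must each EA dollar buy
# before funding psychedelics research matches the counterfactual EA use of
# the same money? All numbers below are placeholder assumptions.

def breakeven_years_per_dollar(benchmark_value_per_dollar: float,
                               value_of_one_year_acceleration: float) -> float:
    """Years of earlier approval each dollar must buy to match the benchmark."""
    return benchmark_value_per_dollar / value_of_one_year_acceleration

# Placeholder assumptions:
benchmark_value_per_dollar = 1.0   # benefit units per $ for the counterfactual EA use
value_of_year = 50_000.0           # benefit units from moving approval forward one year

years_per_dollar = breakeven_years_per_dollar(benchmark_value_per_dollar, value_of_year)
dollars_per_year = 1 / years_per_dollar

print(f"Each $1 must buy {years_per_dollar:.0e} years of acceleration,")
print(f"i.e. ${dollars_per_year:,.0f} of EA money must buy one year, just to break even.")
```

The point of the sketch is only that the break-even threshold scales linearly: halve the assumed counterfactual value of EA money and the required acceleration per dollar halves too.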

Anyway, if someone were to do those calculations, it would be a good use of time, because developing methods to evaluate the impact of research/advocacy on policy change in general is something we need. (stay tuned! I may be posting more on that later).

In fact, it's worth just assuming psychedelics are as useful as any drug currently in use when doing your calculations, because even if psychedelics aren't it, there are many other items in this general class, and an estimate of the expected value of adding funds to promising research in general would cover them all. If you were to demonstrate that psychedelic research/advocacy might have that level of impact by these metrics, it would be a pretty big deal even if this particular class of under-researched psychoactive compounds ended up being a flop, because there are a lot of other things that would potentially also become high impact by the same arguments.

In these discussions of impact, I think it's worth pointing out that unlike, say, x-risk, something like psychedelics research/advocacy is sufficiently concrete that we can reasonably attempt to quantify the impact of our activities, at least to within one or two orders of magnitude, and compare it against research/advocacy for other policy interventions (which happen in low-income countries, which have more people and are cheaper to lobby in).

This hopefully goes without saying, but I don't mean to claim that psychedelics is irrelevant and EAs should not pay any attention to this at all: If you or anyone else has done the research and feel that this is a low hanging fruit, even if the aforementioned impact evaluation doesn't come back as highly efficient, I would encourage that person to find a way to pluck it...and if some of the under-utilized EA-talent was leveraged towards the problem, it could be a good thing. I just wouldn't support redirecting global poverty or x-risk focused funding to this (unless some very surprising and convincing impact evaluations along the lines of what I described came out and changed my mind).

(Oh also, I think my use of the word "legalizing" in the previous comment might have been misleading - I just meant the general situation where our interventions allow psychedelics to be used in more and more contexts without breaking the law, not legalizing recreational use specifically.)

Comment by ishaan on Is preventing child abuse a plausible Cause X? · 2019-05-11T17:21:53.174Z · score: 5 (3 votes) · EA · GW

I don't know what is the previous level of knowledge on this topic of you and other readers of this forum, and which parts of my knowledge would be obvious to you and which not;

I think it's generally best to assume the level of common knowledge you'd expect from a graduate student in an unrelated field.

what would be, from the perspective of a cause, the benefit of being a "cause X" in the EA community

Right now I think the main effect would be more intellectual talent directed towards researching the various strategies that might further the cause. In particular: figuring out the bottlenecks to improving that area, attempting to measure how much those improvements cost (especially if the key bottleneck is "lack of funding", but even otherwise), and attempting to measure the scope of how much we expect they improve quality of life.

If the outcomes of those analyses suggest that it's promising, then some potential results would include: funding directed towards those strategies, advising of more people to acquire skills and take careers that directly contribute to those strategies, and more intellectual talent devoted to improving those strategies on a meta level.