Update On Six New Charities Incubated By Charity Entrepreneurship 2020-02-27T05:20:18.346Z


Comment by ishaan on EA Debate Championship & Lecture Series · 2021-04-05T18:01:22.201Z · EA · GW

Thanks for hosting this event! It was a pleasure to participate. 

Comment by ishaan on The Intellectual and Moral Decline in Academic Research · 2020-09-28T17:23:09.089Z · EA · GW

Without making claims about the conclusions, I think this argument is of very poor quality and shouldn't update anyone in any direction.

"As taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent"

Taking all claims at face value, you should not be persuaded that more money causes retractions just because retractions increased roughly in proportion with the overall growth of the industry. I checked the cited work to see if there were any mitigating factors which justified making this claim (since maybe I didn't understand it, and since sometimes people make bad arguments for good conclusions) and it actually got worse - they ignored the low rate of retraction (it's around 0.2%), they compared US-only grants with global retractions, they did not account for increased oversight and standards, and so on.
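To make the proportionality point concrete, here's a toy calculation (every number below is hypothetical, chosen only to mirror the quoted growth figures and the ~0.2% base rate):

```python
# If research output scales roughly with funding, a 700% funding increase
# and a 900% retraction increase leave the retraction *rate* nearly flat.
# All numbers are illustrative placeholders, not real data.

def growth(new, old):
    """Percent growth from old to new."""
    return (new - old) / old * 100

papers_then = 100_000
papers_now = papers_then * 8             # output up ~700%, tracking funding
retractions_then = papers_then * 0.002   # ~0.2% baseline retraction rate
retractions_now = retractions_then * 10  # retractions up ~900%

print(growth(papers_now, papers_then))            # 700.0
print(growth(retractions_now, retractions_then))  # 900.0
print(retractions_then / papers_then)             # 0.002
print(retractions_now / papers_now)               # 0.0025
```

With numbers like these, the headline "900% more retractions" corresponds to the rate moving from 0.2% to 0.25% - not the kind of shift that supports a causal story about funding.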

The low quality of the claim, in combination with the fact that the central mission of this think tank is lobbying for reduced government spending in universities and increasing political conservatism on campuses in North Carolina, suggests that the logical errors and mishandling of statistics we are seeing here are partisan motivated reasoning in action.

Comment by ishaan on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T04:48:42.979Z · EA · GW

This matches my understanding; however, I think it is normal for non-profits at the budget size the EA ecosystem currently has to have this structure.

Bridgespan identified 144 nonprofits that have gone from founding to at least $50 million in revenue since 1970...[up to 2003]...we identified three important practices common among nonprofits that succeeded in building large-scale funding models: (1) They developed funding in one concentrated source rather than across diverse sources; (2) they found a funding source that was a natural match to their mission and beneficiaries; and (3) they built a professional organization and structure around this funding model.

- How Non-Profits Get Really Big

Some common alternatives are outlined here: Ten Non-Profit Funding Models.

Within this framework, I would describe the EA community currently using a hybrid between "Member Motivator" (cultivating membership of many individual donors who feel personally involved with the community - such as the GWWC model) and "Big Bettor" (such as the relationship between Good Ventures and the ecosystem of EA organizations).

Comment by ishaan on How have you become more (or less) engaged with EA in the last year? · 2020-09-10T18:29:26.090Z · EA · GW

This time last year, I started working at Charity Entrepreneurship after having attended the 2019 incubation program (more about my experience here). I applied to the 2019 incubation program after meeting CE staff at EAG London 2018. Prior to that, my initial introduction to EA was in 2011 via LessWrong, and the biggest factor in retaining my practical interest sufficiently to go to a conference was that I was impressed by the work of GiveWell. The regular production of interesting content by the community also helped remind me about it over the years. 80k's career advice also introduced me to some concepts (for example replaceability) which may have made a difference.

Going forward I anticipate more engagement with both EA specifically and the concept of social impact more generally. Working at CE has given me a better practical understanding of how to maximize impact than I had before, as well as more insight into how to leverage the EA community specifically towards achieving impact (whereas my prior involvement consisted mostly of reading and occasionally commenting).

Comment by ishaan on Are there any other pro athlete aspiring EAs? · 2020-09-08T19:19:05.103Z · EA · GW

It's a cool idea! Athletes do seem to have a lot of very flexible and general-purpose fundraising potential, and I think it makes a lot of sense to try to direct it effectively. Charity Entrepreneurship (an incubation program for founding effective non-profits) works with Player's Philanthropy Fund (a service which helps athletes and other entities create dedicated funds that can accept tax-deductible contributions in support of any qualified charitable mission) to help our new charities get off the ground before they have completed the fairly complex process of formally registering as non-profits. You can actually see us on the roster, alongside various athletes. This doesn't mean we are actually working with athletes - we are just using some of the same operations infrastructure - but it might be a useful thing to know. In general I've noticed that there is quite a bit of infrastructure similar to PPF aimed at helping athletes do charitable fundraising, which I think is a good sign that this idea is promising.

Comment by ishaan on The community's conception of value drifting is sometimes too narrow · 2020-09-04T21:12:27.320Z · EA · GW

I think that what is causing some confusion here is that "value drift" is (probably?) a loanword from AI-alignment which (I assume?) originally referred to very fundamental changes in goals that would unintentionally occur within iterative versions of self improving intelligences, which...isn't really something that humans do. The EA community borrowed this sort of scary alien term and is using it to describe a normal human thing that most people would ordinarily just call "changing priorities".

A common sense way to say this is that you might start out with great intentions, your priorities end up changing, and then your best intentions never come to life. It's not that different from when you meant to go to the gym every morning...but then a phone call came, and then you had to go to work, and now you are tired and sitting on the couch watching television instead.

Logistically, it might make sense to do the phone call now and the gym later. The question is: "Will you actually go to the gym later?" If your plan involves going later, are you actually going to go? And if not, maybe you should reschedule this call and just go to the gym now. I don't see it as a micro death that you were hoping to go to the gym but did not; it's that over the day other priorities took precedence and then you became too tired. You're still the same person who wanted to go... you just ...didn't go. Being the person who goes to the gym requires building a habit and reinforcing the commitment, so if you want to go then you should keep track of which behaviors cause you to actually go and which behaviors break the habit and lead to not going.

Similarly you should track "did you actually help others? And if your plan involves waiting for a decade ...are you actually going to do it then? Or is life going to have other plans?" That's why the research on this does (and ought to) focus on things like "are donations happening", "is direct work getting done" and so on. Because that's what is practically important if your goal is to help others. You might argue for yourself "it's really ok, I really will help others later in life" or you might argue "what if I care about some stuff more than helping others" and so on, but I think someone who is in the position of attempting to effectively help others in part through the work of other people (whether through donations or career or otherwise) over the course of decades should to some degree consider what usually happens to people's priorities in aggregate when modeling courses of action.

Comment by ishaan on Book Review: Deontology by Jeremy Bentham · 2020-08-18T00:11:11.010Z · EA · GW

Cool write up!

Before I did research for this essay, I envisioned Bentham as a time traveller from today to the past: he shared all my present-day moral beliefs, but he just happened to live in a different time period. But that’s not strictly true. Bentham was wrong about a few things, like when he castigated the Declaration of Independence

Heh, I would not be so sure that Bentham was wrong about this! It seems like quite a morally complex issue to me and Bentham makes some good points.

what was their original, their only original grievance? That they were actually taxed more than they could bear? No; but that they were liable to be so taxed...

This line of thought is all quite true. Americans (at least, the free landholders whose interests were being furthered by the declaration) at the time were among the wealthiest people in the world, and paid among the lowest taxes - less taxed than the English subjects. They weren't oppressed by any means; British rule had done them well.

But rather surprising it must certainly appear, that they should advance maxims so incompatible with their own present conduct. If the right of enjoying life be unalienable, whence came their invasion of his Majesty’s province of Canada? Whence the unprovoked destruction of so many lives of the inhabitants of that province?

This too, remains pertinent to the modern discourse. In response to Pontiac's Rebellion, a revolt of Native Americans led by Pontiac, an Ottawa chief, King George III declared all lands west of the Appalachian Divide off-limits to colonial settlers in the Proclamation of 1763.

Americans did not like that. The Declaration of independence ends with the following words:

“He (King George III) has excited domestic insurrections amongst us, and has endeavored to bring on the inhabitants of our frontiers, the merciless Indian savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes, and conditions.”

The Declaration of Independence voided the Proclamation of 1763, which contributed to the destruction of the Native Americans, a fact which is not hindsight but was understood at the time. Notice how indigenous communities still thrive in Canada, where the proclamation was not voided. There is also an argument that slavery was prolonged as a result of it, and that this too is not hindsight but was understood at the time.

Of course, I doubt the British were truly motivated by humanitarian concern, and it's not clear to me from this piece that even Bentham is particularly motivated to worry about the indigenous peoples (vs. just using their suffering as a rhetorical tool to point out the hypocrisy of the out-group where it fits his politics) - you can tell he focuses more on the first economic point than the second humanitarian one. But his critiques would all be relevant had this event occurred today.

Really I think with the hindsight of history, that entire situation is less a moral issue and more a shift in the balance of power between two equally amoral forces - both of whom employed moral arguments in their own favor, but only one of which won and was subsequently held up as morally correct.

I think the lesson to be learned here might be less that Bentham was ahead of his time, and more that we are not as "ahead" in our time as we might imagine - e.g. we continue to teach everyone that stuff which was bad is good, and we continue to justify our violence in similar terms. One thing I've noticed in reading old writings is that many people knew that what was going on was bad and that history would frown upon it, but they continued to do it (e.g. Jefferson's and many others' writings on slavery largely condemn it, but they kept doing it more or less because that was the way things were done, which is also not unlike today).

Comment by ishaan on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-07T00:29:28.890Z · EA · GW

Idk but in theory they shouldn't, as pitch is sensed by the hairs on the section of the cochlea that resonates at the relevant frequency.

Comment by ishaan on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T19:34:23.851Z · EA · GW

A forum resource on ToC in research which I found insightful: Are you working on a research agenda? A guide to increasing the impact of your research by involving decision-makers

Should they

Yes, but a ToC doesn't improve impact in isolation (you can imagine a perfectly good ToC for an intervention which doesn't do much). Also, if you draw a nice diagram but it doesn't actually inform any of your decisions or change your behavior in any way, then it hasn't really done anything. A ToC is ideally combined with cost-benefit analyses, comparison of multiple avenues of action, etc., and it should pay you back in the form of generating some concrete, informative actions - e.g. consulting stakeholders to check your research questions, and generally creating checkpoints at which you are trying to get measurements, indicators, and opinions from relevant people.

For more foundational and theoretical questions where the direct impact isn't obvious, there may be a higher risk of drawing a diagram which doesn't do anything. I think there are ways to avoid this: understand the relevance of your research to other (ideally more practical) researchers you've spoken to about it, such as through a peer review process; make a conceptual map of where your work fits in with other ideas which then lead to impact; and try to get as close to the practical level as you realistically can. If it's really hard to tie it to the practical level, that is sometimes a sign that you might need to re-evaluate the activity.

Do they

Back in academia, I didn't even know what a "theory of change" was, so I think not. But one is frequently asked to state the practical and theoretical value of one's research, and the peer review and grant writing process implicitly incorporates elements of stakeholder relevance. However, as an academic, if you fail to make your own analyses separately from this larger infrastructure, you may end up following institutional priorities (of grant makers, of academic journals, etc.) which differ from "doing the most good" as you conceptualize it.

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T02:24:33.503Z · EA · GW

The tricky part of social enterprise from my perspective is that high impact activities are hard to find, and I figure they would be even harder to find when placed under the additional constraint that they must be self-sustaining. Which is not to say that you might not find one (see here and here), just that finding an idea that works is arguably the trickiest part.

for-profit social enterprises may be more sustainable because of a lack of reliance on grants that may not materialise;

This is true, but keep in mind that while impact via social enterprise may be "free" in terms of funding (so very cost-effective), it comes with opportunity costs in terms of your time. When you generate impact via social enterprise, you are essentially your own funder. Therefore, for a social enterprise to beat your earning-to-give baseline, its net impact must exceed the good you would have done by donating to a GiveWell top charity on a high-earning path. (This is of course also true for non-profit/other direct work paths.) Basically, social enterprises aren't "free" (since your time isn't free), so it's a question of finding the right idea and then also deciding whether the restrictions inherent in trying to be self-sustaining are easier than the restrictions (and funding counterfactuals) inherent in getting external funding.
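As a toy sketch of that comparison (every figure below is a made-up placeholder for illustration, not an estimate of any real charity, salary, or venture):

```python
# Sketch of the opportunity-cost framing above: a social enterprise only
# "beats" earning-to-give if its net impact exceeds what your forgone
# donations would have bought. All numbers are hypothetical.

annual_donation = 50_000       # what you'd donate each year on a high-earning path
cost_per_unit_impact = 5_000   # assumed cost per "unit" of impact at a top charity
etg_baseline = annual_donation / cost_per_unit_impact  # 10.0 units/year

venture_net_impact = 8.0       # assumed annual impact units of the venture

print(venture_net_impact > etg_baseline)  # False with these placeholders
```

The point isn't the specific numbers - it's that the venture's impact has to be compared against the counterfactual donations, not against zero.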

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T03:24:12.062Z · EA · GW
However, I'm sceptical of charity entrepreneurship's ability to achieve systemic change - I'd probably (correct me if I'm wrong) need a graduate degree in economics to tackle the global economic system.

It might plausibly be helpful to hire staff who have graduate degrees in economics, but I think you would not necessarily need a graduate degree in economics yourself in order to start an organization focused on improving economic policy. Of course it's hard to say for sure until it's tried - but there's a lot that goes into running an organization, and it takes many different skills and types of people to make it come together. Domain expertise is only one part of it. A lot of great charities (e.g. GiveWell, AMF) were started by people who didn't enter with domain expertise or related degrees. (None of which is to say that economics isn't a strong option for a variety of paths, only that you shouldn't put the path of starting an organization in the "I need a degree first" box.)

(As for my opinion more generally, I do think that social entrepreneurship would under-perform relative to purely EtG (if you give to the right place), and also under-perform relative to focused non-profit or policy work (if you work on the right thing), because it has to simultaneously turn profit and achieve impact, which really limits the flexibility to work on the higher impact things. But it primarily depends on what specifically you're working on, in every case.)

Comment by ishaan on Where is it most effective to found a charity? · 2020-07-06T16:49:45.036Z · EA · GW

I've never done this myself, but here are some bits of info I've absorbed through osmosis by working with people who have.
- Budget about 50-100 hours of work for registration. Not sure which countries require more work in this regard.
- If you're working with a lot of international partners, some countries have processes that are more recognized than others. The most internationally well-known registration type is America's 501(c)(3) - which means that even if you were to work somewhere like India, people are accustomed to working with 501(c)(3) and know the system. This is less important if you aren't working with partners.
- If you are planning to get donations from mostly individuals, consider where those individuals are likely to live and what the laws regarding tax deductibility are. Large grantmakers are more likely to be location agnostic.
- You don't need to live where you register, but if you want to grant a work visa to fly in an employee to a location, generally you will need to be registered in that location.

If you're interested in starting a charity you should consider auditing Charity Entrepreneurship's incubation program, and apply for the full course next year. The audit course will have information about how to pick locations for the actual intervention (which usually matters more for your impact than where you register). The full course for admitted students additionally provides guidance and support for operations/registration type stuff.

Comment by ishaan on EA Forum feature suggestion thread · 2020-06-28T13:02:17.988Z · EA · GW

I posted some things in this comment, and then realized the feature I wanted already existed and I just hadn't noticed it - which brings to mind another issue: how come one can retract, overwrite, but not delete a comment?

Comment by ishaan on Dignity as alternative EA priority - request for feedback · 2020-06-26T14:00:48.236Z · EA · GW
What evidence would you value to help resolve what weight an EA should place on dignity?

Many EAs tend to think that most interventions fail, so if you can't measure how well something works, chances are high that it doesn't work at all. To convince people who think that way, it helps to have a strong justification for incorporating a metric which is harder to measure over well-established and easier-to-measure metrics such as mortality and morbidity.

In the post on happiness you linked by Michael, you'll notice that he has a section comparing subjective well-being to traditional health metrics. A case is made that improving health does not necessarily improve happiness. This is important, because death and disability are easier to measure than things like happiness and dignity, so if it's a good proxy it should be used. If it turned out that the best way to improve dignity is e.g. preventing disability, then in light of how much easier disability prevention is to measure, it would not be productive to switch focus. (Well, maybe. You might also take a close association between metrics as a positive sign that you're measuring something real.)

To get the EA community excited about a new metric, if it seems realistically possible then I'd recommend following Michael's example in this respect. After establishing a metric for dignity, try to determine how well existing top GiveWell interventions do on it, see what the relationship is with other metrics, and then see if there are any interventions that plausibly do better.

I think this could plausibly be done. I think there's a lot of people who favor donations to GiveDirectly because of the dignity/autonomy angle (cash performs well on quite a few metrics and perspectives, of course) - I wouldn't be surprised if there are donors who would be interested in whether you can do better than cash from that perspective.

Comment by ishaan on EA considerations regarding increasing political polarization · 2020-06-25T14:42:10.619Z · EA · GW
Why effective altruists should care

Opposing view: I don't think these are real concerns. The Future of Animal Consciousness Research citation boils down to "what if research in animal cognition is one day suppressed due to being labeled speciesist" - that's not a realistic worry. The Vox thinkpiece emphasizes that we are in fact efficiently saving lives - I see no critiques there that we haven't also internally voiced to ourselves as a community. I don't think it's realistic to expect coverage of us not to include these critiques, regardless of political climate. According to Google search, the only folks even discussing that paper are long-termist EAs. I don't think AI alignment is any more politically polarized except as a special case of "vague resentment towards Silicon Valley elites" in general.

Sensible people on every part of the political spectrum will agree that animal and human EA interventions are good or at least neutral. The most controversial it gets is that people will disagree with the implication that they are the best ways to do good... and why not? We internally often disagree on that too. Most people won't understand AI alignment enough to have an opinion beyond vague ideas about tech and tech people. Polarization is occurring, but none of this constitutes evidence regarding political polarization's potential effect on EA.

Comment by ishaan on EA and tackling racism · 2020-06-16T20:09:14.154Z · EA · GW

a) Well, I think the "most work is low-quality aspect" is true, but also fully-general to almost everything (even EA). Engagement requires doing that filtering process.

b) I think seeking not to be "divisive" here isn't possible - issues of inequality on global scales and ethnic tension on local scales are in part caused by some groups of humans using violence to lock another group of humans out of access to resources. Even for me to point that out is inherently divisive. Those who feel aligned with the higher-power group will tend to feel defensive and will wish not to discuss the topic, while those who feel aligned with lower-power groups as well as those who have fully internalized that all people matter equally will tend to feel resentful about the state of affairs and will keep bringing up the topic. The process of mind changing is slow, but I think if one tries to let go of in-group biases (especially, recognizing that the biases exist) and internalizes that everyone matters equally, one will tend to shift in attitude.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:59:58.533Z · EA · GW
I've seen a lot of discussion of criminal justice reform

Well, I do think discussion of it is good, but if you're referring to resources directed to the cause - it's not that I want EAs to re-direct resources away from low-income countries to instead solve disparities in high-income countries, and I don't necessarily consider this related to the self-criticism-as-a-community issue. I haven't really looked into this issue, but on prior intuition I'd be surprised if American criminal justice reform compares very favorably in terms of cost-effectiveness to e.g. GiveWell top charities, reforms in low-income countries, or reforms regarding other issues. (Of course, prior intuitions aren't a good way to make these judgements, so right now that's just a "strong opinion, weakly held".)

My stance is basically no on redirecting resources away from basic interventions in low income countries and towards other stuff, but yes on advocating that each individual tries to become more self-reflective and knowledgeable about these issues.

I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias

I agree, that's not an example of bias. This is one of those situations where a word gets too big to be useful - "supportive of capitalism" has come to stand for a uselessly large range of concepts. The same person might be critical about private property, or think it has sinister/exploitative roots, and also support sensible growth focused economic policies which improve outcomes via market forces.

I think the fact that EA has common sense appeal to a wide variety of people with various ideas is a great feature. If you are actually focused on doing the most good you will start becoming less abstractly ideological and more practical and I think that is the right way to be. (Although I think a lot of EAs unfortunately stay abstract and end up supporting anything that's labeled "EA", which is also wrong).

My main point is that if someone is serious about doing the most good, and is working on a topic that requires a broad knowledge base, then a reasonable understanding of the structural roots of inequality (including how gender and race and class and geopolitics play into it) should be one part of their practical toolkit. In my personal opinion, while a good understanding of this sort of thing generally does lead to a certain political outlook, it's really more about adding things to your conceptual toolbox than it is about which -ism you rally around.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:51:34.269Z · EA · GW
What are some of the biases you're thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?

The longer answer: I am not sure how to give a productive answer to this question. In the classic "cognitive bias" literature, people tend to immediately accept that the biases exist once they learn about them (...as long as you don't point them out right at the moment they are engaged in them). That is not the case for these issues.

I had to think carefully about how to answer because (when speaking to the aforementioned "randomly selected people who went to prestigious universities", as well as when speaking to EAs) such issues can be controversial and trigger defensiveness. These topics are political and cannot be de-politicized; I don't think there is any bias I can simply state that isn't going to be upvoted by those who agree and dismissed as a controversial political opinion by those who don't already agree, which isn't helpful.

It's analogous to if you walked into a random town hall and proclaimed "There's a lot of anthropomorphic bias going on in this community, for example look at all the religiosity" or "There's a lot of species-ism going on in this community, look at all the meat eating". You would not necessarily make any progress on getting people to understand. The only people who would understand are those who know exactly what you mean and already agree with you. In some circles, the level of understanding would be such that people would get it. In others, such statements would produce minor defensiveness and hostility. The level of "understanding" vs "defensiveness and hostility" in the EA community regarding these issues is similar to that of randomly selected prestigious university students (that is, much more understanding than the population average, but less than ideal). As with "anthropomorphic bias" and as with "speciesism", there are some communities where certain concepts are implicitly understood by most people and need no explanation, and some communities where they aren't. It comes down to what someone's point of view is.

Acquiring an accurate point of view, and moving a community towards an accurate point of view, is a long process of truth seeking. It is a process of un-learning a lot of things that you very implicitly hold true. It wouldn't work to just list biases. If I start listing out things like (unfortunately poorly named) "privilege-blindness" and (unfortunately poorly named) "white-fragility", I doubt it's going to have any positive effect other than to make people who already agree nod to themselves, while other people roll their eyes, and other people google the terms and then roll their eyes. Criticizing things such that something actually goes through is pretty hard.

The productive process involves talking to individual people, hearing their stories, having first-hand exposure to things, reading a variety of writings on the topic and evaluating them. I think a lot of people think of these issues as "identity political topics" or "topics that affect those less fortunate" or "poorly formed arguments to be dismissed". I think progress occurs when we frame-shift towards thinking of them as "practical every day issues that affect our lives", and "how can I better articulate these real issues to myself and others" and "these issues are important factors in generating global inequality and suffering, an issue which affects us all".

Comment by ishaan on EA and tackling racism · 2020-06-14T19:49:49.161Z · EA · GW
What are some of the biases you're thinking of here?

This is a tough question to answer properly, both because it is complicated and because I think not everyone will like the answer. There is a short answer and a long answer.

Here is the short answer. I'll put the long answer in a different comment.

Refer to Sanjay's statement above

There are some who would argue that you can't tackle such a structural issue without looking at yourselves too, and understanding your own perspectives, biases and privileges...But I worried that tackling the topic of racism without even mentioning the risk that this might be a problem risked seeming over-confident.

At time of writing, this is sitting at negative-5 karma. Maybe it won't stay there, but this innocuous comment was sufficiently controversial that it's there now. Why is that? Is anything written there wrong? I think it's a very mild comment pointing out an obviously true fact - that communities should also be self-reflective and self-critical when discussing structural racism. Normally EAs love self-critical, skeptical behavior. What is different here? Even people who believe that "all people matter equally" and "racism is bad" are still very resistant to having self-critical discussions about it.

I think that understanding the psychology of defensiveness surrounding the response to comments such as this one is the key to understanding the sorts of biases I'm talking about here. (And to be clear - I don't think this push back against this line of criticism is specific to the EA community, I think the EA community is responding as any demographically similar group would...meaning, this is general civilizational inadequacy at work, not something about EA in particular)

Comment by ishaan on EA and tackling racism · 2020-06-10T20:27:07.521Z · EA · GW

I broadly agree, but in my view the important part to emphasize is what you said on the final thoughts (about seeking to ask more questions about this to ourselves and as a community) and less on intervention recommendations.

Is EA really all about taking every question and twisting it back to malaria nets ...?... we want is to tackle systemic racism at a national level (e.g. in the US, or the UK).

I bite this bullet. I think you do ultimately need to circle back to the malaria nets (especially if you are talking more about directing money than about directing labor). I say this as someone who considers myself as much a part of the social justice movement as I do part of the EA movement.

Realistically, I don't think it's plausible that tackling stuff in high income countries is going to be more morally important than malaria net-type activities, at least when it comes to fungible resources such as donations (the picture gets more complex with respect to direct work, of course). It's good to think about what the cost-effective ways to improve matters in high income countries might be, but realistically I bet that once you start crunching numbers you will find that malaria-net-type activities should still be the top priority by a wide margin if you are dealing with fungible resources. I think the logical conclusions of anti-racist/anti-colonialist thought converge upon this as well. In my view, the things that social justice activists are fighting for ultimately do come down to the basics of food, shelter, and medical care, and the scale of that fight has always been global, even if the more visible portion generally plays out in one's more local circles.

However, I still think putting thought into how one would design such interventions should be encouraged, because:

our doubts about the malign influence of institutional prejudice...should reach ourselves as well.

I agree with this, and would encourage more emphasis on this. The EA community (especially the rationality/LessWrong part of the community) puts a lot of effort into getting rid of cognitive biases. But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society built upon exploitation, I don't really think the EA community does better than any other randomly selected group of people from a similar demographic (let's say, randomly selected people who went to prestigious universities). And that's kind of weird. We're a group of people who are trying to achieve social impact. We're often people who wield considerable resources and have to work with power structures all the time. It's a bit concerning that the community's level of knowledge of the bodies of work that deal with these issues is just average.

I don't really mean this as a call to action (realistically, given the low current state of awareness, it seems probable that attempting action would result in misguided or heavy-handed solutions). What I do suggest is this: many of you spend some of your spare time reading and thinking about cognitive biases, trying to better understand yourselves and the world, and consider that a worthwhile activity. I think it would be worth applying a similar spirit to really understanding these issues as well.

Comment by ishaan on Effective Animal Advocacy Resources · 2020-05-25T04:33:25.479Z · EA · GW

Super helpful, I'm about to cite this in the CE curriculum :)

Comment by ishaan on Why I'm Not Vegan · 2020-04-10T17:40:04.006Z · EA · GW
I get much more than $0.43 of enjoyment out of a year's worth of eating animal products

I think we would likely not justify a moral offset for harming humans at (by the numbers you posted) $100/year, or eating children at $20/pound ($100/year × 15 years / 75 pounds). This isn't due to sentimentality, deontology, taboo, or biting the bullet - I think a committed consequentialist, one grounded in practicality, would agree that no good consequences would likely come from allowing that sort of thing, and I think that this probably logically applies to meat.
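The implied offset price can be checked with quick arithmetic. This is a minimal sketch using the figures above (the $100/year offset comes from the post being replied to; the ~15 years and ~75 pounds are this comment's own illustrative assumptions):

```python
# Implied "offset price" per pound, using the comment's illustrative numbers.
offset_per_year = 100   # $ per human-year, per the parent post's figures
years = 15              # assumed age of the child
pounds = 75             # assumed weight of the child

price_per_pound = offset_per_year * years / pounds
print(price_per_pound)  # → 20.0, i.e. the $20/pound figure in the comment
```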

I think overall it's better to look first at the direct harm vs direct benefit, and how much you weigh the changes to your own experience against the suffering caused. The offset aspect is not unimportant, but I think it's a bit misleading when not applied evenly in the other direction.

I am sympathetic to morally weighing different animals orders of magnitude differently. We have to do that in order to decide how to prioritize between different interventions.

That said, I don't think human moral instincts for these sorts of cross-species trolley problems are well equipped for numbers bigger than 3-5. Your moral instincts can (I would say, accurately) inform you that you would rather avert harm to a person than to 5 chickens, but when you get into the 1000s you're pretty firmly in torture vs dust specks territory and should not necessarily just trust your instincts. That doesn't mean orders of magnitude differences are wrong, but it does mean they're potentially subject to a lot of bias and inconsistency if not accompanied by some methodology.

Comment by ishaan on Help in choosing good charities in specific domains · 2020-02-20T19:07:53.955Z · EA · GW

Charity Entrepreneurship is incubating new family planning and animal welfare organizations, which will aim to operate via principles of effective altruism - potentially relevant to your interests.

Comment by ishaan on Who should give sperm/eggs? · 2020-02-12T23:37:53.893Z · EA · GW

Since you are asking "who" should do it (rather than whether more or fewer people in general should do it, which seems the more relevant question, since it would carry the bulk of the effect): I would want to replace any anonymous donors with people who are willing to take a degree of responsibility for, and engagement with, the resulting child and their feelings about it. Opinion polls of donor-conceived people make me think there's a reasonable chance they experience negative emotions about the whole thing at non-negligible rates, and it is possible that this might be mitigated by having a social relationship with the donor.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2020-01-17T06:44:51.687Z · EA · GW

Spend some time brainstorming, and compare multiple alternative courses of action and potential hurdles before embarking on one. Consider using a spreadsheet to augment your working memory when you evaluate actions by various criteria. Get a sense of the expected value per unit of time on a given task so you can decide how long it's worth spending on it; enforce this via time capping / time boxing, and if you are working much longer on a given task than you estimated, re-evaluate what you are doing. Time track which task you spend your working hours on to become more aware of time in general. Personally, I don't think I fully appreciated how valuable time was and how much I was sometimes wasting unintentionally before tracking it (although I could see some people finding this stressful).

Of course this is all sort of easier said than done haha. I think to some degree watching other people actually doing things which one is supposed to do helps enforce the habit.

Comment by ishaan on Growth and the case against randomista development · 2020-01-17T06:28:24.021Z · EA · GW

Any discussion of how much it might cost to change a given economic policy / the limiting factor that has kept it from changing thus far?

(I think this is also the big question with health policy)

Comment by ishaan on Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? · 2020-01-13T00:21:50.493Z · EA · GW

"Rejecting" would be a bit unusual, but of course you should honestly advise a well qualified candidate if you think their other career option is higher impact. I think it would be ideal if everyone gives others their honest advice about how to do the most good, roughly regardless of circumstance.

I've only seen a small slice of things, but my general sense is that people in the EA community do in fact live up to this ideal, regularly turning down and redirecting talent as well as funding and other resources towards the thing that they believe does the most good.

Also, although it might ultimately add up to the same thing, I think it brings more clarity to think along the lines of "counterfactual impact" (estimating how much unilateral impact an individual's alternative career choices have) rather than "comparative advantage" which is difficult to assess without detailed awareness of the multiple other actors you are comparing to.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2019-12-16T17:14:34.956Z · EA · GW

I went to the program, was quite impressed with what I saw there, and decided to work at Charity Entrepreneurship.

Before attending the program, as career paths, I was considering academia, earning to give, direct work in the global poverty space, and a few other more offbeat options. After the program, I'd estimate that I've significantly increased the expected value of my own career (perhaps by 3x-12x or more) in terms of impact by attending the program, thanks to

1) the direct impact of CE itself and associated organizations. I can say that in terms of what I've directly witnessed, there's a formidable level of productive work occurring at this organization. My own level of raw productivity has risen quite a bit by being in proximity and picking up good habits. I'm pretty convinced that this productivity translates into impact, (although on that count, you can evaluate the key assumptions and claims yourself by looking at the cost effectiveness models and historical track record).

2) practical meta-skills I've picked up regarding how to think about personal impact. Not only did I change my mind and update on quite a few important considerations, but there were also quite a few things that I didn't even realize were considerations before attending the program. I think my decision making going forward will be better now.

3) connections and network to other effective altruists, and general knowledge about the effective altruism movement. Prior to attending the program my engagement with the community was on a rather abstract level. Now, if I wanted to harness the EA community to accomplish a concrete action in the global poverty or animal space, I'd know roughly what to do and who to talk to and how to get started.

4) the career capital from program related activities.

Also, I had a good time. If you enjoy skill building and like interacting with other effective altruists, the program is quite fun.

Happy to answer any questions.

Comment by ishaan on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-11-20T13:11:34.932Z · EA · GW

I'm sure there's a better document somewhere addressing these, but I'll just quickly say that people tend to regret starting smoking tobacco and often want to stop, tobacco smoking reduces quality of life, and that smokers often support raising tobacco taxes if the money goes to addressing the (very expensive!) health problems caused by smoking (e.g. this sample, and I don't think this pattern is unique). So I think bringing tobacco taxes in line with recommendations is good under most moral systems, even those which strongly prioritize autonomy - this is a situation where smokers seem to be straightforwardly stating that they'd rather not behave this way.

Eric Garner died because the police approached him on suspicion of selling illegal cigarettes and then killed him - I don't think that's realistically attributable to tobacco taxation.

Comment by ishaan on List of EA-related email newsletters · 2019-10-10T08:42:43.054Z · EA · GW

For global health, don't forget Givewell's newsletter!

For meta, Charity Entrepreneurship has one as well (scroll to the middle of the page for the newsletter)

Comment by ishaan on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-18T19:00:29.858Z · EA · GW
Do you have any opinions that you would be reluctant to express in front of a group of your peers? If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Not necessarily! You might just be less averse to disagreement. Or perhaps you (rightly or wrongly) feel less personally vulnerable to the potential consequences of stating unpopular opinions and criticism.

Or, maybe you did quite a lot of independent thinking that differed dramatically from what you were "told", and then gravitated towards one or more social circles that happen to have greater tolerance for the things you believe, which perhaps one or more of your communities of origin did not.

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-18T17:32:43.036Z · EA · GW

I agree that more people trying to do cost effectiveness analyses is good! I regret that the tone seemed otherwise and will consider it more in the future. I engaged with it primarily because I too often wonder about how one might improve impact outside of impact-focused environments, and I generally find it an interesting direction to explore. I also applaud that you made the core claim clearly and boldly and I would like to see more of that as well - all models suffer these flaws to some degree and it's a great virtue to make clear claims that are designed such that any mistakes will be caught (as described here). Thanks for doing the piece and I hope you can use these comments to continue to create models of this and other courses of action :)

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-17T20:23:03.360Z · EA · GW

I think the biggest improvement would be correcting the fact that this model (accidentally, I think) assumes that improving any arbitrary high-budget charity by 5% is equally as impactful as improving a GiveWell-equivalent charity by 5%. Most charities' impact is orders of magnitude smaller.

You could solve this with a multiplier for the charity's impact at baseline.

If I understand correctly, you figure that if you become a trustee of a charity with a £419668/year budget and improve its cost-effectiveness by 5%, you can divide that gain by 42 hours a year: £419668 × 5% / 42 hours ≈ £500/hour in the value of your donated time. (A style tip - it would be helpful to put the key equation describing roughly what you've done in the description, to make it all legible without having to go into the spreadsheet.)

I think it is fair to say that, were you to successfully perform this feat, you have indeed done something roughly as impactful as providing a £500/hour value to the charity you are trustee-ing for. So, if you improved a Givewell-top-charity-equivalent's cost effectiveness by 5% for a year, then maybe you could fairly take 5% of that charity's yearly budget and divide it by your hours for that year, as you've done, to calculate your Givewell-top-charity-equivalent impact in terms of how it would compare to donated dollars.

But if you improve a £419668/yr budget charity which is only 1% as cost-effective as a GiveWell-top-charity equivalent by 5%, then your hourly impact is 1% × £419668 × 5% / 42 hours ≈ £5/hour of GiveWell-top-charity-equivalent impact - you'd be better served working a bit extra and donating the £5 to GiveWell.
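The proposed adjustment can be sketched in a few lines. This is a minimal illustration with the figures from this thread; the `relative_effectiveness` multiplier (1.0 = GiveWell-top-charity equivalent) and the function name are my own:

```python
# Sketch of the adjusted trustee-time model discussed above.
# relative_effectiveness scales the charity's baseline impact, where
# 1.0 means "as cost-effective as a GiveWell top charity".

def hourly_impact(budget, improvement, hours, relative_effectiveness=1.0):
    """GiveWell-top-charity-equivalent value of trustee time, per hour."""
    return budget * improvement * relative_effectiveness / hours

# Original estimate: £419668 budget, 5% improvement, 42 hours/year.
print(hourly_impact(419_668, 0.05, 42))        # ≈ £500/hour
# Same charity, but only 1% as cost-effective as a top charity:
print(hourly_impact(419_668, 0.05, 42, 0.01))  # ≈ £5/hour
```

The multiplier is the whole point of the correction: without it, the model rewards joining any large-budget charity regardless of how effective it is at baseline.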

I don't think this model has credence even after these adjustments as I'm skeptical of the structure, but you did make those assumptions explicitly which is good. If you think the effect takes ~42 hours/year then this hypothesis is potentially cheap to just test in practice, and then revise your model with more information. Have you joined any boards and tried this in practice, if yes how did it go?

edit - ah, you're using the term "5% increase" very differently.

Instead it assumes a 5% increase, perhaps from £0 of impact to 5% of the annual income or perhaps from 100% of annual income to 105%

So just to be clear, this implies that producing 100% of your annual income in impact would make you the most cost-effective charity in the world (or whatever other benchmark you want to set at "100%"). Used in this sense, a "5% increase" doesn't mean "the shelter saves 5% more kittens" but that the charity as a whole has gone from the long tail of negligible impact to being 1/20th as cost-effective as the most cost-effective charity in the world. This isn't the way percentages are usually expressed, and it seems like a confusing way to express the concept, since the 100% benchmark is arbitrary/unknown - it would be better in that case to express it on an absolute scale rather than as a percentage.

Comment by ishaan on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-21T23:32:44.550Z · EA · GW

brainstorming / regurgitating some random additional ideas -

Goodhart's law - a charity may from the outset design itself or self-modify itself around Effective Altruist metrics, thereby pandering to the biases of the metrics and succeeding in them despite being less Good than a charity which scored well on the same metrics despite no prior knowledge of them. (Think of the difference between someone who has aced a standardized test due to intentional practice and "teaching to the test" vs. someone who aced it with no prior exposure to standardized tests - the latter person may possess more of the quality that the test is designed to measure). This is related to "influencing charities" issue, but focusing on the potential for defeating of the metric itself, rather than direct effects of the influence.

Counterfactuals of donations (other than the matching thing)- a highly cost effective charity which can only pull from an effective altruist donor pool might have less impact than a slightly less cost effective charity which successfully redirects donations from people who wouldn't have donated to a cost effective charity (this is more of an issue for the person who controls talent, direction, and other factors, not the person who controls money).

Model inconsistency - Two very different interventions will naturally be evaluated by two very different models, and some models may inherently be harsher or more lenient on the intervention than others. This will be true even if all the models involved are as good and certain as they can realistically be.

Regression to the mean - The expected value of standout candidates will generally regress to the mean of the pool from which they are drawn, since at least some of the factors which caused them to rise to the top will be temporary (including legitimate factors that have nothing to do with mistaken evaluations)

Comment by ishaan on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-19T05:08:22.452Z · EA · GW

I think this description generally falls in line with what I've experienced and heard secondhand and is broadly true. However, there are some differences between my impression of it and yours. (But it sounds like you've collected more accounts, more systematically, and I've actually only gone up to the M.A. level in grad school, so I'm leaning towards trusting your aggregate)

Peer review is a disaster

I think we can get at better ways than peer review, but also, don't forget that people will sort of inevitably have Feelings about getting peer reviewed, especially if the review is unfavorable, and this might bias them to say that it's unfair or broken. I suspect peer review is neither particularly better nor worse than what you'd get from what is basically a group of people with some knowledge of a topic and some personal investment in the matter having a discussion - it can certainly be a space for pettiness, both from the reviewer and from the reviewed, as well as a space for legitimate discussion.

PIs mostly manage people -- all the real work is done by grad students and postdocs

I think this is sometimes true, but I would not consider this a default state of affairs. I think some, but not all, grad students and post docs can conceive of and execute a good project from start to finish (more, in top universities). However, I think most successful PIs are constantly running projects of their own as well. Moreover, a lot of grad students and post docs are running projects that either the PI came up with, or independently created projects that are ultimately a small permutation within a larger framework that the PI came up with. I do think it sometimes happens that some people believe they are doing all the work and sort of forget the degree of training and underestimate how much the PI is behind the scenes.

management and fundraising (and endless administrative responsibilities bestowed on any tenure-track professor) and can 100% focus on doing science and publishing papers, while getting mentoring from your senior PI and while being helped by all the infrastructure established labs

My impression was actually that grant writing, management, and setting up infrastructure is the bulk of Doing Science, properly understood. (Whereas, I get the impression that this write up sort of frames it as some sort of side show to the Real Work of Doing Science). With "fundraising", the writer of the grant is the one who has to engage in the big picture thinking, make the pitch, and plan the details to a level of rigor sufficient to satisfy an external body. With "infrastructure", one must set up the lab protocols so that they're actually measuring what they are meant to. It's easy to do this wrong, and what's worse, it's easy to do this wrong and not even realize you are doing it wrong and have those mistakes make it all the way up to a nonsensical and wrong publication. I think there is a level of fairly deep expertise involved in setting up protocols. And "management" in this context also involves a lot of teaching people skills and concepts, including sometimes a fair bit of hand-holding during the process of publishing papers (students' first drafts aren't always great, even if the student is very good).

People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor's experiments and is only possible with perfect undergrad GPA, not realizing that neither of these are true if you're truly capable

Very true in one sense - I agree that academia is very forgiving about credentials and gpa relative to other forms of post-graduate education, and people are definitely excited and responsive to being cold contacted by motivated students who will do their own projects. However, keep in mind that if you're planning to work on whatever you want, rather than your adviser's experiments, you will have more trouble fully utilizing the adviser's management/infrastructure/expertise and to a lesser extent grants.

For a unique and individual project, you might have to build some of your infrastructure on your own. This means things may take much longer and are more likely not to work the first few times - all of which is a wonderful learning experience, but it does not always align with the incentive of publishing papers and graduating quickly. I think some fields (especially the ones closer to math) have the sort of "pure researcher" track you have in mind, but it's rare in the social and biological sciences, in part because the most needed people are in fact those with scientific expertise who can train and manage a team, build infrastructure/protocols, fundraise, and set an agenda - I think it would be tough to realistically delegate this to anyone who doesn't know the science.

(But - again, this is only my impression from doing a masters and from conversations I've had with other people. Getting a sense of a whole field isn't really easy and I imagine different regions and so on are very different.)

Comment by ishaan on 'Longtermism' · 2019-08-19T03:34:22.142Z · EA · GW

I think it's worth pointing out that "longtermism" as minimally defined here is not pointing to the same concept that "people interested in x-risk reduction" was probably pointing at. I think the word which most accurately captures what it was pointing at is generally called "futurism" (examples [1],[2]).

This could be a feature or a bug, depending on use case.

  • It could be a feature if you want a word that captures a moral underpinning common to many futurists' intuitions while, as you said, remaining "compatible with any empirical view about the best way of improving the long-run future", or to form a coalition among people with diverse views about the best ways to improve the long-run future.
  • It could be a bug if people started informally using "longtermism" interchangeably with "far futurism", especially if it created a motte-and-bailey style of argument, in which the easily defensible minimal-definition claim that "future people matter equally" was used to respond to skepticism about claims that any specific category of efforts aiming to influence the far future is necessarily more impactful.

If you want to retain the feature of being "compatible with any empirical view about the best way of improving the long-run future" you might prefer the no-definition approach, because criterion ii is not philosophical, but an empirical view about what society currently wrongly privileges.

From the perspective of addressing the "bug" aspect, however, I think criteria ii and iii are good calls. They make some progress in narrowing who counts as a "longtermist", and they specify that it is ultimately a call to a specific action (so e.g. someone who thinks influencing the future would be awesome in theory but is intractable in practice can fairly be said not to meet criterion iii). In general, I think that in practice people are going to use "longtermist" and "far futurist" interchangeably regardless of what definition is laid out at this point. I therefore favor the second approach, with a minimal definition, as it gives a nod to the fact that it's not just a moral stance but also advocates some sort of practical response.

Comment by ishaan on How do you, personally, experience "EA motivation"? · 2019-08-16T21:18:17.015Z · EA · GW

The way I feel when the concept of a person in the abstract is invoked feels like a fainter version of the love I would feel towards a partner, a parent, a sibling, a child, a close friend, and towards myself. The feeling drives me to act in the direction of making them happy, growing their capabilities, furthering their ambitions, fulfilling their values, and so on. In addition to feeling happy when my loved ones are happy, there is also an element of pride when my loved ones grow or accomplish something, as well as fulfillment when our shared values are achieved. When engaging with the concept of abstract people, I can very easily imagine real people - each with a rich life history, unique ways of thinking, a web of connection, and so on...people who I would love if I were to know them. This motivates me to work hard to provide for their well being and growth, to undergo risks and dangers and sacrifices to protect them from harm, to empower and facilitate them in their undertakings, and to secure a future in which they may flourish - in the same ordinary sense that I imagine many other people do for themselves, their children and families, their tribes and nations, all people, all beings, and so on. I feel a sense of being united with all people as we work together to steer the universe towards our shared purpose.

You've italicized "effectively" as part of the question, but I don't think I feel any real distinction between "wanting to help people" and "wanting to help people effectively" - when I'm doing a task, it seems like doing it effectively is rather straightforwardly better than doing it ineffectively. "Effective altruism" does imply a level of impartiality regarding who benefits which I don't possess (since I care about myself, my friends, my family, and so on more than strangers), but it is otherwise the same. Even if I were I only to help people who I directly knew and personally loved in a non-abstract sense, I would still seek to do so effectively.

Comment by ishaan on What posts you are planning on writing? · 2019-07-26T07:57:32.301Z · EA · GW

That very EA survey data, combined with Florida et al.'s The Rise of the Megaregion data characterizing the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence and population size don't seem to matter much.

Comment by ishaan on What posts you are planning on writing? · 2019-07-25T22:33:14.417Z · EA · GW

Here's some stuff which I may consider writing when I have more time. The posts are currently too low on the priorities list to work on, but if anyone thinks one of these is especially interesting or valuable, I might prioritize it higher, or work on it a little when I need a break from my current main project. For the most part I'm unlikely to prioritize writing in the near future though because I suspect my opinions are going to rapidly change on a lot of these topics soon (or my view on their usefulness / importance / relevance).

1) Where Does EA take root? The characteristics of geographic regions which have unusually high numbers of effective altruists, with an eye towards guessing which areas might be fertile places to attempt more growth. (Priority 4/10, mostly because I already have the data from working on another project, but I'm not sure to what extent growth is a priority)

2) Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact-analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (priority 3/10; may move up the list later because I anticipate more data and relevant experience becoming available soon)

3) A (as far as I know, novel) thought experiment meant to complicate utilitarianism, which has produced some very divergent responses when I pose it in conversation. The intention is to call into question what exactly it is that we suppose ought to be maximized. (priority 3/10)

4) How to turn philosophical intuitions about "happiness", "suffering", "preference", 'hedons" and other subjective phenomenological experiences into something which can be understood within a science/math framework, at least for the purposes of making moral decisions. (priority 3/10)

5) Applying information in posts (3) and (4) to make practical decisions about some moral "edge cases". Edge cases include things like: non-human life, computer algorithms, babies and fetuses, coma, dementia, severe brain damage and congenital abnormalities. (priority 3/10)

6) How are human moral and epistemic foundations formed? If you understand the "No Universally Compelling Arguments" set of concepts, this post is basically helping people apply that principle in practical terms referencing real human minds and cultures, integrating various cultural anthropology and post modernist works. (priority 2/10)

Comment by ishaan on Ways Frugality Increases Productivity · 2019-07-19T20:58:35.668Z · EA · GW

I super agree with the title, but I think the text actually really undersells it! Runway not only increases your flexibility to not earn, but also reduces your stress and removes all sorts of psychologically difficult power dynamics that come with having a boss or otherwise being beholden to external factors for your well-being. (Yes, you may still have a boss or external factors, but now you won't need their continued approval or success to pay bills, and that makes all the difference.) Also, frugality enables you to really splurge without worrying when it really counts. Additionally, if you do not have any large and expensive possessions, tend to live in low-cost apartments, and don't have any dependents, you can move to whatever location is most productive for you to be in with little to no overhead - whether that be across town or across the globe. Frugality in an urban context also forces close living situations (housemates), which can dramatically increase your social network. Further, you end up building scrappy skills and habits (e.g. negotiating apartments, meal planning, knowledge of public services, biking) which can really come in handy even when you're not being frugal.

If you have the privilege to be in circumstances where you are able to make money without spending most of it, it's good to take advantage of this if you can. Don't feel bad about it if you can't - it's not always simple or possible for everyone. But if you feel like it would be pretty easy for you to be frugal and you're choosing not to because you think spending a lot more makes you more productive, I strongly suggest reconsidering.

Another point worth considering is that if you are sufficiently frugal, and if "productivity" is truly your goal here, you can "increase your productivity" by taking that money and hiring a second person to work on your project with you. Can all your time-saving expenses increase your productivity more than a whole second person? (I'm sure there are some circumstances for which the answer is yes, but I imagine that is rare.)

Comment by ishaan on Considering people’s hidden motives in EA outreach · 2019-06-01T21:41:14.086Z · EA · GW

You've laid out your opinions clearly. It is well cited, and has interesting and informative accompanying sources. It's a good post. However, I disagree with some portions of the underlying attitudes (even while not particularly objecting to some of the recommended methods).

In an ideal world where all people are rational, the ideas mentioned in this forum post would be completely useless.

The thing is, this is a purely inside view. It sort of presupposes effective altruist ideas are correct, and that the only barrier to widespread adoption is irrationality, rather than any sensible sort of skepticism.

While humans can be irrational in distributing status, there is such a thing as legitimately earned status. If we put on our idealist hats for just a moment and forget all the extremely silly things humans accord status to, status can represent the "outside view" - if institutions we respect seem to respect EA, that should increase our confidence in EA ideas. Not because we're status-climbing apes, but because "capable of convincing me" shouldn't be a person's only bar for trusting an argument. One should sensibly understand the limited scope of one's own judgement regarding big topics.

Now, taking our idealist hats off, obviously we can't just trust what most people think, or consider all "high status" institutions as equally legitimate. We have to be discerning. But there are institutions (such as academia, in my opinion) whose approval matters because it functions as legitimate external validation. It's not just social currency, it's a well-earned social currency. Not only that, it's an opportunity to send our good ideas elsewhere to develop and mutate, as well as an opportunity to allow our bad ideas to be culled.

Unfortunately, people often are much less rational than we’d like to admit. Acknowledging this might be a pragmatic way for EA to improve outreach effectiveness.

The other issue is that when one is forming a broad, high level strategy for engaging in the world, it should feel good. The words one uses should make one feel warm inside, not exasperated at the irrationality of the world and the necessity of stooping to slimy feeling methods to win. Lest anyone irrationally (/s) dismiss this as a "warm fuzzy altruism", in Bosch's linked taxonomy, let me pragmatically (/s) employ an appeal to authority: Yudkowsky has made the same point. If it feels cynical and a touch Machiavellian, it usually will not ultimately produce morally wholesome results. Personally, I think if you want to really convince people, you shouldn't use methods that would make them feel like you tricked them if they knew what you were doing.

Not to mention, it's just sort of impractical for EA to attempt "we know you are irrational and we're not above pushing your irrationality buttons" strategies. EA organizations are generally scrupulous about transparency so that we can hold each other accountable. This means that any cynical outreach attempts will be transparent as well. In general my sense is that idealist institutions can't effectively wield some of these more cynical methods.

Also as a sort of aside, I don't think there's anything irrational about appealing to emotions. The key is to appeal to emotions in a way that we bring out behavior which is a true expression of people's values. Often, when someone has a "bad" ideology, it is emotions of compassion that bring them out of it. Learning to better engage people on an emotional level is not in any way opposed to presenting logical and rational cases for things.

How can EA help people increase their status in a non-cynical way?

By acquiring well-earned legitimacy! Make real positive impacts in areas other people care about. That means you can also help individual effective altruists make real measurable impacts that they can put on their resume and thereby increase their career capital. Create arguments that other intellectuals agree with and cite. Mentor other people and give them skills. Create mechanisms for people to be public about their donations and personal sacrifices they might make to further a cause in a socially graceful way (it inspires others to do the same). These are all things that the Effective Altruist community is currently doing, and it's been working regardless of whether or not people are wearing suits.

What all these methods have in common is that they work with people's rationality (and true altruistic motives), rather than work around their irrationality (and hidden selfish motives) - these are methods that encourage involvement with EA because people are convinced that their personal involvement with EA will help further their (altruistic, but also otherwise) goals. The status-raising effects in these methods are secondary to real accomplishment; they put forth honest signals of competence and skill, which the larger society recognizes because it is actually valuable. The appeals to emotion work via being connected to the reality of actually accomplishing the tasks that those emotions are oriented towards.

So, I would generally agree with your call for EAs to think about more ways to gain legitimacy. I just want to strongly prioritize well-earned legitimacy...whereas this post comes off as though it's largely about gaining less legitimate forms of status. (Perhaps due to an implicit feeling that all status is illegitimate?)

Comment by ishaan on Which scientific discovery was most ahead of its time? · 2019-05-31T01:10:03.894Z · EA · GW

I think part of the "continuity" comes from the fact that things that were "ahead of their time" tended not to be useful yet, and got lost. Or worse: perhaps several people had to independently come up with, support, and learn about an idea before it could actually be adopted, or it just ended up sitting in some tinkerer's basement or a dusty old tome.

So, you can flip this question: Which discoveries and inventions seem to have occurred after their time (e.g. they were technologically possible, the prerequisite ideas were pretty well known, and they would have been immensely useful practically in that time and place) and why didn't civilization get at them before?

Comment by ishaan on There's Lots More To Do · 2019-05-30T23:21:23.492Z · EA · GW

Well, firstly, how much credence should we assign the actual analysis in that post?

Before we begin talking about how we should behave "even if" the cost per life saved is much higher than 5k - is there some consensus as to whether the actual facts and analysis of that post are actually true or even somewhat credible? (separate from the conclusions, which, I agree, seem clearly wrong for all the reasons you said).

As in, if they had instead titled the post "Givewell's Cost-Per-Life-Saved Estimates are Impossibly Low" and concluded "if the cost per life saved estimate was truly that low, we could have already gone ahead and saved all the cheap lives, and the cost would be higher - so there's something deeply wrong here"... would people be agreeing with it?

(Because if so, shouldn't the relevant lower bound for cost per life saved in the impact evaluations be updated if they're wrong, and shouldn't that probably be the central point of discussion?

And if not...we should probably add a note clarifying for any reader joining the discussion late, that we're not actually sure whether the post is correct or not, before going into the implications of the conclusions. We certainly wouldn't want to start thinking that there aren't lives that can be saved at low cost if there actually are)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-30T14:40:06.395Z · EA · GW

I think that's a little unfair. It wasn't just that he had an "unexamined assumption"; he outright declared that solidarity was the best way and named some organizations he liked, with no attempt at estimating or quantifying. And he's critiquing EA, an ideology whose claim to fame is impact evaluations. Can an EA saying "okay that's great, I agree that could be true... but how about having a quantitative impact evaluation... of any kind, at all, just to help cement the case" really be characterized as "whataboutism" / methodology war?

(I don't think I agree with your first paragraph, but I do think it's fair to argue that "but not all readers are in high income countries" is whataboutism until I more fully expand on what I think the practical implications are on impact evaluation. I'm going to save the discussion about the practical problems that arise from being first world centric for a different post, or drop them, depending on how my opinion changes after I've put more thought into it.)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-30T01:00:32.663Z · EA · GW
This is with regards to political ideologies where either the disagreement over fundamental values, or at least basic facts that inform our moral judgements, are irreconcilable. Yet there will also be political movements with which EA can reconcile, as we would share the same fundamental values, but EA will nonetheless be responsible to criticize or challenge, on the grounds those movements are, in practice, using means or pursuing ends that put them in opposition to those of EA.

I'm going to critique Connor's article, and in doing so attempt to "lead by example" in showing how I think critiques of this type are best engaged.

The best way to show solidarity is to strike at the heart of global inequality in our own land.

There are two problems with Connor's article, and they both have to do with this sentence.

The less important problem: Who is the "our" in the phrase "our own land"? We're on the internet, yet Connor just assumes the reader's allegiances, identity, location, etc. Why is everyone who is not in some particular land implicitly excluded from the conversation? Why is "us" not everyone and "our land" not the Earth?

EA is just as guilty of this, for example when people talk about dollars going farther "overseas". This is the internet; donors and academics and direct workers and so on live in every country, so where is "local" and where is "overseas", exactly? For all EA's globalist ambitions, there is this assumption that people who are actually in a low-middle income country aren't a part of the conversation. (I agree with everything the "dollar overseas" article actually says, just to be clear. The problem is what the phrasing means about the assumptions of the writers.)

It's bad when Connor does it and it's bad when effective altruists do it. Yes, we are writing for a specific audience, but that audience is anyone who takes the time to understand EA ideas and can speak the language written. This is part of what I'm talking about when I say that EA makes some very harmful assumptions about who exactly the agents of change are going to be and the scope of who "effective altruists" potentially are. This problem is not limited to EAs, it is widespread.

The problem isn't the phrasing, of course, it's what the phrasing indicates about the writer.

The more important problem - and on this forum, this one is preaching to the choir, of course - is that you can't just assume that your solidarity group is the most effective way to do things. Someone still has to do an impact evaluation on your social movement and the flow of talent and resources through that movement, including the particular activities of any particular organization enacting that movement.

Thus far, effective altruists are at the forefront of actually attempting to do this in a transparent way for altruistic organizations. The expansion to policy change is still in its infancy, but I would not be surprised if impact evaluations of attempted political movements and policy changes begin surfacing at some point.

Nor can you just assume that the best way to do things is local and that people should for some mysterious reason focus on things "in their own lands". Yes, it may in fact be beneficial to be local at times, but you have to actually check - you have to have some reasonable account of why this is the most effective thing for you to do.

Once you agree on certain very basic premises (that all humans are roughly equally important moral subjects, that the results of your actions are important, etc.), I think all effective altruism really asks is that you attempt the process of actually estimating the effect of your use of resources and talent in a rigorous way. This applies regardless of whether your method is philanthropy or collective action.

(What would Connor say if they read my comment? I suspect they would at the very least admit that it was not ideal to implicitly assume their audience like that. But I'd like to think any shrewd supporter of collective action would eventually ask..."Well okay, how do I actually do an impact evaluation of my collective action related plans?" And the result would hopefully be more rigorous and effective collective action, which is more likely to actually accomplish what it was intended to accomplish. I think it's important that the response deconstructed the false dichotomy between "collective action" and "effective altruism". The critic should begin asking: "okay, disagreements aside, what might these effective altruist frameworks for evaluating impact do for me?" and "If I think that this other thing is more effective, how can I quantitatively prove it?")

I think the "less important problem" is related to the "more important problem". For Connor, even if we grant that collective action is the best thing, the implicitly western "us" limits his vision as to what forms collective action could take, and which social movements people like himself might direct money, talent, or other resources towards. (For EAs, I would speculate that the implicit "us" limits our vision in different, more complicated ways, having to do with under-valuing certain forms of human capital in accomplishing EA goals. Just as Connor assumes local is better, I think EAs sometimes just assume certain things about exactly who is well placed to make effective impact - and therefore, who needs EA-oriented advice, resources, education, training, etc. It's a subject I'm still thinking about, and it's the one I hope to write about later.)

Comment by ishaan on Drowning children are rare · 2019-05-29T18:14:37.029Z · EA · GW

I think examining the number of low hanging fruits is important. I'm not yet sure if this analysis is correct, but I too would like to know exactly how many low hanging fruits there are, and exactly how low hanging they are, and whether this information is consistent with EA org's actions. If your analysis is true, people should put more energy into expanding cause areas beyond health stuff.

I think it might be nice if someone attempted a per-intervention spreadsheet / graph estimating how much more expensive the "next marginal life saved / qaly / disease prevented / whatever" would get, with each additional dollar spent...while sort of assuming that that currently existing organizations can successfully scale or new organizations can be formed to handle the issue. (So, sort of like "room for more funding", but focusing instead on the scale of the problem rather than the scale of the organization that deals with the problem). Has someone already done so? I know plenty of people have looked at problem scales in general, but I haven't seen much on predicting the marginal-cost changes as we progress along the scales.

Okay, that said: this last paragraph was in the original post but not the cross-post

As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits. And try to fix the underlying systems problems that got you so confused in the first place.

I think there's potentially a much deeper problem with this statement, which goes beyond any in the impact analysis. Even if one forgets all moral philosophy, disregards all practical analyses, and uses nothing but concrete practical personal experience and a gut sense of right and wrong to guide one's behavior...well, for me at least, that still makes living frugally to conserve scarce resources for others seem like a correct thing to do?

I know people who live in poverty, personally - both in the "below the American poverty line" sense (I guess I'm technically below that line myself in a grad student sort of way, but I know people who are rather more permanently under it), and in the "global poor" sense. Even by blood alone, I'm only two generations removed from people who have temporarily experienced global poverty of the <$2/day magnitude. So for me at least, it remains obvious on a personal face-to-face level that among humans the global poor are the ones who can make best personal use of scarce resources. I imagine there are people whose social circles don't include people in local or global poverty, but that's not an immutable fact of life - one can change that, if one thinks social circles are essential ingredients to making impact.

I don't really agree with the framing of "Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits" as something obviously distinct from helping the global poor. I don't feel like I or my loved ones could never experience global poverty. I feel like I'm part of a community and friendly with people who might directly experience or interact with global poverty. If being a low-info donor doesn't help...are there not things one can do to become a "high-info donor", or a direct worker for that matter?

I think that if I believed similarly to you - and if I understand correctly, you think that abstractions are misleading, that face-to-face community building and support of loved ones and people you actually know is the important thing here, that it's important to build your own models of the world rather than trust more knowledgeable people to do impact evaluations for you, and that it's really hard to overcome deceptive marketing practices by donation seekers - then, rather than claiming that there is no imperative to live frugally and engage with global poverty, I'd advocate that more EAs set some time aside to get some hands-on, face-to-face involvement with the people who generate impact evaluations (or at least, actually read the impact evaluations), that donors spend more time meeting people who do direct work, and that both donors and direct workers spend more time interacting with the supposed direct beneficiaries of their work. That seems really different from saying that the "utilitarian imperative" is wrong. (And maybe you do advocate all these other things as well, I don't mean to imply you don't...but why advocate for just staying within yourself and your circle?)

If there's a lot of misinformation and misleading going on, I do think there are ways to get around that by acting to put oneself in more situations where one has more opportunities for direct experience and building one's own models of the world. Going straight to the idea that you should just take care of yourself and people you currently know seems... a bit like giving up? And even if you don't think a global scope is appropriate, is there not enough poverty within your immediate community and social circle that there remains an urgency to be frugal and use resources to help others?

I just don't see how your analysis, even if totally correct, leads to the conclusion that the imperative to frugality and redistribution is destroyed. I mean, as long as we're calling it "living like a monk": at least some of the actual monks did it for exactly that purpose, in the absence of any explicit utilitarianism, with the people they tried to help largely on a face-to-face basis. It's not an idea that rests particularly heavily on EA foundations or impact evaluations.

(I don't want to be construed as defending frugality in particular. I'm just claiming that the general ethos of redirecting resources to people who may need them more, and the personal frugality that is sometimes motivated by that ethos, are positive... and that the foundations of this do not rely on trusting GiveWell, effective altruism, and so on.)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-29T05:21:50.212Z · EA · GW

This is sort of an off the cuff ramble of an answer for a topic which deserves more careful thinking, so I might make some hand-wavy statements and grand sweeping claims which I will not endorse later, but:

First off, I feel that it's a little unhelpful to frame the question this way. It implicitly forces answers to conflate some fairly separate concepts: 1) The System, 2) leftists, 3) critiques of EA.

Here's a similarly sort of unhelpful way to ask a question:

What are these "cognitive biases" that effective altruist critiques of veganism are seeking to make us aware of?

How would you answer?

Most effective altruists support veganism! The central insight motivating most vegan practices is similar to the central insight of EA. Don't lose sight of that just because some branches of effective altruists think AI risk rather than veganism is the best possible way to go about doing good, and cite cognitive biases as the reason why people might not realize that AI risk is the top priority.

Cognitive biases are a highly useful but fully generalizable concept that can be used to support or critique literally anything. You should seek to understand cognitive biases in their own right...not only in the light of how someone has used them to form a "critique of veganism" by advocating for AI risk instead.

That's how you'd answer, right? So, in answer to your question:

What exactly is the system EA's (leftist) critics are seeking to change?

Most ideologically consistent leftists support EA, or would begin supporting it once they learn what it is. Utilitarianism / widening of the moral circle is very similar to ordinary lefty egalitarianism. Don't lose sight of that just because some branches of the left don't think particular EA methods are the best possible way to save the world, and cite Failure to Challenge the System as the reason.

The System is a highly useful but fully generalizable concept that can be used to support or critique literally anything. You should seek to understand it in its own right...not only in the light of how someone might invoke it to form a "critique of (non-systemic) effective altruism" by advocating for systemic change instead.

I hope this analogy made my point - this question implicitly exaggerates a very minor conflict, setting up an oppositional framework which does not really need to exist.

...okay, so to actually attempt to answer the question rather than subvert it. Please note that the following are not my own views, but a fairly off-the-cuff representation of my understanding of a set of views that other people hold. Some of these are "oversimplified" versions of views that I do roughly hold, while others are views that I think are false or misguided.

What is the system?: Here's one oversimplified version of the story: from the lower to upper paleolithic, egalitarian hunter gatherers gradually depleted the natural ecology. Prior to the depletion, generally most able bodied persons could easily provide for themselves and several dependents via foraging. Therefore, it was difficult for anyone to coerce anyone else, no concepts of private property were developed, and people weren't too fussy about who was related to whom.

In the neolithic, the ecology was generally getting depleted and resources were getting scarce. Hard work and farming became increasingly necessary to survive and people had incentive to violently hoard land, hoard resources, and control the labor of others. "The System" is the power structures that emerged thereby. It includes concepts of private property, slavery, marriage (which was generally a form of slavery), social control of reproduction, social control of sex, caste, class, racism, etc - all mechanisms ultimately meant to justify the power held by the powerful. Much like cognitive biases, these ideas are deeply built into the way all of us think, and distort our judgement. (E.g. do you believe "stealing" is wrong? Some might argue that this is the cultural programming of The System talking. Without conceptions of property, there can be no notion of stealing)

Despite resource scarcity declining due to tech advance, the bulk of human societies are still operating off those neolithic power hierarchies, and the attending harmful structures and concepts are still in place. "Changing the system" often implies steps to re-equalizing the distribution of power and resources, or otherwise dismantling the structures that keep power in the hands of the powerful.

By insisting that the circle of moral concern includes all of humanity (at least), and actively engaging in a process which redistributes resources to the global poor, effective altruists would generally be considered a positive contribution to the dismantling of "The System". I do think the average leftist would think effective altruism, properly pitched, is generally a good idea - as would the average person regardless of ideology, realistically, if you stuck to the basic premises and didn't get too far into some of the more unusual conclusions they are sometimes taken to.

So how come some common left critiques of EAs invoke "The System"?:

Again, I don't (entirely) agree with all these views, I'm explaining them.

1) Back when the public perception of EA was that it was about "earning to give" and "donating"...especially when it seemed like "earning to give" meant directing your talent to extractive corporate institutions, the critique was that donations do not actually alter the system of power. Consider that a feudal lord may "give" alms to the serf out of noblesse oblige, but the fundamentally extractive relationship between the lord and serf remains unchanged. I put "give" in quotes because, if you really want to understand The System, you have to stop implicitly thinking of the "lord's" "ownership" of the things they "nobly" "give" to the "serf" as in any way legitimate in the first place. The lord and serf may both conceptualize this exchange as the lord showing kindness towards the serf, but the reality is that the lord, or his ancestors, actually created and perpetuated the situation in the first place. Imagine the circularity of the lord calculating he had made a magnanimous "impact" by giving the serf a bit of the gold... that was won by trading the grain which the serf had toiled for in the first place. Earning to give is a little reminiscent of this...particularly in fields like finance, where you're essentially working for the "lord" in this analogy.

2) Corporate environments maximize profit. Effective altruists maximize impact. As both these things are ultimately geared towards maximizing something that ultimately boils down to a number, effective altruist language often sounds an awful lot like corporate language, and people who "succeed" in effective altruism look and sound an awful lot like people who "succeed" in corporate environments. This breeds a sense of distrust. There's a long history within leftism of groups of people "selling out" - claiming to try to change the system from inside, but then turning their backs on the powerless once they got power. To some degree, this similarity may create distasteful perceptions of a person's "value" within effective altruism that is analogous to the distasteful perception of a person's "value" in a capitalist society. (E.g. capitalist society treats people who are good at earning money as sort of morally superior. Changing "earning money" to "causing impact" can cause similarly wrong thinking)

3) EAs to some extent come off as viewing the global poor as "people to help" rather than "people to empower". The effective altruist themself is viewed as the hero and agent of change, not the people they are helping. There is not that much discussion of the people we are helping as agents of change who might play an important part in their own liberation. (This last one happens to be a critique I personally agree with fairly wholeheartedly, and plan to write more on later)

To the extent the systemic change criticism of EA is incorrect, as EA enters the policy arena more and more, we will once again come in friction with leftist (and other political movements), unlike EA has since its inception. The difference this time is we would be asserting the systemic change we're pursuing is more effective (and/or in other ways better) than the systemic change other movements are engaging in. And if that's the case, I think EA needs to engage the communities of our critics just as critically as they have engaged us. This is something I've begun working on myself.

I would strongly recommend not creating a false dichotomy between "EA" and "Leftists", and setting up these things as somehow opposed or at odds. I'm approximately an EA. I'm approximately a leftist. While there are leftist-style critiques of EA, and EA-style critiques of leftism, I wouldn't say that there's any particular tension between these frameworks.

There is really no need to draw lines and label things according to ideology in that manner. I think the most productive reply to an "X-ist" critique of EA is an X-ist support of EA, or better yet, a re-purposing of EA to fulfill X-ist values. (Yes, there are some value systems for which this cannot work...but the egalitarian left is definitely not among them.)

to the extent the systemic change criticism of EA is correct, EA should internalize this criticism, and should effectively change socioeconomic systems better than leftists ever expected from us


And to that I would add: don't needlessly frame EA as fundamentally in opposition to anyone's values. EA can be a framework for figuring out strategic ways to fulfill your values regardless of what those values are. (Up to a point - but again, "leftists" are well within the pale of that point.)

...and perhaps better than leftist political movements themselves (lots of them don't appear to be active or at least effective in actually changing "the system" they themselves criticize EA for neglecting).

Well, I think this is an unhelpful tone. It is, again, setting up EA as something different from and better than leftism, rather than as a way for us to fulfill our values - even if our values aren't all exactly the same as each other's. This isn't particular to leftism. If you wanted the members of a church congregation to donate to GiveWell, you should focus on shared values of charity, not on "EAs could save more souls than Christianity ever could". The goal for EA is not to engage against other ideologies; the goal (to the extent that EA ideas are good and true, which obviously they may not all be) is to become part of the fabric of common sense by which other ideologies operate and try to perpetuate their goals.

Beyond the tone, it's also just not true, in my opinion. It seems to me that social change does in fact occur due to political movements, all the time. What's more, I'm pretty sure that the widespread acceptance of the basic building-block concepts of effective altruism (such as the idea that all people are equally important) is largely due to these leftist social movements. I don't think it's a stretch to say that EA itself is at least in part a product of these social movements.

Comment by ishaan on Drowning children are rare · 2019-05-29T04:09:29.169Z · EA · GW

That was my first question too, but I think I figured out the answer? Maybe? (Let me know if I got this right, BenHoffman?)

BenHoffman's central claim is not that people aren't suffering preventable diseases. It is only that "drowning children" (a metaphor for people who can be saved with a few thousand dollars) are rare.

So they're questioning: if the current price of saving a life is so low, and the amount of available funding so high, why hasn't all that low-hanging fruit of saving "drowning children" been funded already? And if it has been, shouldn't the marginal price be higher by now?

And the answer supposedly can't be "there are simply too many low-hanging fruits, too many drowning children". If you assume that all low-hanging fruit is related to communicable, maternal, neonatal, and nutritional diseases, there's a maximum of ten million fruits per year (low-hanging or not), and the assumption most generous to the "there are just too many low-hanging fruits for us to pick them all, and that's why the price remains low" position is that all possible fruits are low-hanging - that is, that they're all available at the marginal price. The claim is that if you were truly purchasing all the low-hanging lives saved, and your budget was that high, the marginal price should have gone up by now, because you would already have bought up all the cheap ways of saving lives.
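The arithmetic behind this pricing argument can be sketched with deliberately hypothetical numbers. Only the ten-million-deaths-per-year ceiling comes from the discussion above; the $3,000 cost per life and the $10B funding pool are made-up placeholders standing in for "a few thousand dollars" and "the amount of available funding":

```python
# Hypothetical placeholder figures - not numbers from the original post.
cost_per_life = 3_000                    # "a few thousand dollars" per life (USD)
available_funding = 10_000_000_000       # assumed committed funding pool (USD)
max_cheap_deaths_per_year = 10_000_000   # generous cap on annual "low-hanging fruit"

# Lives the funding pool could buy at the quoted marginal price:
purchasable = available_funding // cost_per_life

# Share of even the most generous annual supply of cheap "fruit"
# that the pool alone could purchase:
share_of_annual_supply = purchasable / max_cheap_deaths_per_year
```

On these made-up numbers, the pool could buy roughly a third of even the most generous possible annual supply of cheaply saveable lives, so a few years of sustained spending should exhaust the cheap interventions and push the marginal price up - which is the tension the argument is pointing at.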

(I'm just exploring the thought process behind this particular subsection of the analysis, which is not to be taken as agreement with the overall argument, in whole or in part.)

Comment by ishaan on How to improve your productivity: a systematic approach to sustainably increasing work output · 2019-05-28T20:06:48.389Z · EA · GW

I haven't tried a mini-stepper! Next time I'm at the gym I'll check if they have one I can try. Even if it does not work as well, it would certainly be a lot cheaper and more portable.

Untested Speculation: People using steppers/bikes etc. might stop exerting conscious attention to move once they get sufficiently absorbed in their work. A special property of treadmills is that if you stop, you'll be carried backwards and away from your keyboard - this trains you out of stopping pretty instantly. Steppers/bikes/etc wouldn't automatically have this property - though perhaps one could mimic the training by adding a "don't stop!" signalling noise or something. Ultimately I think it's probably important that the movement not require much conscious attention.

Comment by ishaan on EA Survey 2018 Series: Cause Selection · 2019-05-23T06:06:11.405Z · EA · GW

Effective Givers.pdf?dl=0

This isn't really what I was looking for, but it's an "online national sample of Americans" polled on giving to deworming vs. Make-A-Wish and the local choir. I'm hoping to find something more focused on the diversity of causes within EA, and on better-defined, more adjacent populations.

I mentioned college professors above, but I can think of lots of different populations, e.g. "students from specific colleges", "members of adjacent online forums", "startup founders", "Doctors Without Borders people", "Teach For America people", or even "non-EA friends and relatives of EAs", which might be illustrative as points of comparison - some easier to poll than others. Generally I think the most useful data comes from those who are representative of people already sort of adjacent to EA, who represent key institutions, and whose buy-in would be most practically useful for movement building over decades, which is why I went for "college professors" first.