Posts

Update On Six New Charities Incubated By Charity Entrepreneurship 2020-02-27T05:20:18.346Z

Comments

Comment by ishaan on What should CEEALAR be called? · 2021-06-16T02:03:56.140Z · EA · GW

I thought "EA hotel" was pretty great as a straightforward description; good substitutes might have a word for "EA" and a word for "hotel". So like:

Bentham's Base
Helpers' House

Swap with Lodge, Hollow, Den if alliteration is too cute
 e.g. "Bentham's House", "Bentham's Lodge" both sound pretty serious.

Or just forget precedent and brand something new e.g. Runway (or Runway Athena)

Some "just kidding" alliterative options that I couldn't resist:
Crypto crib, Prioritization Place, Utilitarian's Union, Consequentialist Club, Greg's iGloo

Comment by ishaan on EA is a Career Endpoint · 2021-05-20T01:40:15.777Z · EA · GW

What would it take to get the information that people like you, MichaelA, and many others have, compile it into a continually maintained resource, and get it into the hands of the people who need it?

I guess the "easy" answer is "do a poll with select interviews", but otherwise I'm not sure. I guess it would depend on which specific types of information you mean? To some degree, organizations will state what they want and need in outreach. If you're referring to advice like what I said re: "indicate that you know what EA is in your application", a compilation of advice posts like this one about getting a job in EA might help. Or you could try to research/interview to find more concrete aspects of what the "criteria + bar to clear on those criteria" is for different funders, if you see a scenario where the answer isn't clearly legible. (If it's a bar at all. For some stuff it's probably a matter of networking and knowing the right person.)

Another general point on collecting advice is that I think it's easy to accidentally conflate "in EA" (or even "in the world") with "in the speaker's particular organization, in that particular year, within that specific cause area" when listening to advice. The same goes for what both you and I have said above. For example, my perspective on early careers is informed by my particular colleagues, while your impression that "funders have more money than they can spend" or that the work is all within "a small movement" is not so applicable for someone who wants to work in global health. Getting into specifics is super important.

Comment by ishaan on EA is a Career Endpoint · 2021-05-20T00:44:04.528Z · EA · GW

Heh, I was wondering if I'd get called out on that. You're totally right, everything that happens in the world constitutes evidence of something! 

What I should have said is that humans are prone to the fundamental attribution error, and it is bad to privilege the hypothesis that rejection is evidence of real skill, experience, resume signalling, degrees, etc., because then you risk working on the wrong things. Rejections are evidence, but they're mostly evidence of a low baseline acceptance rate, and only weakly evidence of other things.

I can imagine someone concluding things like "I'd better get a PhD in the subject so I can signal as qualified and then try again" in a scenario where maybe the thing that would've shifted their chances is rewording a cover letter, spending a single day researching some examples of well-designed CEAs before the work task, or applying on a different year.

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:33:02.837Z · EA · GW

Another factor which may play a role in the seeming arbitrariness of it all, is that orgs are often looking for a very specific thing, or have specific values or ideas that they emphasize, or are sensitive to specific key-words, which aren't always obvious and legible from the outside - leading to communications gaps. To give the most extreme example I've encountered of this, sometimes people don't indicate that they know what EA is about in their initial application, perhaps not realizing that they're being considered alongside non-EA applicants or that it might matter. For specific orgs, communication gaps might get more specific. If you're super interested in joining an org, getting a bit of intel on this can really help (and is a lot easier than trying to get experience somewhere else before re-applying!).

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:11:22.218Z · EA · GW

Also, don't worry about repeated rejections. Even if you are rejected, your application had an expected value: it increased the probability that a strong hire was made and that more impact was achieved. The strength of the applicant pool matters. Rejection of strong applicants is a sign of a thriving and competitive movement. It means that the job you thought was important enough to apply to is more likely to be done well by whoever does it.

Rejection should not be taken as evidence that your talent or current level of experience is insufficient. I think that (for most people reading this forum) it's often less a trust/vetting issue and more a bit of randomness. I've applied to lots of places. In some I did not even make it into the first round, totally rejected. In others I was a top candidate, or accepted. I don't think this variance is because of meaningfully differing fit or competitiveness; I think it's because recruiting, grantmaking, or any process where you have to decide between a bunch of applications is idiosyncratic. I'm sure anyone who has screened applications knows what I'm talking about; it's not an exact science. There are a lot of applicants and little time, so sometimes snap judgements must be made in a few seconds. At the end we pick a hopefully suitable candidate, but we also miss lots of suitable candidates, sometimes overlooking several "best" candidates. And then there are semi-arbitrary differences in which qualities different screeners emphasize (the interview? A work task? EA engagement? Academic degrees?). When there's a strong applicant pool, things are a bit more likely to go well.

(All that said, EA is big enough that all this stuff differs a lot by specific org as well as broader cause area)

Comment by ishaan on EA is a Career Endpoint · 2021-05-18T13:05:32.012Z · EA · GW

Counter-point: If you are interested in an EA job or grant, please do apply to it, even if you haven't finished school. If you're reading the EA forum, you are likely in the demographic of people where (some) EA orgs and grant makers want your application.

I just imagined the world where none of my early-career colleagues had applied to EA things. I think that world is plausibly counterfactually worse: possibly a world with fewer EA-adjacent orgs, smaller EA-adjacent orgs, or fewer high-impact EA jobs. I think the dynamic where we have a thriving community of EAs who apply for EA jobs and grants is a major strength of the movement. EA orgs benefit enormously from having strong applicants relative to the wider hiring market. I hope everyone keeps erring on the side of applying!

But also, yes, definitely do look outside of EA - try your best to actually evaluate impact, and don't get biased by whether or not something is labeled "EA".
 

Comment by ishaan on EA Debate Championship & Lecture Series · 2021-04-05T18:01:22.201Z · EA · GW

Thanks for hosting this event! It was a pleasure to participate. 

Comment by ishaan on The Intellectual and Moral Decline in Academic Research · 2020-09-28T17:23:09.089Z · EA · GW

Without making claims about the conclusions, I think this argument is of very poor quality and shouldn't update anyone in any direction.

"As taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent"

Taking all claims at face value, you should not be persuaded that more money causes retractions just because retractions increased roughly in proportion with the overall growth of the industry. I checked the cited work to see if there were any mitigating factors which justified making this claim (since maybe I didn't understand it, and since sometimes people make bad arguments for good conclusions), and it actually got worse: they ignored the low rate of retraction (it's 0.2%), they compared US-only grants with global retractions, they did not account for increased oversight and standards, and so on.
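To make the proportionality point concrete, here is a quick back-of-the-envelope check. The 700%/900% growth figures and the 0.2% retraction rate come from the quoted claim; the absolute paper counts are hypothetical placeholders chosen only to illustrate the arithmetic:

```python
# If publication volume grew ~700% (i.e., 8x) while retractions grew
# ~900% (i.e., 10x), the per-paper retraction rate barely moved.
base_papers = 100_000       # hypothetical baseline publication count
base_retractions = 200      # implies the cited 0.2% retraction rate

new_papers = base_papers * 8             # +700%
new_retractions = base_retractions * 10  # +900%

old_rate = base_retractions / base_papers
new_rate = new_retractions / new_papers

print(f"old rate: {old_rate:.3%}, new rate: {new_rate:.3%}")
# The rate shifts from 0.200% to 0.250% - a tiny absolute change,
# not evidence that funding growth is corrupting research.
```

Whatever baseline you pick, the rate only changes by the ratio of the two growth factors (10/8 = 1.25x), which is the point: near-proportional growth in a tiny rate is not a causal smoking gun.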

The low quality of the claim, in combination with the fact that the central mission of this think tank is lobbying for reduced government spending on universities and increased political conservatism on campuses in North Carolina, suggests that the logical errors and mishandling of statistics we are seeing here are partisan motivated reasoning in action.

Comment by ishaan on How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna? · 2020-09-22T04:48:42.979Z · EA · GW

This matches my understanding. However, I think this structure is normal for non-profits at the EA ecosystem's current budget size.

Bridgespan identified 144 nonprofits that have gone from founding to at least $50 million in revenue since 1970...[up to 2003]...we identified three important practices common among nonprofits that succeeded in building large-scale funding models: (1) They developed funding in one concentrated source rather than across diverse sources; (2) they found a funding source that was a natural match to their mission and beneficiaries; and (3) they built a professional organization and structure around this funding model.

- How Non-Profits Get Really Big

Some common alternatives are outlined here: Ten Non-Profit Funding Models.

Within this framework, I would describe the EA community currently using a hybrid between "Member Motivator" (cultivating membership of many individual donors who feel personally involved with the community - such as the GWWC model) and "Big Bettor" (such as the relationship between Good Ventures and the ecosystem of EA organizations).

Comment by ishaan on How have you become more (or less) engaged with EA in the last year? · 2020-09-10T18:29:26.090Z · EA · GW

This time last year, I started working at Charity Entrepreneurship after having attended the 2019 incubation program (more about my experience here). I applied to the 2019 incubation program after meeting CE staff at EAG London 2018. Prior to that, my initial introduction to EA was in 2011 via LessWrong, and the biggest factor in retaining my practical interest sufficiently to go to a conference was that I was impressed by the work of GiveWell. The regular production of interesting content by the community also helped remind me about it over the years. 80k's career advice also introduced me to some concepts (for example, replaceability) which may have made a difference.

Going forward, I anticipate more engagement with both EA specifically and the concept of social impact more generally, because working at CE has given me a better practical understanding of how to maximize impact in general than I had before, as well as more insight into how to leverage the EA community specifically towards achieving impact (whereas my prior involvement consisted mostly of reading and occasionally commenting).

Comment by ishaan on Are there any other pro athlete aspiring EAs? · 2020-09-08T19:19:05.103Z · EA · GW

It's a cool idea! Athletes do seem to have a lot of very flexible and general-purpose fundraising potential, and I think it makes a lot of sense to try to direct it effectively. Charity Entrepreneurship (an incubation program for founding effective non-profits) works with Player's Philanthropy Fund (a service which helps athletes and other entities create dedicated funds that can accept tax-deductible contributions in support of any qualified charitable mission) to help our new charities get off the ground before they have completed the fairly complex process of formally registering as a non-profit. You can actually see us on the roster, alongside various athletes. This doesn't mean we are actually working with athletes - we are just using some of the same operations infrastructure - but it might be a useful thing to know. In general, I've noticed that there is quite a bit of infrastructure similar to PPF aimed at helping athletes do charitable fundraising, which I think is a good sign that this idea is promising.

Comment by ishaan on The community's conception of value drifting is sometimes too narrow · 2020-09-04T21:12:27.320Z · EA · GW

I think that what is causing some confusion here is that "value drift" is (probably?) a loanword from AI alignment which (I assume?) originally referred to very fundamental changes in goals that would unintentionally occur within iterative versions of self-improving intelligences, which... isn't really something that humans do. The EA community borrowed this sort of scary alien term and is using it to describe a normal human thing that most people would ordinarily just call "changing priorities".

A common sense way to say this is that you might start out with great intentions, your priorities end up changing, and then your best intentions never come to life. It's not that different from when you meant to go to the gym every morning...but then a phone call came, and then you had to go to work, and now you are tired and sitting on the couch watching television instead.

Logistically, it might make sense to do the phone call now and the gym later. The question is: "Will you actually go to the gym later?" If your plan involves going later, are you actually going to go? And if not, maybe you should reschedule this call and just go to the gym now. I don't see it as a micro-death that you were hoping to go to the gym but did not; it's that over the day other priorities took precedence and then you became too tired. You're still the same person who wanted to go... you just... didn't go. Being the person who goes to the gym requires building a habit and reinforcing the commitment, so if you want to go then you should keep track of which behaviors cause you to actually go and which behaviors break the habit and lead to not going.

Similarly you should track "did you actually help others? And if your plan involves waiting for a decade ...are you actually going to do it then? Or is life going to have other plans?" That's why the research on this does (and ought to) focus on things like "are donations happening", "is direct work getting done" and so on. Because that's what is practically important if your goal is to help others. You might argue for yourself "it's really ok, I really will help others later in life" or you might argue "what if I care about some stuff more than helping others" and so on, but I think someone who is in the position of attempting to effectively help others in part through the work of other people (whether through donations or career or otherwise) over the course of decades should to some degree consider what usually happens to people's priorities in aggregate when modeling courses of action.

Comment by ishaan on Book Review: Deontology by Jeremy Bentham · 2020-08-18T00:11:11.010Z · EA · GW

Cool write up!

Before I did research for this essay, I envisioned Bentham as a time traveller from today to the past: he shared all my present-day moral beliefs, but he just happened to live in a different time period. But that’s not strictly true. Bentham was wrong about a few things, like when he castigated the Declaration of Independence

Heh, I would not be so sure that Bentham was wrong about this! It seems like quite a morally complex issue to me and Bentham makes some good points.

what was their original, their only original grievance? That they were actually taxed more than they could bear? No; but that they were liable to be so taxed...

This line of thought is all quite true. Americans (at least, the free landholders whose interests were being furthered by the declaration) at the time were among the wealthiest people in the world, and paid among the lowest taxes - less taxed than English subjects back home. They weren't oppressed by any means; British rule had done them well.

But rather surprising it must certainly appear, that they should advance maxims so incompatible with their own present conduct. If the right of enjoying life be unalienable, whence came their invasion of his Majesty’s province of Canada? Whence the unprovoked destruction of so many lives of the inhabitants of that province?

This too, remains pertinent to the modern discourse. In response to Pontiac's Rebellion, a revolt of Native Americans led by Pontiac, an Ottawa chief, King George III declared all lands west of the Appalachian Divide off-limits to colonial settlers in the Proclamation of 1763.

Americans did not like that. The Declaration of independence ends with the following words:

“He (King George III) has excited domestic insurrections amongst us, and has endeavored to bring on the inhabitants of our frontiers, the merciless Indian savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes, and conditions.”

The Declaration of Independence voided the Proclamation of 1763, which contributed to the destruction of the Native Americans - a fact which is not hindsight but was understood at the time. Notice how indigenous communities still thrive in Canada, where the proclamation was not voided. There is also an argument that slavery was prolonged as a result, and that this too is not hindsight but was understood at the time.

Of course, I doubt the British were truly motivated by humanitarian concern, and it's not clear to me from this piece that even Bentham is particularly motivated to worry about the indigenous peoples (vs. just using their suffering as a rhetorical tool to point out the hypocrisy of the out-group where it fits his politics) - you can tell he focuses more on the first economic point than the second humanitarian one. But his critiques would all be relevant had this event occurred today.

Really I think with the hindsight of history, that entire situation is less a moral issue and more a shift in the balance of power between two equally amoral forces - both of whom employed moral arguments in their own favor, but only one of which won and was subsequently held up as morally correct.

I think the lesson to be learned here might be less that Bentham was ahead of his time, and more that we are not as "ahead" in our time as we might imagine - e.g. we continue to teach everyone that stuff which was bad is good, and we continue to justify our violence in similar terms. One thing I've noticed in reading old writings is that many people knew that what was going on was bad and that history would frown upon it, but they continued to do it (e.g. Jefferson's and many others' writings on slavery largely condemn it, but they kept doing it more or less because that was the way things were done - which is also not unlike today).

Comment by ishaan on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-07T00:29:28.890Z · EA · GW

Idk, but in theory they shouldn't, as pitch is sensed by the hairs on the section of the cochlea that resonates at the relevant frequency.

Comment by ishaan on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T19:34:23.851Z · EA · GW

A forum resource on ToC in research which I found insightful: Are you working on a research agenda? A guide to increasing the impact of your research by involving decision-makers

Should they

Yes, but ToCs don't improve impact in isolation (you can imagine a perfectly good ToC for an intervention which doesn't do much). Also, if you draw a nice diagram, but it doesn't actually inform any of your decisions or change your behavior in any way, then it hasn't really done anything. A ToC is ideally combined with cost-benefit analyses, comparison of multiple avenues of action, etc., and it should pay you back in the form of generating concrete, informative actions - e.g. consulting stakeholders to check your research questions, or generally creating checkpoints at which you try to get measurements, indicators, and opinions from relevant people.

For more foundational and theoretical questions where the direct impact isn't obvious, there may be a higher risk of drawing a diagram which doesn't do anything. I think there are ways to avoid this: understand the relevance of your research to other (ideally more practical) researchers you've spoken to about it, such as through a peer review process; make a conceptual map of where your work fits into other ideas which then lead to impact; and try to get as close to the practical level as you realistically can. If it's really hard to tie your work to the practical level, that is sometimes a sign that you might need to re-evaluate the activity.

Do they

Back in academia, I didn't even know what a "theory of change" was, so I think not. But one is frequently asked to state the practical and theoretical value of one's research, and the peer review and grant writing processes implicitly incorporate elements of stakeholder relevance. However, as an academic, if you fail to make your own analyses, separately from this larger infrastructure, you may end up following institutional priorities (of grant makers, of academic journals, etc.) which differ from "doing the most good" as you conceptualize it.

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-16T02:24:33.503Z · EA · GW

The tricky part of social enterprise, from my perspective, is that high-impact activities are hard to find, and I figure they would be even harder to find under the additional constraint that they must be self-sustaining. Which is not to say that you might not find one (see here and here), just that finding an idea that works is arguably the trickiest part.

for-profit social enterprises may be more sustainable because of a lack of reliance on grants that may not materialise;

This is true, but keep in mind, impact via social enterprise may be "free" in terms of funding (so very cost-effective), but, it comes with opportunity costs in terms of your time. When you generate impact via social enterprise, you are essentially your own funder. Therefore, for a social enterprise to beat your earning-to-give baseline, its net impact must exceed the good you would have done via whatever you might have otherwise donated to a GiveWell top charity if you instead were donating as much money as you would in a high earning path. (This is of course also true for non-profit/other direct work paths). Basically, social enterprises aren't "free" (since your time isn't free) so it's a question of finding the right idea and then also deciding if the restrictions inherent in trying to be self-sustaining are easier than the restrictions (and funding counterfactuals) inherent in getting external funding.
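The comparison above can be sketched as a toy calculation. All numbers here are hypothetical placeholders for illustration, not estimates of any real charity or salary:

```python
# Hypothetical comparison: does founding a social enterprise beat an
# earning-to-give (EtG) baseline? All figures are illustrative placeholders.

def counterfactual_donation_impact(annual_donation, cost_per_unit_impact):
    """Units of impact bought per year by donating `annual_donation`
    to a top charity at `cost_per_unit_impact` dollars per unit."""
    return annual_donation / cost_per_unit_impact

# EtG baseline: donate $50k/yr at a placeholder $5k per unit of impact.
etg_impact = counterfactual_donation_impact(50_000, 5_000)

# Social enterprise: suppose it generates 8 units of impact per year of
# your time, with no external funding needed (placeholder figure).
enterprise_impact = 8

print("EtG baseline:", etg_impact)  # 10.0 units/yr
print("Enterprise beats baseline?", enterprise_impact > etg_impact)
```

The point is just that "free" funding doesn't settle the question: the enterprise's impact per year of your time has to clear the bar set by what your time could have funded elsewhere.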

Comment by ishaan on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T03:24:12.062Z · EA · GW

However, I'm sceptical of charity entrepreneurship's ability to achieve systemic change - I'd probably (correct me if I'm wrong) need a graduate degree in economics to tackle the global economic system.

It might plausibly be helpful to hire staff who have graduate degrees in economics, but I think you would not necessarily need a graduate degree in economics yourself in order to start an organization focused on improving economic policy. Of course it's hard to say for sure until it's tried - but there's a lot that goes into running an organization, and it takes many different skills and types of people to make it come together. Domain expertise is only one part of it. A lot of great charities (e.g. GiveWell, AMF) were started by people who didn't enter with domain expertise or related degrees. (None of which is to say that economics isn't a strong option for a variety of paths, only that you shouldn't put the path of starting an organization in the "I need a degree first" box.)

(As for my opinion more generally, I do think that social entrepreneurship would under-perform relative to purely EtG (if you give to the right place), and also under-perform relative to focused non-profit or policy work (if you work on the right thing), because it has to simultaneously turn profit and achieve impact, which really limits the flexibility to work on the higher impact things. But it primarily depends on what specifically you're working on, in every case.)

Comment by ishaan on Where is it most effective to found a charity? · 2020-07-06T16:49:45.036Z · EA · GW

I've never done this myself, but here are some bits of info I've absorbed through osmosis by working with people who have.
-Budget about 50-100 hours of work for registration. Not sure which countries require more work in this regard.
-If you're working with a lot of international partners, some countries have processes that are more recognized than others. The most internationally well-known registration type is America's 501(c)(3) - which means that even if you were to work somewhere like India, for example, people are accustomed to working with 501(c)(3)s and know the system. This is less important if you aren't working with partners.
-If you are planning to get donations mostly from individuals, consider where those individuals are likely to live and what the laws regarding tax deductibility are. Large grantmakers are more likely to be location-agnostic.
-You don't need to live where you register, but if you want to grant a work visa to fly in an employee to a location, generally you will need to be registered in that location.

If you're interested in starting a charity, you should consider auditing Charity Entrepreneurship's incubation program, and applying for the full course next year. The audit course will have information about how to pick locations for the actual intervention (which usually matters more for your impact than where you register). The full course for admitted students additionally provides guidance and support for operations/registration-type stuff.

Comment by ishaan on EA Forum feature suggestion thread · 2020-06-28T13:02:17.988Z · EA · GW

I posted some things in this comment, and then realized the feature I wanted already existed and I just hadn't noticed it - which brings to mind another issue: how come one can retract, overwrite, but not delete a comment?

Comment by ishaan on Dignity as alternative EA priority - request for feedback · 2020-06-26T14:00:48.236Z · EA · GW
What evidence would you value to help resolve what weight an EA should place on dignity?

Many EAs tend to think that most interventions fail, so if you can't measure how well something works, chances are high that it doesn't work at all. To convince people who think that way, it helps to have a strong justification for incorporating a metric which is harder to measure over well-established and easier-to-measure metrics such as mortality and morbidity.

In the post on happiness you linked by Michael, you'll notice that he has a section comparing subjective well-being to traditional health metrics. A case is made that improving health does not necessarily improve happiness. This is important, because death and disability are easier to measure than things like happiness and dignity, so if either is a good proxy it should be used. If it turned out that the best way to improve dignity is, e.g., to prevent disability, then in light of how much easier disability prevention is to measure, it would not be productive to switch focus. (Well, maybe. You might also take a close association between metrics as a positive sign that you're measuring something real.)

To get the EA community excited about a new metric, if it seems realistically possible then I'd recommend following Michael's example in this respect. After establishing a metric for dignity, try to determine how well existing top GiveWell interventions do on it, see what the relationship is with other metrics, and then see if there are any interventions that plausibly do better.

I think this could plausibly be done. I think there's a lot of people who favor donations to GiveDirectly because of the dignity/autonomy angle (cash performs well on quite a few metrics and perspectives, of course) - I wouldn't be surprised if there are donors who would be interested in whether you can do better than cash from that perspective.

Comment by ishaan on EA considerations regarding increasing political polarization · 2020-06-25T14:42:10.619Z · EA · GW
Why effective altruists should care

Opposing view: I don't think these are real concerns. The Future of Animal Consciousness Research citation boils down to "what if research in animal cognition is one day suppressed due to being labeled speciesist" - that's not a realistic worry. The Vox thinkpiece emphasizes that we are in fact efficiently saving lives - I see no critiques there that we haven't also internally voiced to ourselves as a community. I don't think it's realistic to expect coverage of us not to include these critiques, regardless of political climate. According to Google search, the only folks even discussing that paper are long-termist EAs. I don't think AI alignment is any more politically polarized, except as a special case of "vague resentment towards Silicon Valley elites" in general.

Sensible people on every part of the political spectrum will agree that animal and human EA interventions are good, or at least neutral. The most controversial it gets is that people will disagree with the implication that they are the best ways to do good... and why not? We internally often disagree on that too. Most people won't understand AI alignment well enough to have an opinion beyond vague ideas about tech and tech people. Polarization is occurring, but none of this constitutes evidence regarding political polarization's potential effect on EA.

Comment by ishaan on EA and tackling racism · 2020-06-16T20:09:14.154Z · EA · GW

a) Well, I think the "most work is low-quality" aspect is true, but it's also fully general to almost everything (even EA). Engagement requires doing that filtering process.

b) I think seeking not to be "divisive" here isn't possible - issues of inequality on global scales and ethnic tension on local scales are in part caused by some groups of humans using violence to lock another group of humans out of access to resources. Even for me to point that out is inherently divisive. Those who feel aligned with the higher-power group will tend to feel defensive and will wish not to discuss the topic, while those who feel aligned with lower-power groups as well as those who have fully internalized that all people matter equally will tend to feel resentful about the state of affairs and will keep bringing up the topic. The process of mind changing is slow, but I think if one tries to let go of in-group biases (especially, recognizing that the biases exist) and internalizes that everyone matters equally, one will tend to shift in attitude.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:59:58.533Z · EA · GW
I've seen a lot of discussion of criminal justice reform

Well, I do think discussion of it is good, but if you're referring to resources directed to the cause area... it's not that I want EAs to redirect resources away from low-income countries to instead solve disparities in high-income countries, and I don't necessarily consider this related to the self-criticism-as-a-community issue. I haven't really looked into this issue, but on prior intuition I'd be surprised if American criminal justice reform compares very favorably in terms of cost-effectiveness to e.g. GiveWell top charities, reforms in low-income countries, or reforms regarding other issues. (Of course, prior intuitions aren't a good way to make these judgements, so right now that's just a "strong opinion, weakly held".)

My stance is basically no on redirecting resources away from basic interventions in low income countries and towards other stuff, but yes on advocating that each individual tries to become more self-reflective and knowledgeable about these issues.

I suppose the average EA might be more supportive of capitalism than the average graduate of a prestigious university, but I struggle to see that as an example of bias

I agree, that's not an example of bias. This is one of those situations where a word gets too big to be useful - "supportive of capitalism" has come to stand for a uselessly large range of concepts. The same person might be critical about private property, or think it has sinister/exploitative roots, and also support sensible growth focused economic policies which improve outcomes via market forces.

I think the fact that EA has common sense appeal to a wide variety of people with various ideas is a great feature. If you are actually focused on doing the most good you will start becoming less abstractly ideological and more practical and I think that is the right way to be. (Although I think a lot of EAs unfortunately stay abstract and end up supporting anything that's labeled "EA", which is also wrong).

My main point is that if someone is serious about doing the most good, and is working on a topic that requires a broad knowledge base, then a reasonable understanding of the structural roots of inequality (including how gender and race and class and geopolitics play into it) should be one part of their practical toolkit. In my personal opinion, while a good understanding of this sort of thing generally does lead to a certain political outlook, it's really more about adding things to your conceptual toolbox than it is about which -ism you rally around.

Comment by ishaan on EA and tackling racism · 2020-06-14T19:51:34.269Z · EA · GW
What are some of the biases you're thinking of here? And are there any groups of people that you think are especially good at correcting for these biases?

The longer answer to this question: I am not sure how to give a productive answer to this question. In the classic "cognitive bias" literature, people tend to immediately accept that the biases exist once they learn about them (…as long as you don't point them out right at the moment they are engaged in them). That is not the case for these issues.

I had to think carefully about how to answer because (when speaking to the aforementioned "randomly selected people who went to prestigious universities", as well as when speaking to EAs) such issues can be controversial and trigger defensiveness. These topics are political and cannot be de-politicized; I don't think there is any bias I can simply state that won't be upvoted by those who already agree and dismissed as a controversial political opinion by those who don't, which isn't helpful.

It's analogous to if you walked into a random town hall and proclaimed "There's a lot of anthropomorphic bias going on in this community, for example look at all the religiosity" or "There's a lot of species-ism going on in this community, look at all the meat eating". You would not necessarily make any progress on getting people to understand. The only people who would understand are those who know exactly what you mean and already agree with you. In some circles, the level of understanding would be such that people would get it. In others, such statements would produce minor defensiveness and hostility. The level of "understanding" vs "defensiveness and hostility" in the EA community regarding these issues is similar to that of randomly selected prestigious university students (that is, much more understanding than the population average, but less than ideal). As with "anthropomorphic bias" and as with "speciesism", there are some communities where certain concepts are implicitly understood by most people and need no explanation, and some communities where they aren't. It comes down to what someone's point of view is.

Acquiring an accurate point of view, and moving a community towards an accurate point of view, is a long process of truth seeking. It is a process of un-learning a lot of things that you very implicitly hold true. It wouldn't work to just list biases. If I start listing out things like (unfortunately poorly named) "privilege-blindness" and (unfortunately poorly named) "white-fragility", I doubt it's going to have any positive effect other than to make people who already agree nod to themselves, while other people roll their eyes, and still others google the terms and then roll their eyes. Criticizing things such that something actually goes through is pretty hard.

The productive process involves talking to individual people, hearing their stories, having first-hand exposure to things, reading a variety of writings on the topic and evaluating them. I think a lot of people think of these issues as "identity political topics" or "topics that affect those less fortunate" or "poorly formed arguments to be dismissed". I think progress occurs when we frame-shift towards thinking of them as "practical every day issues that affect our lives", and "how can I better articulate these real issues to myself and others" and "these issues are important factors in generating global inequality and suffering, an issue which affects us all".

Comment by ishaan on EA and tackling racism · 2020-06-14T19:49:49.161Z · EA · GW
What are some of the biases you're thinking of here?

This is a tough question to answer properly, both because it is complicated and because I think not everyone will like the answer. There is a short answer and a long answer.

Here is the short answer. I'll put the long answer in a different comment.

Refer to Sanjay's statement above

There are some who would argue that you can't tackle such a structural issue without looking at yourselves too, and understanding your own perspectives, biases and privileges...But I worried that tackling the topic of racism without even mentioning the risk that this might be a problem risked seeming over-confident.

At time of writing, this is sitting at negative-5 karma. Maybe it won't stay there, but this innocuous comment was sufficiently controversial that it's there now. Why is that? Is anything written there wrong? I think it's a very mild comment pointing out an obviously true fact - that a community should also be self-reflective and self-critical when discussing structural racism. Normally EAs love self-critical, skeptical behavior. What is different here? Even people who believe that "all people matter equally" and "racism is bad" are still very resistant to having self-critical discussions about it.

I think that understanding the psychology of defensiveness surrounding the response to comments such as this one is the key to understanding the sorts of biases I'm talking about here. (And to be clear - I don't think this push back against this line of criticism is specific to the EA community, I think the EA community is responding as any demographically similar group would...meaning, this is general civilizational inadequacy at work, not something about EA in particular)

Comment by ishaan on EA and tackling racism · 2020-06-10T20:27:07.521Z · EA · GW

I broadly agree, but in my view the important part to emphasize is what you said on the final thoughts (about seeking to ask more questions about this to ourselves and as a community) and less on intervention recommendations.

Is EA really all about taking every question and twisting it back to malaria nets ...?... we want is to tackle systemic racism at a national level (e.g. in the US, or the UK).

I bite this bullet. I think you do ultimately need to circle back to the malaria nets (especially if you are talking more about directing money than about directing labor). I say this as someone who considers myself as much a part of the social justice movement as of the EA movement. Realistically, I don't think it's plausible that tackling stuff in high income countries is going to be more morally important than malaria-net-type activities, at least when it comes to fungible resources such as donations (the picture gets more complex with respect to direct work, of course). It's good to think about what the cost-effective ways to improve matters in high income countries might be, but realistically I bet that once you start crunching numbers you will find that malaria-net-type activities should still be the top priority by a wide margin if you are dealing with fungible resources. I think the logical conclusions of anti-racist/anti-colonialist thought converge upon this as well. In my view, the things that social justice activists are fighting for ultimately do come down to the basics of food, shelter, and medical care, and the scale of that fight has always been global, even if the more visible portion generally plays out in one's more local circles.

However, I still think putting thought into how one would design such interventions should be encouraged, because:

our doubts about the malign influence of institutional prejudice...should reach ourselves as well.

I agree with this, and would encourage more emphasis on it. The EA community (especially the rationality/LessWrong part of the community) puts a lot of effort into getting rid of cognitive biases. But when it comes to acknowledging and internally correcting for the types of biases which result from growing up in a society built upon exploitation, I don't think the EA community does better than any other randomly selected group of people from a similar demographic (say, randomly selected people who went to prestigious universities). And that's kind of weird. We're a group of people who are trying to achieve social impact. We're often people who wield considerable resources and have to work with power structures all the time. It's a bit concerning that the community's level of knowledge of the bodies of work that deal with these issues is just average. I don't really mean this as a call to action (realistically, given the low current state of awareness, attempting action would probably result in misguided or heavy-handed solutions). What I do suggest is this: many of you spend some of your spare time reading and thinking about cognitive biases, trying to better understand yourselves and the world, and consider this a worthwhile activity. It would be worth applying a similar spirit to spending time really understanding these issues as well.

Comment by ishaan on Effective Animal Advocacy Resources · 2020-05-25T04:33:25.479Z · EA · GW

Super helpful, I'm about to cite this in the CE curriculum :)

Comment by ishaan on Why I'm Not Vegan · 2020-04-10T17:40:04.006Z · EA · GW
I get much more than $0.43 of enjoyment out of a year's worth of eating animal products

I think we would likely not justify a moral offset for harming humans at (by the numbers you posted) $100/year, or eating children at $20/pound ($100/year × 15 years / 75 pounds). This isn't due to sentimentality, deontology, taboo, or biting the bullet - I think a committed consequentialist, one grounded in practicality, would agree that no good consequences would likely come from allowing that sort of thing, and I think this probably applies to meat as well.
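The implied per-pound figure can be checked with quick arithmetic. This is just a sketch using the numbers quoted above ($100/year offset, a hypothetical 15 years and 75 pounds as conversion factors), not an endorsement of the model:

```python
# Back out the implied $/pound figure from the comment's numbers.
offset_per_year = 100   # $/year moral offset for harming a human, per the comment
years = 15              # years assumed in the conversion
pounds = 75             # pounds assumed in the conversion

price_per_pound = offset_per_year * years / pounds
print(price_per_pound)  # 20.0
```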

I think overall it's better to look first at the direct harm vs direct benefit, and how much you weigh the changes to your own experience against the suffering caused. The offset aspect is not unimportant, but I think it's a bit misleading when not applied evenly in the other direction.

I am sympathetic to morally weighing different animals orders of magnitude differently. We have to do that in order to decide how to prioritize between different interventions.

That said, I don't think human moral instincts for these sorts of cross-species trolley problems are well equipped for numbers bigger than 3-5. Your moral instincts can (I would say, accurately) inform you that you would rather avert harm to a person than to 5 chickens, but when you get into the 1000s you're pretty firmly in torture vs dust specks territory and should not necessarily just trust your instincts. That doesn't mean orders of magnitude differences are wrong, but it does mean they're potentially subject to a lot of bias and inconsistency if not accompanied by some methodology.

Comment by ishaan on Help in choosing good charities in specific domains · 2020-02-20T19:07:53.955Z · EA · GW

Charity Entrepreneurship is incubating new family planning and animal welfare organizations, which will aim to operate via principles of effective altruism - potentially relevant to your interests.

Comment by ishaan on Who should give sperm/eggs? · 2020-02-12T23:37:53.893Z · EA · GW

Since you are asking "who" should do it (rather than whether more or fewer people in general should do it, which seems the more relevant question since it would carry the bulk of the effect): I would wish to replace any anonymous donors with people who are willing to take a degree of responsibility for, and engagement with, the resulting child and their feelings about it. Opinion polls of donor-conceived people have made me think there's a reasonable chance they experience negative emotions about the whole thing at non-negligible rates, and it is possible that this might be mitigated by having a social relationship with the donor.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2020-01-17T06:44:51.687Z · EA · GW

Spend some time brainstorming, and compare multiple alternative courses of action and potential hurdles to those actions before embarking on one. Consider using a spreadsheet to augment your working memory when you evaluate actions by various criteria. Get a sense of expected value per unit of time on a given task so you can decide how long it's worth spending on it; enforce this via time capping / time boxing, and if you are working much longer on a given task than you estimated, re-evaluate what you are doing. Time track which task you spend your working hours on to become more aware of time in general. Personally, I don't think I fully appreciated how valuable time was and how much I was sometimes wasting unintentionally before tracking it (although I could see some people finding this stressful).

Of course this is all sort of easier said than done haha. I think to some degree watching other people actually doing things which one is supposed to do helps enforce the habit.


Comment by ishaan on Growth and the case against randomista development · 2020-01-17T06:28:24.021Z · EA · GW

Any discussion of how much it might cost to change a given economic policy / the limiting factor that has kept it from changing thus far?

(I think this is also the big question with health policy)

Comment by ishaan on Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? · 2020-01-13T00:21:50.493Z · EA · GW

"Rejecting" would be a bit unusual, but of course you should honestly advise a well qualified candidate if you think their other career option is higher impact. I think it would be ideal if everyone gives others their honest advice about how to do the most good, roughly regardless of circumstance.

I've only seen a small slice of things, but my general sense is that people in the EA community do in fact live up to this ideal, regularly turning down and redirecting talent as well as funding and other resources towards the thing that they believe does the most good.

Also, although it might ultimately add up to the same thing, I think it brings more clarity to think along the lines of "counterfactual impact" (estimating how much unilateral impact an individual's alternative career choices have) rather than "comparative advantage" which is difficult to assess without detailed awareness of the multiple other actors you are comparing to.

Comment by ishaan on Announcement: early applications for Charity Entrepreneurship’s 2020 Incubation Program are now open! · 2019-12-16T17:14:34.956Z · EA · GW

I went to the program, was quite impressed with what I saw there, and decided to work at Charity Entrepreneurship.

Before attending the program, as career paths, I was considering academia, earning to give, direct work in the global poverty space, and a few other more offbeat options. After the program, I'd estimate that I've significantly increased the expected value of my own career (perhaps by 3x-12x or more) in terms of impact by attending the program, thanks to

1) the direct impact of CE itself and associated organizations. I can say that in terms of what I've directly witnessed, there's a formidable level of productive work occurring at this organization. My own level of raw productivity has risen quite a bit by being in proximity and picking up good habits. I'm pretty convinced that this productivity translates into impact, (although on that count, you can evaluate the key assumptions and claims yourself by looking at the cost effectiveness models and historical track record).

2) practical meta-skills I've picked up regarding how to think about personal impact. Not only did I change my mind and update on quite a few important considerations, but there were also quite a few things that I didn't even realize were considerations before attending the program. I think my decision making going forward will be better now.

3) connections and network to other effective altruists, and general knowledge about the effective altruism movement. Prior to attending the program my engagement with the community was on a rather abstract level. Now, if I wanted to harness the EA community to accomplish a concrete action in the global poverty or animal space, I'd know roughly what to do and who to talk to and how to get started.

4) the career capital from program related activities.

Also, I had a good time. If you enjoy skill building and like interacting with other effective altruists, the program is quite fun.

Happy to answer any questions.

Comment by ishaan on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-11-20T13:11:34.932Z · EA · GW

I'm sure there's a better document somewhere addressing these, but I'll just quickly say that people tend to regret starting smoking tobacco and often want to stop, tobacco smoking reduces quality of life, and that smokers often support raising tobacco taxes if the money goes to addressing the (very expensive!) health problems caused by smoking (e.g. this sample, and I don't think this pattern is unique). So I think bringing tobacco taxes in line with recommendations is good under most moral systems, even those which strongly prioritize autonomy - this is a situation where smokers seem to be straightforwardly stating that they'd rather not behave this way.

Eric Garner died because the police approached him on suspicion of selling illegal cigarettes and then killed him - I don't think that's realistically attributable to tobacco taxation.

Comment by ishaan on List of EA-related email newsletters · 2019-10-10T08:42:43.054Z · EA · GW

For global health, don't forget Givewell's newsletter!

For meta, CharityEntrepreneurship has one as well (scroll to the middle of the page for the newsletter)

Comment by ishaan on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-18T19:00:29.858Z · EA · GW
Do you have any opinions that you would be reluctant to express in front of a group of your peers? If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Not necessarily! You might just be less averse to disagreement. Or perhaps you (rightly or wrongly) feel less personally vulnerable to the potential consequences of stating unpopular opinions and criticism.

Or, maybe you did quite a lot of independent thinking that differed dramatically from what you were "told", and then gravitated towards one or more social circles that happen to have greater tolerance for the things you believe, which perhaps one or more of your communities of origin did not.

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-18T17:32:43.036Z · EA · GW

I agree that more people trying to do cost effectiveness analyses is good! I regret that the tone seemed otherwise and will consider it more in the future. I engaged with it primarily because I too often wonder about how one might improve impact outside of impact-focused environments, and I generally find it an interesting direction to explore. I also applaud that you made the core claim clearly and boldly and I would like to see more of that as well - all models suffer these flaws to some degree and it's a great virtue to make clear claims that are designed such that any mistakes will be caught (as described here). Thanks for doing the piece and I hope you can use these comments to continue to create models of this and other courses of action :)

Comment by ishaan on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-17T20:23:03.360Z · EA · GW

I think the biggest improvement would be correcting the fact that this model (accidentally, I think) assumes that improving any arbitrary high-budget charity by 5% is equally as impactful as improving a Givewell-equivalent charity by 5%. Most charities' impact is orders of magnitude smaller.

You could solve this with a multiplier for the charity's impact at baseline.

If I understand correctly, you figure that if you become a trustee of a charity with a £419668/year budget and manage to improve its cost-effectiveness by 5%, you can divide that gain by 42 hours a year to get £419668*5%/42 hours = £500/hour as the value of your donated time. (A style tip: it would be helpful to put the key equation describing roughly what you've done in the description, to make it all legible without having to go into the spreadsheet.)

I think it is fair to say that, were you to successfully perform this feat, you have indeed done something roughly as impactful as providing a £500/hour value to the charity you are trustee-ing for. So, if you improved a Givewell-top-charity-equivalent's cost effectiveness by 5% for a year, then maybe you could fairly take 5% of that charity's yearly budget and divide it by your hours for that year, as you've done, to calculate your Givewell-top-charity-equivalent impact in terms of how it would compare to donated dollars.

But if you improve a £419668/yr budget charity which is only 1% as cost-effective as a Givewell-top-charity-equivalent by 5%, then your hourly impact is 1%*£419668*5%/42 hours = £5/hour of Givewell-top-charity-equivalent impact - you'd be better served working a bit extra and donating £5 to Givewell.
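The adjustment can be made concrete in a few lines. This is a sketch of the calculation above; the £419,668 budget, 5% improvement, and 42 hours come from the post, while the 1% relative-effectiveness figure is the illustrative assumption I introduced:

```python
# Value per hour of trusteeship under the post's model, then adjusted
# for how cost-effective the charity is relative to a top charity.
budget = 419_668          # charity's annual budget, GBP (from the post)
improvement = 0.05        # assumed 5% cost-effectiveness improvement
hours = 42                # trustee hours per year (from the post)

naive_value = budget * improvement / hours            # the post's ~GBP 500/hour
relative_effectiveness = 0.01                         # assumed: 1% as effective as a top charity
adjusted_value = naive_value * relative_effectiveness # ~GBP 5/hour in top-charity terms

print(round(naive_value), round(adjusted_value))  # 500 5
```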

I don't find this model credible even after these adjustments, as I'm skeptical of the structure, but you did make those assumptions explicit, which is good. If you think the effect takes ~42 hours/year, then this hypothesis is potentially cheap to just test in practice, after which you could revise your model with more information. Have you joined any boards and tried this in practice, and if so, how did it go?

edit - ah, you're using the term "5% increase" very differently.

Instead it assumes a 5% increase, perhaps from £0 of impact to 5% of the annual income or perhaps from 100% of annual income to 105%

So just to be clear, this implies that making 100% of your annual income in impact would mean you are the most cost-effective charity in the world (or whatever other benchmark you want to set at "100%"). Used in this sense, "5% increase" doesn't mean "the shelter saves 5% more kittens" but that the charity as a whole has gone from the long tail of negligible impact to being 1/20th as cost-effective as the most cost-effective charity in the world. This isn't the way percentages are usually expressed, and it seems like a confused way to express the concept, since the 100% benchmark is arbitrary/unknown - it would be better to express it on an absolute scale rather than as a percentage.

Comment by ishaan on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-21T23:32:44.550Z · EA · GW

brainstorming / regurgitating some random additional ideas -

Goodhart's law - a charity may from the outset design itself or self-modify itself around Effective Altruist metrics, thereby pandering to the biases of the metrics and succeeding in them despite being less Good than a charity which scored well on the same metrics despite no prior knowledge of them. (Think of the difference between someone who has aced a standardized test due to intentional practice and "teaching to the test" vs. someone who aced it with no prior exposure to standardized tests - the latter person may possess more of the quality that the test is designed to measure). This is related to "influencing charities" issue, but focusing on the potential for defeating of the metric itself, rather than direct effects of the influence.

Counterfactuals of donations (other than the matching thing)- a highly cost effective charity which can only pull from an effective altruist donor pool might have less impact than a slightly less cost effective charity which successfully redirects donations from people who wouldn't have donated to a cost effective charity (this is more of an issue for the person who controls talent, direction, and other factors, not the person who controls money).

Model inconsistency - Two very different interventions will naturally be evaluated by two very different models, and some models may inherently be harsher or more lenient on the intervention than others. This will be true even if all the models involved are as good and certain as they can realistically be.

Regression to the mean - The expected value of standout candidates will generally regress to the mean of the pool from which they are drawn, since at least some of the factors which caused them to rise to the top will be temporary (including legitimate factors that have nothing to do with mistaken evaluations)
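A minimal simulation illustrates the regression-to-the-mean point. This is a sketch with made-up Gaussian quality and noise (none of the numbers come from the post): the candidates with the highest observed scores have underlying quality well below those scores, so their later performance regresses toward the pool mean.

```python
import random

random.seed(0)

# Each candidate's observed score = stable quality + transient noise.
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]
scored = sorted(
    ((quality + noise, quality) for quality, noise in candidates),
    reverse=True,
)

# Look at the top 100 "standout" candidates by observed score.
top = scored[:100]
mean_score = sum(score for score, _ in top) / len(top)
mean_quality = sum(quality for _, quality in top) / len(top)

# Their underlying quality is systematically lower than their observed score,
# because part of what got them to the top was transient noise.
print(mean_score > mean_quality)  # True
```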


Comment by ishaan on How Life Sciences Actually Work: Findings of a Year-Long Investigation · 2019-08-19T05:08:22.452Z · EA · GW

I think this description generally falls in line with what I've experienced and heard secondhand and is broadly true. However, there are some differences between my impression of it and yours. (But it sounds like you've collected more accounts, more systematically, and I've actually only gone up to the M.A. level in grad school, so I'm leaning towards trusting your aggregate)

Peer review is a disaster

I think we can get at better ways than peer review, but also, don't forget that people will sort of inevitably have Feelings about getting peer reviewed, especially if the review is unfavorable, and this might bias them to say that it's unfair or broken. I wouldn't expect peer review is particularly better or worse than what you'd expect from what is basically a group of people with some knowledge of a topic and some personal investment in the matter having a discussion - it can certainly be a space for pettiness, both by the reviewer and from the reviewed, as well as a space for legitimate discussion.

PIs mostly manage people -- all the real work is done by grad students and postdocs

I think this is sometimes true, but I would not consider it the default state of affairs. I think some, but not all, grad students and postdocs can conceive of and execute a good project from start to finish (more, in top universities). However, I think most successful PIs are constantly running projects of their own as well. Moreover, a lot of grad students and postdocs are running projects that either the PI came up with, or independently created projects that are ultimately a small permutation within a larger framework the PI came up with. I do think it sometimes happens that some people believe they are doing all the work, forgetting the degree of training they received and underestimating how much the PI is doing behind the scenes.

management and fundraising (and endless administrative responsibilities bestowed on any tenure-track professor) and can 100% focus on doing science and publishing papers, while getting mentoring from your senior PI and while being helped by all the infrastructure of established labs

My impression was actually that grant writing, management, and setting up infrastructure are the bulk of Doing Science, properly understood. (Whereas this write-up sort of frames them as a side show to the Real Work of Doing Science.) With "fundraising", the writer of the grant is the one who has to engage in the big-picture thinking, make the pitch, and plan the details to a level of rigor sufficient to satisfy an external body. With "infrastructure", one must set up the lab protocols so that they're actually measuring what they are meant to. It's easy to do this wrong - and worse, to do it wrong without realizing it and have those mistakes make it all the way into a nonsensical and wrong publication. I think there is a fairly deep level of expertise involved in setting up protocols. And "management" in this context also involves a lot of teaching people skills and concepts, including sometimes a fair bit of hand-holding during the process of publishing papers (students' first drafts aren't always great, even if the student is very good).

People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor's experiments and is only possible with a perfect undergrad GPA, not realizing that neither of these is true if you're truly capable

Very true in one sense - I agree that academia is very forgiving about credentials and gpa relative to other forms of post-graduate education, and people are definitely excited and responsive to being cold contacted by motivated students who will do their own projects. However, keep in mind that if you're planning to work on whatever you want, rather than your adviser's experiments, you will have more trouble fully utilizing the adviser's management/infrastructure/expertise and to a lesser extent grants.

For a unique and individual project, you might have to build some of your infrastructure on your own. This means things may take much longer and are more likely not to work the first few times - all of which is a wonderful learning experience, but it does not always align with the incentive of publishing papers and graduating quickly. I think some fields (especially the ones closer to math) have the sort of "pure researcher" track you have in mind, but it's rare in the social and biological sciences, in part because the most needed people are in fact those with scientific expertise who can train and manage a team and build infrastructure/protocols, as well as fundraise and set an agenda - I think it would be tough to realistically delegate this to anyone who doesn't know the science.

(But - again, this is only my impression from doing a masters and from conversations I've had with other people. Getting a sense of a whole field isn't really easy and I imagine different regions and so on are very different.)

Comment by ishaan on 'Longtermism' · 2019-08-19T03:34:22.142Z · EA · GW

I think it's worth pointing out that "longtermism" as minimally defined here is not pointing to the same concept that "people interested in x-risk reduction" was probably pointing at. I think the word which most accurately captures what it was pointing at is generally called "futurism" (examples [1],[2]).

This could be a feature or a bug, depending on use case.

  • It could be a feature if you want a word to capture a moral underpinning common to many futurists' intuitions while, as you said, remaining "compatible with any empirical view about the best way of improving the long-run future", or to form a coalition among people with diverse views about the best ways to improve the long-run future.
  • It could be a bug if people started informally using "longtermism" interchangeably with "far futurism", especially if it created a motte-and-bailey style of argument in which the easily defensible minimal claim that "future people matter equally" was used to respond to skepticism about claims that any specific category of efforts aiming to influence the far future is necessarily more impactful.

If you want to retain the feature of being "compatible with any empirical view about the best way of improving the long-run future", you might prefer the no-definition approach, because criterion ii is not philosophical, but an empirical view about what society currently wrongly privileges.

From the perspective of addressing the "bug" aspect, however, I think criteria ii and iii are good calls. They make some progress in narrowing who is a "longtermist", and they specify that it is ultimately a call to a specific action (so, e.g., someone who thinks influencing the future would be awesome in theory but is intractable in practice can fairly be said not to meet criterion iii). In general, I think that in practice people are going to use "longtermist" and "far futurist" interchangeably regardless of what definition is laid out at this point. I therefore favor the second approach, with a minimal definition, as it gives a nod to the fact that it's not just a moral stance but also advocates some sort of practical response.





Comment by ishaan on How do you, personally, experience "EA motivation"? · 2019-08-16T21:18:17.015Z · EA · GW

The way I feel when the concept of a person in the abstract is invoked feels like a fainter version of the love I would feel towards a partner, a parent, a sibling, a child, a close friend, and towards myself. The feeling drives me to act in the direction of making them happy, growing their capabilities, furthering their ambitions, fulfilling their values, and so on. In addition to feeling happy when my loved ones are happy, there is also an element of pride when my loved ones grow or accomplish something, as well as fulfillment when our shared values are achieved. When engaging with the concept of abstract people, I can very easily imagine real people - each with a rich life history, unique ways of thinking, a web of connection, and so on... people who I would love if I were to know them. This motivates me to work hard to provide for their well-being and growth, to undergo risks and dangers and sacrifices to protect them from harm, to empower and facilitate them in their undertakings, and to secure a future in which they may flourish - in the same ordinary sense that I imagine many other people do for themselves, their children and families, their tribes and nations, all people, all beings, and so on. I feel a sense of being united with all people as we work together to steer the universe towards our shared purpose.


You've italicized "effectively" as part of the question, but I don't think I feel any real distinction between "wanting to help people" and "wanting to help people effectively" - when I'm doing a task, doing it effectively seems rather straightforwardly better than doing it ineffectively. "Effective altruism" does imply a level of impartiality regarding who benefits, which I don't possess (since I care about myself, my friends, my family, and so on more than strangers), but it is otherwise the same. Even if I were only to help people who I directly knew and personally loved in a non-abstract sense, I would still seek to do so effectively.


Comment by ishaan on What posts you are planning on writing? · 2019-07-26T07:57:32.301Z · EA · GW

That very EA survey data, combined with Florida et al.'s The Rise of the Megaregion data characterizing the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence, population size, etc. don't seem to matter much.

Comment by ishaan on What posts you are planning on writing? · 2019-07-25T22:33:14.417Z · EA · GW

Here's some stuff which I may consider writing when I have more time. The posts are currently too low on the priorities list to work on, but if anyone thinks one of these is especially interesting or valuable, I might prioritize it higher, or work on it a little when I need a break from my current main project. For the most part I'm unlikely to prioritize writing in the near future though because I suspect my opinions are going to rapidly change on a lot of these topics soon (or my view on their usefulness / importance / relevance).

1) Where Does EA take root? The characteristics of geographic regions which have unusually high numbers of effective altruists, with an eye towards guessing which areas might be fertile places to attempt more growth. (Priority 4/10, mostly because I already have the data due to working on another thing, but I'm not sure to what extent growth is a priority)

2) Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (Priority 3/10; may move up the priorities list later because I anticipate more data and relevant experience becoming available soon.)

3) A (as far as I know, novel) thought experiment meant to complicate utilitarianism, which has produced some very divergent responses when I've posed it in conversation so far. The intention is to call into question what exactly it is that we suppose ought to be maximized. (priority 3/10)

4) How to turn philosophical intuitions about "happiness", "suffering", "preference", "hedons", and other subjective phenomenological experiences into something which can be understood within a science/math framework, at least for the purposes of making moral decisions. (priority 3/10)

5) Applying information in posts (3) and (4) to make practical decisions about some moral "edge cases". Edge cases include things like: non-human life, computer algorithms, babies and fetuses, coma, dementia, severe brain damage and congenital abnormalities. (priority 3/10)

6) How are human moral and epistemic foundations formed? If you understand the "No Universally Compelling Arguments" set of concepts, this post is basically helping people apply that principle in practical terms, referencing real human minds and cultures and integrating various cultural anthropology and postmodernist works. (priority 2/10)



Comment by ishaan on Ways Frugality Increases Productivity · 2019-07-19T20:58:35.668Z · EA · GW

I super agree with the title, but I think the text actually undersells it! Runway not only increases your flexibility to not earn, but also reduces your stress and removes all sorts of psychologically difficult power dynamics that come with having a boss or otherwise being beholden to external factors for your well-being. (Yes, you may still have a boss or external factors, but now you won't need their continued approval or success to pay bills, and that makes all the difference.) Frugality also enables you to splurge without worrying when it really counts. Additionally, if you do not have any large and expensive possessions, tend to live in low-cost apartments, and don't have any dependents, you can move to whatever location it is most productive for you to be in with little to no overhead - whether that be across town or across the globe. Frugality in an urban context also forces close living situations (housemates), which can dramatically increase your social network. Further, you end up building scrappy skills and habits (e.g. negotiating apartments, meal planning, knowledge of public services, biking) which can really come in handy even when you're not being frugal.

If you have the privilege to be in circumstances where you are able to make money without spending most of it, it's good to take advantage of this if you can. Don't feel bad about it if you can't - it's not always simple or possible for everyone. But if you feel like it would be pretty easy for you to be frugal and you're choosing not to because you think spending a lot more makes you more productive, I strongly suggest reconsidering.

Another point worth considering is that if you are sufficiently frugal, and if "productivity" is truly your goal here, you can "increase your productivity" by taking that money and hiring a second person to work on your project with you. Can all your time-saving expenses increase your productivity more than a whole second person? (I'm sure there are some circumstances for which the answer is yes, but I imagine that is rare.)

Comment by ishaan on Considering people’s hidden motives in EA outreach · 2019-06-01T21:41:14.086Z · EA · GW

You've laid out your opinions clearly. The post is well cited, with interesting and informative accompanying sources. It's a good post. However, I disagree with some portions of the underlying attitudes (even while not particularly objecting to some of the recommended methods).

In an ideal world where all people are rational, the ideas mentioned in this forum post would be completely useless.

The thing is, this is a purely inside view. It sort of presupposes effective altruist ideas are correct, and that the only barrier to widespread adoption is irrationality, rather than any sensible sort of skepticism.

While humans can be irrational in distributing status, there is such a thing as legitimately earned status. If we put on our idealist hats for just a moment and forget all the extremely silly things humans accord status to, status can represent the "outside view" - if institutions we respect seem to respect EA, that should increase our confidence in EA ideas. Not because we're status-climbing apes, but because "capable of convincing me" shouldn't be a person's only bar for trusting an argument. One should sensibly understand the limited scope of one's own judgement regarding big topics.

Now, taking our idealist hats off, obviously we can't just trust what most people think, or consider all "high status" institutions equally legitimate. We have to be discerning. But there are institutions (such as academia, in my opinion) whose approval matters because it functions as legitimate external validation. It's not just social currency; it's well-earned social currency. Not only that, it's an opportunity to send our good ideas elsewhere to develop and mutate, as well as an opportunity to allow our bad ideas to be culled.

Unfortunately, people often are much less rational than we’d like to admit. Acknowledging this might be a pragmatic way for EA to improve outreach effectiveness.

The other issue is that when one is forming a broad, high-level strategy for engaging with the world, it should feel good. The words one uses should make one feel warm inside, not exasperated at the irrationality of the world and the necessity of stooping to slimy-feeling methods to win. Lest anyone irrationally (/s) dismiss this as "warm fuzzy altruism" (in Bosch's linked taxonomy), let me pragmatically (/s) employ an appeal to authority: Yudkowsky has made the same point. If it feels cynical and a touch Machiavellian, it usually will not ultimately produce morally wholesome results. Personally, I think if you want to really convince people, you shouldn't use methods that would make them feel like you tricked them if they knew what you were doing.

Not to mention...it's just sort of impractical for EA to attempt "we know you are irrational and we're not above pushing your irrationality buttons" strategies. EA organizations are generally scrupulous about transparency so that we can hold each other accountable. This means that any cynical outreach attempts will be transparent as well. In general my sense is that idealist institutions can't effectively wield some of these more cynical methods.

Also, as a sort of aside, I don't think there's anything irrational about appealing to emotions. The key is to appeal to emotions in a way that brings out behavior which is a true expression of people's values. Often, when someone has a "bad" ideology, it is emotions of compassion that bring them out of it. Learning to better engage people on an emotional level is not in any way opposed to presenting logical and rational cases for things.

How can EA help people increase their status?

...in a non-cynical way?

By acquiring well-earned legitimacy! Make real positive impacts in areas other people care about. That means you can also help individual effective altruists make real measurable impacts that they can put on their resume and thereby increase their career capital. Create arguments that other intellectuals agree with and cite. Mentor other people and give them skills. Create mechanisms for people to be public about their donations and personal sacrifices they might make to further a cause in a socially graceful way (it inspires others to do the same). These are all things that the Effective Altruist community is currently doing, and it's been working regardless of whether or not people are wearing suits.

What all these methods have in common is that they work with people's rationality (and true altruistic motives), rather than work around their irrationality (and hidden selfish motives) - they encourage involvement with EA because people are convinced that personally being involved with EA will help further their (altruistic, but also other) goals. The status-raising effects in these methods are secondary to real accomplishment; they put forth honest signals of competence and skill, which the larger society recognizes because it is actually valuable. The appeals to emotion work via being connected to the reality of actually accomplishing the tasks that those emotions are oriented towards.

So, I would generally agree with your call for EAs to think about more ways to gain legitimacy. I just want to strongly prioritize well-earned legitimacy... whereas this post comes off as though it's largely about gaining less legitimate forms of status. (Perhaps due to an implicit feeling that all status is illegitimate?)

Comment by ishaan on Which scientific discovery was most ahead of its time? · 2019-05-31T01:10:03.894Z · EA · GW

I think part of the "continuity" comes from the fact that things that were "ahead of their time" tended not to be useful yet and got lost. Or worse: perhaps several people had to independently come up with, support, and learn about an idea before it could actually be adopted; otherwise it just ended up sitting in some tinkerer's basement or a dusty old tome.

So, you can flip this question: Which discoveries and inventions seem to have occurred after their time (e.g. they were technologically possible, the prerequisite ideas were pretty well known, and they would have been immensely useful practically in that time and place) and why didn't civilization get at them before?

Comment by ishaan on There's Lots More To Do · 2019-05-30T23:21:23.492Z · EA · GW

Well, firstly, how much credence should we assign to the actual analysis in that post?

Before we begin talking about how we should behave "even if" the cost per life saved is much higher than 5k - is there some consensus as to whether the actual facts and analysis of that post are actually true or even somewhat credible? (separate from the conclusions, which, I agree, seem clearly wrong for all the reasons you said).

As in, if they had instead titled the post "Givewell's Cost-Per-Life-Saved Estimates are Impossibly Low" and concluded "if the cost per life saved estimate was truly that low, we could have already gone ahead and saved all the cheap lives, and the cost would be higher - so there's something deeply wrong here"... would people be agreeing with it?

(Because if so, shouldn't the relevant lower bound for cost per life saved in the impact evaluations be updated if they're wrong, and shouldn't that probably be the central point of discussion?

And if not... we should probably add a note clarifying, for any reader joining the discussion late, that we're not actually sure whether the post is correct or not before going into the implications of the conclusions. We certainly wouldn't want to start thinking that there aren't lives that can be saved at low cost if there actually are.)

Comment by ishaan on What exactly is the system EA's critics are seeking to change? · 2019-05-30T14:40:06.395Z · EA · GW

I think that's a little unfair. It wasn't just that he had an "unexamined assumption" - he declared that solidarity was the best way and named some organizations he liked, with no attempt at estimating or quantifying. And he's critiquing EA, an ideology whose claim to fame is impact evaluations. Can an EA saying "okay, that's great, I agree that could be true... but how about having a quantitative impact evaluation of any kind, at all, just to help cement the case" really be characterized as "whataboutism" / a methodology war?

(I don't think I agree with your first paragraph, but I do think it's fair to argue that "but not all readers are in high income countries" is whataboutism until I more fully expand on what I think the practical implications are for impact evaluation. I'm going to save the discussion about the practical problems that arise from being first-world-centric for a different post, or drop it, depending on how my opinion changes after I've put more thought into it.)