Posts

EA Medicine Network 2021-07-30T19:46:03.781Z
Negative counterfactual impact of starting new charities 2021-03-07T13:01:26.281Z
Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure 2020-12-19T12:31:44.803Z
jushy's Shortform 2020-12-19T12:27:51.998Z

Comments

Comment by jushy (sanjush) on EA Medicine Network · 2021-08-04T13:54:47.624Z · EA · GW

Thank you! We most recently had an event with speakers generally discussing high-impact paths related to medicine. We've only recently recruited our event planners, so we haven't started planning new events yet, but we're open to suggestions! We're also hoping to have an in-person meetup around July 2022.

Comment by jushy (sanjush) on What key facts do you find are compelling when talking about effective altruism? · 2021-04-19T09:45:54.229Z · EA · GW

'The news' being less likely to draw our attention towards bad things that happen more frequently

Comment by jushy (sanjush) on What key facts do you find are compelling when talking about effective altruism? · 2021-04-19T09:44:49.430Z · EA · GW

Around £60,000 a year puts people in the top 1% of incomes.

Comment by jushy (sanjush) on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-12T10:59:41.103Z · EA · GW

Like others have said, I suspect that neutrality on making happy people isn't the majority view amongst EAs.

But I am neutral on making happy people, which means I am not particularly worried about extinction itself. I still think EA work surrounding extinction is a priority, though, because almost all of this work also helps to prevent other 'worst case scenarios' that do not necessarily involve extinction (https://forum.effectivealtruism.org/posts/nz26sqMNf7kfFDg8y/longtermism-which-doesn-t-care-about-extinction-implications).

I think a preference for extinction over a point in time with a small amount of suffering only holds if, on top of being 'time-agnostic' and neutral on making happy people, you are a strict negative utilitarian (you only care about reducing suffering, not about increasing pleasure), and if that small amount of suffering cannot be eliminated at a later point in time.

Comment by jushy (sanjush) on Forget replaceability? (for ~community projects) · 2021-04-10T18:42:34.048Z · EA · GW

Completely agree with your first two points!

With the third, I feel that the incentive to do things that are less effective in absolute terms but more appealing to non-EA funders already exists, and that whether someone should act on this incentive depends on how much effectiveness they would have to sacrifice and on how their project compares to other potential uses of the EA funding.

That being said, I'm of the (completely subjective) opinion that there are probably lots of cases where a 'pull non-EA funding towards a relatively more EA project' approach will have a greater counterfactual impact than a 'create a very EA project and get EA funding for it' approach. But as Owen said below, it's definitely a case-by-case kind of thing.

Comment by jushy (sanjush) on Voting reform seems overrated · 2021-04-10T16:57:46.853Z · EA · GW

As far as I'm aware, the main EA electoral reform org (electionscience.org) advocates for approval voting rather than proportional representation (PR), so I think a successful criticism of electoral reform as a cause area would require comparing approval voting and other voting system ideas to both PR and first-past-the-post (FPTP).

Comment by jushy (sanjush) on Forget replaceability? (for ~community projects) · 2021-03-31T17:27:29.332Z · EA · GW

I was thinking about this earlier; it feels like the negative counterfactual impact of starting new charities would be a very valuable topic for someone to investigate.

Also, I agree that "Where is the funding coming from?" is a super important question when assessing replaceability / the counterfactual, and I think a norm of seeking non-EA funding first for EA projects would be a good thing (though it may already be one; I'm not sure).

Comment by jushy (sanjush) on What Makes Outreach to Progressives Hard · 2021-03-16T21:24:38.245Z · EA · GW

Related to this, a reasonable question I can see progressives asking is "Why do EAs not prioritise anti-racism / feminism / LGBT rights?"

While EAs could argue that drug decriminalisation and criminal justice reform in America are closely related to anti-racism, I think there are some important philosophical questions to answer here related to how EA chooses to define a cause area, and why we don't seem to think of anti-racism / feminism / LGBT rights as cause areas. I have no idea what a good answer would look like.

I also don't think that the last discussion on this forum of how we define cause areas made much progress.

Comment by jushy (sanjush) on What Makes Outreach to Progressives Hard · 2021-03-15T00:05:32.949Z · EA · GW

I agree, but I feel that in practice the leftists I come across use the term to mean 'working against the class you grew up in', and use it exclusively for people who grew up poor and working class.

Comment by jushy (sanjush) on What Makes Outreach to Progressives Hard · 2021-03-14T18:37:50.496Z · EA · GW

Not OP or at Harvard Law, but anecdotally I know plenty of people who consider themselves leftists and fit in the anti-oppression cluster, yet wouldn't think that just going to Harvard Law makes you a class traitor. I think for many it would depend on what the Harvard Law grad actually did as a profession, eg - are you a corporate lawyer (class traitor) or a human rights lawyer (not a class traitor)?

That being said, I also think that the mainstreaming of social justice issues means that increasing numbers of people in the intersectionality/anti-oppression cluster don't know about / care about / support ideas about class struggle and class war, so aren't really 'leftists' in that sense of the word.

Comment by jushy (sanjush) on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-02-26T13:00:14.020Z · EA · GW

I think of 'equality' as having two major versions:

  1. Valuing each individual's interests equally, which leads to a focus on maximising overall utility
  2. Aiming for all individuals to have equal utility

EA and utilitarianism generally focus on the first version.
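
To make the contrast concrete, here's a minimal sketch (the numbers are purely illustrative, not from any real data) of how the two versions can rank the same pair of outcomes differently:

```python
# Two hypothetical welfare distributions across two individuals.
unequal = [10, 2]  # higher total utility, but unequal
equal = [5, 5]     # lower total utility, but equal

# Version 1 (value everyone's interests equally -> maximise the sum):
print(max([unequal, equal], key=sum))  # -> [10, 2]

# Version 2 (aim for equal utility -> minimise the gap between individuals):
print(min([unequal, equal], key=lambda d: max(d) - min(d)))  # -> [5, 5]
```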

In most cases, I think that focusing on either of these versions gives us the same conclusions, eg - EA approaches to global health, development and animal welfare.

In my opinion, longtermism is the only strand of EA where our attempts to maximise utility do not also bring us closer to all individuals having equal utility.

And I think your idea of 'just' and 'fair' actions depends on which version of equality you value more. I value the first version more, so the actions that I see as 'just' and 'fair' are almost entirely the ones that EA endorses.

Comment by jushy (sanjush) on jushy's Shortform · 2021-02-25T10:23:57.246Z · EA · GW

Is anyone aware of previous writings by EAs on founding think tanks as a way of having an impact over the long-term?

In the UK, I think the Fabian Society and the Centre for Policy Studies are continuing to influence British politics long after the deaths of their founders.

Comment by jushy (sanjush) on jushy's Shortform · 2021-01-27T10:05:20.707Z · EA · GW

Is anyone aware of any research / blog posts specifically on how much free-range hens suffer? Most of the ones I can find keep drifting away from this question.

Comment by jushy (sanjush) on Hilary Greaves: The collectivist critique of the EA movement · 2021-01-23T20:15:51.824Z · EA · GW

That's a good point; I think exploring collective action will become more important if/when EA becomes larger. I was thinking along similar lines about the effectiveness of petitions and protests.

Comment by jushy (sanjush) on Hilary Greaves: The collectivist critique of the EA movement · 2021-01-20T22:53:53.397Z · EA · GW

Is there a problem here with a lack of clarity over whether criticisms are targeted at EA principles vs the movement itself?

I think exploring the impacts of collective actions would be completely in line with EA principles, but there don't seem to be a lot of EAs actually doing this at present.

Comment by jushy (sanjush) on Being Inclusive · 2021-01-17T16:44:53.969Z · EA · GW

With respect to making effective charitable giving more accessible, I think it could make sense to de-emphasise the link between effective giving and effective altruism, and for some people to work on promoting effective giving more rapidly and widely than effective altruism itself.

Comment by jushy (sanjush) on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-19T17:50:33.878Z · EA · GW

Yes I am, thank you! I'll edit the post to clarify this. That would also explain the EA Survey considering X-risks and the Long-term future to be one category.

Comment by jushy (sanjush) on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-19T17:06:56.964Z · EA · GW

Thank you for your input! I agree with the point about co-operation with other value systems.

EDIT: as MichaelStJules pointed out, I think I was also mixing up existential risks (a broader term) with extinction risks (a narrower term).

Comment by jushy (sanjush) on What are some potential coordination failures in our community? · 2020-12-19T12:38:06.382Z · EA · GW

Not enough industry-specific EA networks, and the ones that exist aren't active or visible enough.

Comment by jushy (sanjush) on jushy's Shortform · 2020-12-19T12:27:52.360Z · EA · GW

Longtermism which doesn't care about Existential Risk - Implications of Benatar's asymmetry between pain and pleasure

I think a major implication of longtermism is that "we should care far more about problems which will cause suffering to many generations, or problems that will deprive many generations of pleasure".

But if, like me, you accept Benatar's argument on the asymmetry of suffering and pleasure, i.e. that a lack of pleasure isn't a bad thing if no one is around to miss it, then the "existential risk component" of an existential risk isn't a problem, since depriving many generations of pleasure by preventing them from existing in the first place isn't a bad thing.

However, many existential risks are "progressive" in the sense that they will cause suffering for many generations before causing extinction, so they would still be a cause for concern. But the fact that they are an "existential risk" wouldn't really be relevant.

On the other hand, some existential risks that EAs are concerned about could only affect a small number of generations (eg - very large asteroids), and could almost entirely be ignored in comparison to issues which could plague many generations. 

I think a reasonable number of people agree with Benatar, because I think most people don't see depriving an individual of pleasure by preventing them from existing as a 'con' of contraception.

Originally posted to r/effectivealtruism because I thought I was missing something obvious.

Comment by jushy (sanjush) on Growth and the case against randomista development · 2020-08-11T14:13:36.617Z · EA · GW

Could it be better to consider median income rather than GDP per capita (a mean) when thinking about the welfare of individuals, given how skewed income distributions are?
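
To illustrate the point, here's a minimal sketch assuming a lognormal income distribution (a common stylised model of income; the parameters are made up for illustration): the mean, which is what GDP per capita tracks, sits well above the median, i.e. above what the typical person earns.

```python
# Minimal sketch: in a right-skewed (here, lognormal) income distribution,
# the mean exceeds the median, so a per-capita mean overstates the income
# of the typical individual. Parameters are illustrative, not real data.
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=1.0, size=100_000)

print(f"mean income:   {incomes.mean():,.0f}")      # ≈ exp(10.5) ≈ 36,000
print(f"median income: {np.median(incomes):,.0f}")  # ≈ exp(10)   ≈ 22,000
```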