Comment by Sarthak on How can prediction markets become more trendy, legal, and accessible? · 2019-03-16T18:04:42.887Z · EA · GW

Augur is a decentralized protocol built on the blockchain that allows anyone to set up a prediction market about anything. Although I’m not sure about the legality, the fact that no single individual or institution owns or runs Augur suggests to me it might be easier to build niche/specific prediction markets on top of it.

Comment by Sarthak on Making discussions in EA groups inclusive · 2019-03-05T04:28:59.216Z · EA · GW

Can you clarify why you think your three criteria are enough to ascribe benign intentions the majority of the time? The point I was trying to get at was that there’s no necessary relation between thinking a lot about how to make the world a better place and making sacrifices to achieve that, and also having benign intentions towards other groups. People can simply define the “world” they are serving more narrowly.

A concrete example of how believing women have less worth than men could be harmful in evaluating charities: one charity helps women by X utils, and one charity helps men by X utils. (Perhaps charity #1 decreases the amount of work women need to do by providing a well for water, etc.) Believing women have less worth than men would lead to charity #2 strictly dominating charity #1, when they should actually be equally recommended.
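The comparison above can be sketched numerically. This is a toy illustration only; the function name, the weights, and the 0.8 discount are all hypothetical assumptions, not anything from the comment:

```python
# Toy illustration (all names and numbers are assumptions): an implicit
# biased moral weight on one group's welfare flips an otherwise tied ranking.

X = 10  # utils each charity produces for its beneficiaries

def evaluate(utils_to_women, utils_to_men, weight_women=1.0, weight_men=1.0):
    """Weighted-utility score an evaluator would assign to a charity."""
    return weight_women * utils_to_women + weight_men * utils_to_men

charity_1 = evaluate(utils_to_women=X, utils_to_men=0)  # helps women
charity_2 = evaluate(utils_to_women=0, utils_to_men=X)  # helps men
assert charity_1 == charity_2  # unbiased weights: the charities tie

# The same comparison under an implicit bias discounting women's welfare.
biased_1 = evaluate(X, 0, weight_women=0.8)
biased_2 = evaluate(0, X, weight_women=0.8)
assert biased_2 > biased_1  # charity #2 now strictly dominates charity #1
```

The point of the sketch is only that the bias can be entirely implicit in the weights while every other part of the evaluation stays rigorous.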

In terms of people having the ‘right’ philosophy: what I’m saying is that there’s nothing inherent to EA that prevents it from coexisting with misogyny. It’s not a core EA belief that women are equal to men, so we shouldn’t be surprised if EAs act as misogynists.

In any case, you admit that your criteria aren’t sufficient to screen out all negative intentions. When you say we should give the benefit of the doubt for the sake of the EA project, what you’re saying is that demographic minorities need to accept some level of malevolence in their communities in exchange for the privilege of contributing to the EA cause. Why should the burden be on them? Why not instead place the burden (if you can even call it that) on individuals who don’t have to worry about this systematic malevolence, asking them to think about what they say before they say it, which is what this document suggests we do?

(I’m not going to address each of your rebuttals individually because the main points I want to defend are the two I’ve tried to clarify above.)

Comment by Sarthak on Making discussions in EA groups inclusive · 2019-03-04T22:35:00.674Z · EA · GW

You identify the number one issue you have with activists from demographic groups as being that they are suspicious of EA motivations:

> One of the major problems driving social justice fear and offense in the US right now is the failure of right-wing and centrist actors to credibly demonstrate that they're not secretly harboring bias and hate. If I was going to pick something that activists for underrepresented demographics need to revise when they look at EA, it's that they should stop applying their default suspicions to the people in EA.

And you claim that they shouldn't be, because 1) EAs are trying to improve the world as much as possible, 2) EAs make personal sacrifices to do so, and 3) they accept the paradigm of philosophical and scientific rigor:

> But when people are already committing to improve the well-being of the world as much as possible, and when they are making personal sacrifices to be involved in this effort, and when they are accepting our paradigm of philosophical and scientific rigor, the least you can give them is a basic element of trust regarding their beliefs and motivations.

My question: do you hold this belief because you are an effective altruist, or because these criteria are sufficient to indicate positive intentions towards minorities 100% of the time?

An example: a devout traditionalist Buddhist man might also believe that he is trying to improve the world as much as possible, given the guidelines of his religious/spiritual tradition. He might very well also make personal sacrifices to that end, and he may well be doing so within a paradigm of philosophical rigor. Buddhists might also claim that the way they achieve these things is scientifically rigorous (there's a famous quote by the Dalai Lama where he says that if science refutes part of his teachings, the teachings must be rejected). But if this Buddhist man were raising questions about whether women deserve equal rights to men, does the fact that he satisfies your three criteria mean we should assume he has positive and respectful intentions towards women?

We could replace Buddhism with any religious or spiritual worldview that is compatible with western science; I'm simply more familiar with Buddhism, which is why I use it as my example.

My guess is that for most people who have read/upvoted your comment, the answer would be no, because none of these criteria reveal whether someone harbors bias or is malevolent about certain things. If you hold an implicit belief that other genders are inferior to men, for example, then no matter how much you care about bettering the world, and no matter how many sacrifices you make to better it, your intentions would still be about bettering the world for {approved-of population/men}. Philosophical and scientific rigor don't help either; although I'm not well versed in the history of racism, I do know that science and philosophy have been used to espouse discriminatory views plenty of times in the past. See craniometry and Immanuel Kant (whose original work discriminates against black people, if I recall correctly).

Comment by Sarthak on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T16:53:22.647Z · EA · GW

+1 for pointing out the hazard of having funding concentrated in the hands of a very few decision makers

Comment by Sarthak on Suffering of the Nonexistent · 2019-03-02T17:30:57.985Z · EA · GW

Got it. I would recommend cutting this post down by roughly half -- you take a while to get to the point, stating your thesis in roughly the 14th paragraph. I understand the desire to prepare the audience for what is coming, but the opening section before the thesis just seems overwrought to me. I know cutting is hard, but I'm confident the rewards in increased clarity will be worth it.

Comment by Sarthak on Suffering of the Nonexistent · 2019-03-02T00:17:51.732Z · EA · GW

Hi, I hope this doesn’t offend, but is this meant to be satire? I’m unclear if that’s the case (and I don’t think this post is well structured whether it’s meant to be satire or serious). If it’s not satire, I’ll engage more.

Comment by Sarthak on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T03:09:12.413Z · EA · GW

That makes sense; on a second look, I misread your first comment. I absolutely agree that the community shouldn’t have a “go big or go home” mentality, i.e. it shouldn’t be seen as impossible to do good if you can’t get an ultra-selective job at one of these organizations.

Comment by Sarthak on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T00:42:15.280Z · EA · GW

I would disagree with that line of reasoning -- as donors, we should be seeking to channel money into the most effective places it can do good, not trying to spread out the opportunity to do good to different individuals within the EA movement.

So if donor A can create 10 utils by donating $1 to Org Z, or create 5 utils and one new EA job by donating $1 to Org Y, the choice seems to be clear. My understanding is that our current research suggests that this is the case. (I also agree with Arepo, however, about donors potentially being irrational.)
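As a toy sketch of that comparison (every number below is a made-up assumption, including the value assigned to a new EA job, which the original comparison leaves unquantified):

```python
# Hypothetical donor comparison: effectiveness-first giving.
utils_per_dollar_org_z = 10  # Org Z: 10 utils per $1 donated
utils_per_dollar_org_y = 5   # Org Y: 5 utils per $1, plus one new EA job
utils_per_new_ea_job = 3     # assumed indirect value of one extra EA job

total_z = utils_per_dollar_org_z
total_y = utils_per_dollar_org_y + utils_per_new_ea_job

# Org Z wins unless a new EA job is itself expected to be worth
# more than the 5-util gap between the two organizations.
best_org = "Org Z" if total_z > total_y else "Org Y"
```

The sketch makes the implicit condition explicit: the choice is only "clear" if the indirect value of job creation falls below the direct effectiveness gap.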