Call for Early Career Speakers 2020-09-03T16:13:17.124Z
Advice for an Undergrad 2019-07-02T16:36:43.651Z


Comment by ZacharyRudolph on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-06T18:25:20.763Z · EA · GW

I thought Open Phil's Criminal Justice Reform efforts would include work in this area, and it seems they've done some research into it. Some links from a quick Google search, for interested readers:

Comment by ZacharyRudolph on What key facts do you find are compelling when talking about effective altruism? · 2021-04-22T15:34:42.301Z · EA · GW

That 11,000 children died yesterday, will die today, and will die tomorrow from preventable causes. (I'm not sure that number is exactly right, but it's the one that comes to mind most readily.)

Comment by ZacharyRudolph on The importance of how you weigh it · 2021-04-03T03:32:23.990Z · EA · GW

TLDR: Very helpful post. Do you have any rough thoughts on how someone would pursue moral weighting research?

Wanted to say, first of all, that I found this post really helpful in crystallizing some thoughts I've had for a while. I've spent about a year researching population axiologies (admittedly at the undergrad level) and have concluded that something like a critical-level utilitarian view is close enough to a correct view that there's not much left to say. So, in trying to figure out where to go from there (and especially whether to pursue a career in philosophy), I've been trying to think of just what sort of questions would make a substantive difference in how we ought to approach EA goals. I couldn't think of any, but it still seemed like there was some gap between the plausible arguments presented so far and how to actually go about accomplishing those goals. I think you've clarified that gap here with "moral weighting." It seems similar to the "neutrality intuition" Broome talks about, where we don't want to (but basically have to) say there's a discrete threshold at which a life goes from worth living to not.

At any rate, moral weighting is the sort of work I hope to be able to contribute to. Are there any other articles/papers/posts you think would be relevant to the topic? Do you have any rough thoughts on the sort of considerations that would be operative here? Do any particular fields seem closest to you? I had been considering something like a wellbeing metric, such as the QALY or DALY in public health (à la the work Derek Foster posted a little while ago), as a promising direction.


Comment by ZacharyRudolph on Boundaries of Empathy and Their Consequences · 2019-08-02T16:52:55.775Z · EA · GW

I'm mostly using "person" as a stand-in for that thing in virtue of which something has rights or whatever. So if preference satisfaction turns out to be the person-making feature, then having the ability to have preferences satisfied is just what it is to be a person. In that case, not appropriately considering such a trait in non-humans would be prima facie wrong (and possibly arbitrary).

Comment by ZacharyRudolph on Boundaries of Empathy and Their Consequences · 2019-08-01T16:52:53.228Z · EA · GW

I'm familiar with the general argument, but I find it persuasive in the other direction. That is, I find it plausible that there are human animals for whom personhood fails to pertain, so ~(2). [Disclaimer: I'm not making any further claim to know what sort of humans those might be nor even that coming to know the fact of the matter in a given case is within our powers.] I don't know if consciousness is the right feature, but I worry that my intuitive judgements on these sorts of features are ad hoc (and will just pick out whatever group I already think qualifies).

Just to respond to the conclusion of that article: it doesn't seem at all obvious that humans should be treated equally despite having different abilities, at least in contexts where those abilities are relevant. The authors also seem to equivocate a bit on treatment/respect. I can hold that persons should be treated with equal respect, or equitably (or whatever), without holding that they should be treated equally. It also seems to me that personhood would be a binary feature: I don't think it makes sense to say that someone is more of a person than another and is thus deserving of more person privileges.

Comment by ZacharyRudolph on Boundaries of Empathy and Their Consequences · 2019-07-30T17:17:18.654Z · EA · GW

Yes! It's much more conducive to conversation now, and I've changed my vote accordingly.

To actually engage with your question: I personally find (1) to be the most motivating reason to adopt a more vegetarian diet, since I'm more compelled by the idea that my actions might be harming other persons. Regardless, (1) and (2) are both grounded in empirical observations (both of which are seriously questionable as to how much difference they make in the individual case: see this, and the number of confounding factors in claims that vegetarian diets cause better health).

I personally reject (3) because animals don't fall, in my ontology, under the category of morally significant beings (neither argument nor experience has yet made me think animals possess whatever it is that makes us consider at least most humans to be persons). I take this to be a morally relevant difference. (Though I would endorse many efforts to improve animal welfare, for reasons ultimately grounded in human person welfare.)

Moreover, regarding changing behavior, I can think of a number of additional reasons someone might not change their behavior that aren't related to empathy: they might find it supererogatory, they might have ingrained cultural reasons, they might not think they'll be able to make a difference, or they might face constraints of poverty and food injustice.

Thus, for me, an answer to (a) and (b) would be a convincing theory of personhood and a further convincing argument that animals share that person-making feature (or some other moral-relevance-making feature).

Comment by ZacharyRudolph on Boundaries of Empathy and Their Consequences · 2019-07-30T05:50:19.832Z · EA · GW

"(3) The ethical argument: killing or abusing an animal for culinary enjoyment is morally unsound"

I'm understanding abuse as being wrong by definition, à la how murder is by definition a wrongful killing. (3) thus seems to transparently be a case of arguing that something wrong is therefore wrong. But I agree this by itself wouldn't warrant downvoting; it was more that the generally dismissive tone of the writing came off as assuming the moral high ground, e.g. "to accept that this being with no identity, little conceivable intellect, and no means of advocating for itself or expressing relief or gratitude is suffering to an extent that is not justified by the mere desire of taste," "too inconvenient," "culinary enjoyment."

I felt I should comment instead of anonymously downvoting in case it was just a misunderstanding.

Comment by ZacharyRudolph on Boundaries of Empathy and Their Consequences · 2019-07-28T17:07:26.665Z · EA · GW

Downvoted for question-begging in the way you phrased the "ethical argument," and for descriptions like "the mere desire of taste." [Edit: I changed my vote based on the changes made.]

Comment by ZacharyRudolph on What are your thoughts on my career change options? (AI public policy) · 2019-07-19T21:05:09.779Z · EA · GW

In that case, it seems plausible that you (and your coworkers) will do more and better work if you're not just ascetically grinding away for decades (and if they aren't spending time around someone like that). Perhaps a good next step is to shadow, intern with, or talk to people currently doing these jobs to learn what they look like day to day?

Comment by ZacharyRudolph on What are your thoughts on my career change options? (AI public policy) · 2019-07-19T18:57:22.586Z · EA · GW

I don't think I can give much specific advice, but it doesn't seem like you're putting much weight on what you want to do. For instance, it seems like you're somewhat disappointed that 80k advised against working in AI ethics. If so, I'd suggest applying anyway, or considering good programs outside the top 10 (most school rankings seem fairly arbitrary in my experience anyway), with the knowledge that you might have to be a little more self-motivated to do "top 10"-quality work.

Alternatively, it might be that you simply haven't looked into Civil Service jobs as much, in which case maybe spend some time imagining/learning about that path. You might find yourself becoming just as excited about that work as about the AI stuff.

Comment by ZacharyRudolph on Want to Save the World? Enter the Priesthood · 2019-07-14T17:20:47.665Z · EA · GW

I'm not sure I understand your objection, but I feel like I should clarify that I'm not endorsing consequentialism as a moral criterion (that is, the thing in virtue of which something is right or wrong) so much as taking the "effective" part of effective altruism to imply using some sort of nonmoral consequentialist reasoning. As far as I understand (which isn't far), a Catholic moral framework would still allow for some sort of moral quantification (that some acts are more good than others, or are good to a greater degree), e.g. saints are a thing. If so, then it seems sensible to say a Catholic could take the results of consequentialist reasoning, as applied to her own framework, as morally motivating reasons to choose one act over another.

My worry is that if that framework holds only one value as most basic, then this consequentialist reasoning might (edit: depending on the value) validly lead to the conclusion that the way to do the most good is something radically different from the things that this subculture tends to endorse, and that this should count towards the concern that this subculture's actions could produce serious disvalue (edit: disvalue from, say, the moral consequentialist's point of view).

On the other hand if this framework is some sort of pluralist/virtue system (you mentioned a virtue of charity), then yeah I definitely agree that effective altruism could represent the pursuit of excellence in such a virtue or that "effectiveness" could be interpreted as a way of saying that the altruist is simply addressing what he takes to be his most stringent obligations with regard to his duty of charity. These, though, I think would count as different arguments (i.e. arguments which make sense to Catholics) than those which utilitarians take to give morally motivating reasons.

Comment by ZacharyRudolph on Want to Save the World? Enter the Priesthood · 2019-07-13T17:58:57.833Z · EA · GW

You're right. What I was trying to get at was that I presume Catholics would start with different answers to axiological questions like "what is the most basic good?" Where I might offer a welfarist answer, the Church might say "a closeness to God" (I'm not confident in that). Thus, if a Catholic altruist applies the "effective" element of EA reasoning, the way to do the most good in the world might end up looking like aggressive evangelism in order to save the most souls. So if we're trying to convince Catholic priests to encourage the Church to use its resources for the usual EA interventions, it seems like you'd need either to employ a different set of arguments than those used to convince welfarists/utilitarians, or to convince them to adopt a different answer to the question we started with.

Comment by ZacharyRudolph on Want to Save the World? Enter the Priesthood · 2019-07-12T16:39:26.126Z · EA · GW

I've spent some time seriously trying to convince a devout Catholic friend of mine about EA. The problem, as far as I can tell, is that EA and the Church have value systems that are almost directly at odds. I mean that if you take their value system seriously, the rational course of action isn't EA; at least, not in the manner meant here.

My understanding: essentially, the Church already has an entrenched longtermist view. It's just that the hugely disvaluable outcome is a soul or souls spending eternity in hell (or however long in purgatory). In an expected value analysis, eternity is always going to win out over whatever the lifespan of the universe is. To convince them to pursue traditional EA goals would, I think, require the extra step of motivating them to think those EA goals are more important.
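The dominance point can be sketched numerically. This is a toy illustration with made-up numbers: `math.inf` stands in for eternity, and the probabilities and payoffs are arbitrary, not anything from Catholic doctrine or EA cost-effectiveness estimates.

```python
import math

def expected_value(probability, payoff):
    """Expected value of a single outcome: probability times payoff."""
    return probability * payoff

# Even a vanishingly small chance of averting an infinite harm...
eternal_stakes = expected_value(1e-12, math.inf)

# ...dominates certainty of a good lasting the whole life of the universe
# (10**100 years here, as an arbitrary finite stand-in).
finite_stakes = expected_value(1.0, 1e100)

print(eternal_stakes > finite_stakes)  # True: infinity beats any finite payoff
```

Under standard arithmetic with infinities, any nonzero probability times an infinite payoff is itself infinite, so the comparison never favors the finite side no matter how large it is or how small the probability gets.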

Comment by ZacharyRudolph on Advice for an Undergrad · 2019-07-03T17:36:44.248Z · EA · GW

I started quantitatively "upskilling" almost exactly a year ago, after eschewing math classes for... a while. I spent this past academic year taking the calculus series, and I'm now working through MIT OpenCourseWare's multivariable course this summer so I can test out of it when I get to AC.

Contingent on testing out, it should only be two math classes/semester to meet the requirements.

Comment by ZacharyRudolph on Advice for an Undergrad · 2019-07-03T15:38:17.828Z · EA · GW

Do you recall which Facebook group/page? I searched the "Effective Altruism" group for keywords like major/college but didn't find anything.

Thanks for the class suggestion. I'll look into what they offer on that.

Comment by ZacharyRudolph on Advice for an Undergrad · 2019-07-03T15:33:53.445Z · EA · GW

Thank you, I've actually read that article before. I asked here because there seem to be all kinds of factors that would confound the usefulness of the advice there, e.g. it might be tailored to the average reader (or their ideal reader), and there are limitations on what they want to publicly advise.

I figured responses here might be less fitted to the curve and thus more useful, since I'm not confident I'm on that curve.