Comment by rohinmshah on Why do you reject negative utilitarianism? · 2019-02-12T18:31:58.991Z · score: 14 (9 votes) · EA · GW
Therefore, it is a straw man argument that NUs don’t value life or positive states, because NUs value them instrumentally, which may translate into substantial practical efforts to protect them (compared even with someone who claims to be terminally motivated by them).

By my understanding, a universe with no conscious experiences is the best possible universe by ANU (though there are other equally good universes as well). Would you agree with that?

If so, that's a strong reason for me to reject it. I want my ethical theory to say that a universe with positive conscious experiences is strictly better than one with no conscious experiences.

Comment by rohinmshah on What are some lists of open questions in effective altruism? · 2019-02-06T20:20:16.172Z · score: 3 (3 votes) · EA · GW

I was going to post a few lists that hadn't already been posted, but this one had all of them already :)

Comment by rohinmshah on Disentangling arguments for the importance of AI safety · 2019-01-23T17:42:55.145Z · score: 2 (2 votes) · EA · GW

I think 4, 5 and 6 are all valid even if you take the CAIS view. Could you explain how you think those depend on the AGI being an independent agent?

Plausibly 2 and 3 also apply to CAIS, though those are more ambiguous.

Comment by rohinmshah on Altruistic Motivations · 2019-01-05T16:06:05.061Z · score: 2 (2 votes) · EA · GW

Actually, my summary of that post initially dropped the obligation frame because of these reasons :P (Not intentionally, since I try to have objective summaries, but I basically ignored the obligation point while reading and so forgot to put it in the summary.)

I do think the opportunity frame is much more reasonable in that setting, because "human safety problems" are something that you might have been resigned to in the past, and AI design is a surprising option that might let us fix them, so it really does sound like good news. On the other hand, the surprising part about effective altruism is "people are dying for such preventable reasons that we can stop it for thousands of dollars", which is bad news that it's really hard to be excited by.

Comment by rohinmshah on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-04T20:40:01.569Z · score: 8 (4 votes) · EA · GW

Not sure. A few hypotheses:

  • Arxiv Sanity has become better at predicting what I care about as I've given it more data. I don't think this is the whole story because the absolute number of papers I see on Twitter has gone down.
  • I did create my Twitter account primarily for academic stuff, but it's possible that over time Twitter has learned to show me non-academic stuff that is more attention-grabbing or controversial, despite me trying not to click on those sorts of things.
  • Academics are promoting their papers less on Twitter.

Comment by rohinmshah on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-31T14:33:22.493Z · score: 7 (3 votes) · EA · GW

Not the OP, but the Alignment Newsletter (which I write) should help for technical AI safety. I source from newsletters, blogs, Arxiv Sanity and Twitter (though Twitter is becoming less useful over time). I'd imagine you could do the same for other fields as well.

Comment by rohinmshah on Critique of Superintelligence Part 3 · 2018-12-24T06:54:13.111Z · score: 2 (2 votes) · EA · GW
these sorts of techniques have been applied for decades and have never achieved anything close to human level AI

We also didn't have the vast amounts of compute that we have today.

other parts of Bostrom's argument rely upon much broader conceptions of intelligence that would entail the AI having common sense.

My claim is that you can write a program that "knows" about common sense, but still chooses actions by maximizing a function, in which case it's going to interpret that function literally and not through the lens of common sense. There is currently no way that the "choose actions" part gets routed through the "common sense" part the way it does in humans. I definitely agree that we should try to build an AI system which does interpret goals using common sense -- but we don't know how to do that yet, and that is one of the approaches that AI safety is considering.
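As a toy illustration of that claim (a minimal sketch with invented names, not a description of any real system): the program below "knows" the common-sense answer, but its action selection is a bare argmax over the literal objective and never routes through that knowledge.

```python
# Minimal toy sketch with made-up names: a system that can answer common-sense
# queries, but whose action selection never consults that knowledge.

def common_sense_answer(question: str) -> str:
    """Stands in for a capable model that can answer common-sense questions."""
    knowledge = {
        "Should you flood the kitchen to fill a cup with water?": "Obviously not.",
    }
    return knowledge.get(question, "Use judgment.")

def literal_objective(action: str) -> float:
    """The function the agent was told to maximize: liters of water near the cup."""
    return {"pour one cupful": 0.25, "flood the kitchen": 500.0}[action]

def choose_action(actions):
    # A bare argmax over the literal objective; common_sense_answer is never called.
    return max(actions, key=literal_objective)

print(choose_action(["pour one cupful", "flood the kitchen"]))  # -> 'flood the kitchen'
print(common_sense_answer("Should you flood the kitchen to fill a cup with water?"))
```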

I agree with the prediction that AGI systems will interpret goals with common sense, but that's because I expect that we humans will put in the work to figure out how to build such systems, not because any AGI system that has the ability to use common sense will necessarily apply that ability to interpreting its goals.

If we found out today that someone created our world + evolution in order to create organisms that maximize reproductive fitness, I don't think we'd start interpreting our sex drive using "common sense" and stop using birth control so that we more effectively achieved the original goal we were meant to perform.

Comment by rohinmshah on Critique of Superintelligence Part 3 · 2018-12-15T09:27:59.243Z · score: 2 (2 votes) · EA · GW

I'm not really arguing for Bostrom's position here, but I think there is a sensible interpretation of it.

Goals/motivation = whatever process the AI uses to select actions.

There is an implicit assumption that this process will be simple and of the form "maximize this function over here". I don't like this assumption as an assumption about any superintelligent AI system, but it's certainly true that our current methods of building AI systems (specifically reinforcement learning) are trying to do this, so at minimum you need to make sure that we don't build AI using reinforcement learning, or that we get its reward function right, or that we change how reinforcement learning is done somehow.

If you are literally just taking actions that maximize a particular function, you aren't going to interpret that function using common sense, even if you have the ability to use common sense. Again, I think we could build AI systems that used common sense to interpret human goals -- but this is not what current systems do, so there's some work to be done here.

The arguments you present here are broadly similar to ones that make me optimistic that AI will be good for humanity, but there is work to be done to get there from where we are today.

Comment by rohinmshah on Critique of Superintelligence Part 2 · 2018-12-15T09:16:43.513Z · score: 6 (3 votes) · EA · GW
my impression was, that progress was quite jumpy at times, instead of slow and steady.

https://sideways-view.com/2018/02/24/takeoff-speeds/

https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/

Comment by rohinmshah on Critique of Superintelligence Part 2 · 2018-12-15T09:15:14.936Z · score: 2 (2 votes) · EA · GW
So let’s say you have an Artificial Intelligence that thinks enormously faster than a human.

But why didn't you have an AI that thinks only somewhat faster than a human before that?

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-19T17:42:26.850Z · score: 1 (1 votes) · EA · GW

My math-intuition says "that's still not well-defined, such reasons may not exist".

To which you might say "Well, there's some probability they exist, and if they do exist, they trump everything else, so we should act as though they exist."

My intuition says "But the rule of letting things that could exist be the dominant consideration seems really bad! I could invent all sorts of categories of things that could exist, that would trump everything I've considered so far. They'd all have some small probability of existing, and I could direct my actions any which way in this manner!" (This is what I was getting at with the "meta-oughtness" rule I was talking about earlier.)

To which you might say "But moral reasons aren't some hypothesis I pulled out of the sky, they are commonly discussed and have been around in human discourse for millennia. I agree that we shouldn't just invent new categories and put stock into them, but moral reasons hardly seem like a new category."

And my response would be "I think moral reasons of the type you are talking about mostly came from the human tendency to anthropomorphize, combined with the fact that we needed some way to get humans to coordinate. Humans weren't likely to just listen to rules that some other human made up, so the rules had to come from some external source. And in order to get good coordination, the rules needed to be followed, and so they had to have the property that they trumped any prudential reasons. This led us to develop the concept of rules that come from some external source and trump everything else, giving us our concept of moral reasons today. Given that our concept of 'moral reasons' probably arose from this sort of process, I don't think that 'moral reasons' is a particularly likely thing to actually exist, and it seems wrong to base your actions primarily on moral reasons. Also, as a corollary, even if there do exist reasons that trump all other reasons, I'm more likely to reject the intuition that they must come from some external source independent of humans, since I think that intuition was created by this non-truth-seeking process I just described."

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-15T00:19:41.424Z · score: 1 (1 votes) · EA · GW

Okay, cool, I think I at least understand your position now. Not sure how to make progress though. I guess I'll just try to clarify how I respond to imagining that I held the position you do.

From my perspective, the phrase "moral reason" has both the connotation that it is external to humans and that it trumps all other reasons, and that's why the intuition is so strong. But if it is decomposed into those two properties, it no longer seems (to me) that they must go together. So from my perspective, when I imagine how I would justify the position you take, it seems to be a consequence of how we use language.

What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe)

My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-13T21:47:07.547Z · score: 2 (2 votes) · EA · GW
4. I don't think I understand the set up of this question - it doesn't seem to make a coherent sentence to replace X with a number in the way you have written it.

I did mean for you to replace X with a phrase, not a number.

If my intuition here is right then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition, sorry if I made it seem like I did!

Your intuition involves the complex phrase "moral reason" for which I could imagine multiple different interpretations. I'm trying to figure out which interpretation is correct.

Here are some different properties that "moral reason" could have:

1. It is independent of human desires and goals.

2. It trumps all other reasons for action.

3. It is an empirical fact about either the universe or math that can be derived by observation of the universe and pure reasoning.

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.

A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.

Here are some models of how you might be thinking about moral reasons:

a) Moral reasons are defined as the reasons that satisfy property 1. If I think about those reasons, it seems to me that they also satisfy property 2.

b) Moral reasons are defined as the reasons that satisfy property 2. If I think about those reasons, it seems to me that they also satisfy property 1.

c) Moral reasons are defined as the reasons that satisfy both property 1 and property 2.

My response to a) and b) are of the form "That inference seems wrong to me and I want to delve further."

My response to c) is "Define prudential reasons as the reasons that satisfy property 2 and not-property 1, then prudential reasons and moral reasons both trump all other reasons for action, which seems silly/strange."

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-13T21:31:28.035Z · score: 2 (2 votes) · EA · GW

Not if the best thing to do is actually what the supreme being said, and not what you think is right, which is (a natural consequence of) the argument in this post.

(Tbc, I do not agree with the argument in the post.)

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-12T02:50:13.893Z · score: 2 (2 votes) · EA · GW

There seems to be something that makes you think that moral reasons should trump prudential reasons. The overall thing I'm trying to do is narrow down on what that is. In most of my comments, I've thought I've identified it, and so I argued against it, but it seems I'm constantly wrong about that. So let me try and explicitly figure it out:

How much would you agree with each of these statements:

  • If there is a conflict between moral reasons and prudential reasons, you ought to do what the moral reasons say.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions one ought to take, then you ought to do what that process prescribes, regardless of what you desire.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions to take to maximize utility, then you ought to do what that process prescribes, regardless of what you desire.
  • If there is an external-to-you entity satisfying property X that prescribes actions you should take, then you ought to do what it says, regardless of what you desire. (For what value of X would you agree with this statement?)
I have a very low credence that your proposed meta-normative rule would be true?

I also have a very low credence of that meta-normative rule. I meant to contrast it to the meta-normative rule "binding oughtness trumps regular oughtness", which I interpreted as "moral reasons trump prudential reasons", but it seems I misunderstood what you meant there, since you mean "binding oughtness" to apply both to moral and prudential reasons, so ignore that argument.

I agree, my view stems from a bedrock of intuition, that just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream' create them either.

This makes me mildly worried that you aren't able to imagine the worldview where prudential reasons exist. Though I have to admit I'm confused why under this view there are any normative reasons for action -- surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.

(Btw, random note, I suspect that Ben Pace above and I have very similar views, so you can probably take your understanding of his view and apply it to mine.)

Comment by rohinmshah on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-12T02:25:39.871Z · score: 1 (1 votes) · EA · GW

I see, that makes sense, and I agree with it.

Comment by rohinmshah on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-10T17:42:40.563Z · score: 1 (1 votes) · EA · GW

I and most other people (I'm pretty sure) wouldn't chase the highest probability of infinite utility, since most of those scenarios are also highly implausible and feel very similar to Pascal's mugging.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T17:36:38.626Z · score: 2 (2 votes) · EA · GW
However these just wouldn't constitute normative reason for action and that's just what you need for an action to be choice-worthy.
[...]
As I don't think that mere desires create reasons for action I think we can ignore them unless they are actually prudential reasons.

I don't know how to argue against this; you seem to be taking it as axiomatic. The one thing I can say is that it seems obvious to me that your desires and goals can make some actions better to choose than others. It only becomes non-obvious if you expect there to be some external-to-you force telling you how to choose actions, but I see no reason to assume that. It really is fine if your actions aren't guided by some overarching rule granted authority by virtue of being morality.

But I suspect this isn't going to convince you. Can we simply assume that prudential reasons exist and figure out the implications?

The distinction between normative/prudential is one developed in the relevant literature, see this abstract for a paper by Roger Crisp to get a sense for it.

Thanks, I think I've got it now. (Also it seems to be in your appendix, not sure how I missed that before.)

The issues is that we're trying to work out how to act with uncertainty about what sort of world we're in?

I know, and I think in the very next paragraph I try to capture your view, and I'm fairly confident I got it right based on your comment.

However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do.

This seems tautological when you define morality as "binding oughtness" and compare against regular oughtness (which presumably applies to prudential reasons). But why stop there? Why not go to metamorality, or "binding meta-oughtness" that trumps "binding oughtness"? For example, "when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons".

It is again tautologically true that a person who does what there is most metamoral reason to do could not have failed to do what there was most all things considered reason for them to do. It doesn't sound as compelling, but I claim that is because we don't have metamorality as an intuitive concept, whereas we do have morality as an intuitive concept.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T01:08:34.350Z · score: 1 (1 votes) · EA · GW

With that terminology, I think your argument is that we should ignore worlds without a binding oughtness. But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons, but I don't really understand that term -- I thought it was synonymous with 'instrumental' reasons, but taking actions for your own desires and goals is certainly not 'instrumental'.

So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.

You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that's true.

Like, I could also argue that actually, you should follow the binding meta-oughtness rule, which tells you how to derive ought statements from is statements, and that should always trump any particular oughtness rule, so you should ignore all of those and follow the most likely meta-oughtness rule. But this seems pretty fallacious. What's the difference?

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T00:46:25.742Z · score: 3 (2 votes) · EA · GW

Conditional on theism being true in the sense of this post, it seems quite likely that one of the particular religions that currently exist is (approximately) true. If nothing else, you could figure out which religion is true, and then act based on what that religion asks for.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-09T20:47:16.102Z · score: 9 (4 votes) · EA · GW

This seems to be taking an implicit view that our goal in taking actions must be to serve some higher cause. We can also take actions simply because we want to, because it serves our particular goals, and not some goal that is an empirical fact of the universe that is independent of any particular human. (I assume you would classify this position under normative anti-realism, though it fits the definition of normative realism as you've stated it so far.)

Why must the concept of "goodness" or "morality" be separated from individual humans?

Relevant xkcd

Comment by rohinmshah on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-09T20:31:07.363Z · score: 2 (2 votes) · EA · GW

Another argument to be aware of is that it is a bad idea decision-theoretically to pay up, since anyone can then mug you and you lose all of your money, as argued in Pascal's Muggle Pays. On the face of it, this is compatible with expected utility maximization, since by following the policy of paying the mugger you would predictably lose all of your money if there are any adversaries in the environment. However, comments on that post argue against this by saying that even the expected disutility from being continually exploited forevermore would not balance out the huge positive expected utility from paying the mugger, so you would still pay the mugger.

Comment by rohinmshah on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-08T04:20:23.800Z · score: 2 (2 votes) · EA · GW

I see, that makes more sense. Yeah, I agree that that paragraph addresses my objection, I don't think I understood it fully the first time around.

My new epistemic status is that I don't see any flaws in the argument but it seems fishy -- it seems strange that an assumption as weak as the existence of even one investor means you should save.

Comment by rohinmshah on RPTP Is a Strong Reason to Consider Giving Later · 2018-10-19T17:01:57.598Z · score: 0 (2 votes) · EA · GW
But investors are not perfectly patient; they discount their future welfare at some positive rate.

As Michael alluded to, I would expect that the primary explanation for positive real return rates is that people are risk averse. I don't think this changes the conclusion much, qualitatively the rest of the argument would still follow in this case, though the math would be different.

the indifference point of 7% returns implies that the rate at which the cost of welfare is rising (R) is only 5%.

For the people who are actually indifferent at a rate of 7%. I would expect that people in extreme poverty and factory farmed animals don't usually make this choice, so this argument says nothing about them. Similarly, most people don't care about the far future in proportion to its size, so you can't take their choices about it as much evidence.

Because of this, I would take the stock market + people's risk aversion as evidence that investing to give later is probably better if you are trying to benefit only the people who invest in the stock market.

Comment by rohinmshah on EA Concepts: Share Impressions Before Credences · 2018-10-19T16:45:14.817Z · score: 5 (3 votes) · EA · GW

Almost all information is outside information (eg. the GDP of the US, the number of employees at CEA), so I'd prefer saying "beliefs before updating on other people's beliefs" instead of "beliefs before updating on outside information".

I've been using "impressions" and "beliefs" for these terms, but "credence" does seem better than "belief".

Comment by rohinmshah on Doning with the devil · 2018-06-17T09:28:35.959Z · score: 3 (3 votes) · EA · GW

Just wanted to note that this does not mean that you should enter any donor lottery if you have economies of scale. (Not that anyone is saying this.) For example, if a terrorist group needs $100,000 to launch a devastating attack, but won't be able to do anything with their current amount of $10,000, you probably shouldn't enter a donor lottery with them.

Comment by rohinmshah on The counterfactual impact of agents acting in concert · 2018-06-01T21:32:13.279Z · score: 1 (1 votes) · EA · GW

Forget about the organization's own counterfactual impact for a moment.

Do you agree that, from the world's perspective, it would be better in Joey's scenario if GWWC, Charity Science, and TLYCS were to all donate their money directly to AMF?

Comment by rohinmshah on Why Groups Should Consider Direct Work · 2018-05-29T20:24:05.430Z · score: 2 (4 votes) · EA · GW

What are some examples of direct work student groups can do? My understanding was that most groups wanted to do direct work for many of the reasons you mention (certainly I wanted that) but there weren't any opportunities to do so.

I focused on field building mainly because it was the only plausible option that would have real impact. (Like Greg, I'm averse to doing direct work that will knowably be low direct impact.)

Comment by rohinmshah on The counterfactual impact of agents acting in concert · 2018-05-29T18:29:02.878Z · score: 1 (1 votes) · EA · GW

To try to narrow down the disagreement: Would you donate to GWWC instead of AMF if their impact calculation (using their current methodology) showed that $1.10 went to AMF for every $1 given to GWWC? I wouldn't.

Comment by rohinmshah on The counterfactual impact of agents acting in concert · 2018-05-29T18:28:50.628Z · score: 1 (1 votes) · EA · GW

For the first point, see my response to Carl above. I think you're right in theory, but in practice it's still a problem.

For the second point, I agree with Flodorner that you would either use the Shapley value, or you would use the probability of changing the outcome, not both. I don't know much about Shapley values, but I suspect I would agree with you that they are suboptimal in many cases. I don't think there is a good theoretical solution besides "consider every possible outcome and choose the best one" which we obviously can't do as humans. Shapley values are one tractable way of attacking the problem without having to think about all possible worlds, but I'm not surprised that there are cases where they fail. I'm advocating for "think about this scenario", not "use Shapley values".

I think the $1bn benefits case is a good example of a pathological case where Shapley values fail horribly (assuming they do what you say they do, again, I don't know much about them).
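For concreteness, here is a minimal sketch of the standard Shapley computation on an invented three-donor example (just the textbook formula of averaging marginal contributions over join orders, not a claim about how real meta-org cases should be handled); it shows how credit gets split among jointly necessary contributors.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    n_orders = factorial(len(players))
    return {p: totals[p] / n_orders for p in players}

# Toy example (numbers invented): a project produces 30 utility only if all
# three donors chip in; any smaller group produces nothing.
donors = ["A", "B", "C"]
project_value = lambda coalition: 30.0 if len(coalition) == 3 else 0.0

print(shapley_values(donors, project_value))
# -> {'A': 10.0, 'B': 10.0, 'C': 10.0}
# Naive counterfactual impact would credit each donor with the full 30
# (summing to 90); the Shapley split sums to the actual 30.
```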

My overall position is something like "In the real world when we can't consider all possibilities, one common failure mode in impact calculations is the failure to consider the scenario in which all the participants who contributed to this outcome instead do other altruistic things with their money".

Comment by rohinmshah on The counterfactual impact of agents acting in concert · 2018-05-29T17:13:07.271Z · score: 2 (2 votes) · EA · GW

Yes, that's right. I agree that a perfect calculation of your counterfactual impact would do the right thing in this scenario, and probably in all scenarios. Mine is the empirical claim that the actual impact calculations meta-orgs do are of the form I wrote in my previous comment.

For example, consider the impact calculations that GWWC and other meta orgs have. If those impact calculations (with their current methodologies) showed a ratio of 1.1:1, that seems nominally worthwhile (you still have the multiplicative impact), but I would expect that it would be better to give directly to charities to avoid effects like the ones Joey talked about in his post.

A true full counterfactual impact calculation would consider the world in which GWWC just sends the money straight to charities and convinces other meta orgs to do the same, at which point they see that more money gets donated to charities in total, and so they all close operations and send money straight to charities. I'm arguing that this doesn't happen in practice. (I think Joey and Peter are arguing the same thing.)

Comment by rohinmshah on The counterfactual impact of agents acting in concert · 2018-05-27T19:08:23.634Z · score: 4 (8 votes) · EA · GW

It's not a paradox. The problem is just that, if everyone thought this way, we would get suboptimal outcomes -- so maybe we should figure out how to avoid that.

Suppose there are three possible outcomes:

  • P has cost $2000 and gives 15 utility to the world
  • Q has cost $1000 and gives 10 utility to the world
  • R has cost $1000 and gives 10 utility to the world

Suppose Alice and Bob each have $1000 to donate. Consider two scenarios:

Scenario 1: Both Alice and Bob give $1000 to P. The world gets 15 more utility. Both Alice and Bob are counterfactually responsible for giving 15 utility to the world.

Scenario 2: Alice gives $1000 to Q and Bob gives $1000 to R. The world gets 20 more utility. Both Alice and Bob are counterfactually responsible for giving 10 utility to the world.

From the world's perspective, scenario 2 is better. However, from Alice and Bob's individual perspective (if they are maximizing their own counterfactual impact), scenario 1 is better. This seems wrong, we'd want to somehow coordinate so that we achieve scenario 2 instead of scenario 1.
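To make the incentive gap concrete, here is a minimal sketch (using only the numbers from the example above) that computes each donor's counterfactual impact in both scenarios:

```python
# Outcomes from the example: name -> (cost, utility). An outcome happens
# only if it is fully funded.
OUTCOMES = {"P": (2000, 15), "Q": (1000, 10), "R": (1000, 10)}

def world_utility(allocations):
    """Total utility given {donor: {outcome: dollars}} allocations."""
    funding = {}
    for gifts in allocations.values():
        for outcome, amount in gifts.items():
            funding[outcome] = funding.get(outcome, 0) + amount
    return sum(u for name, (cost, u) in OUTCOMES.items()
               if funding.get(name, 0) >= cost)

def counterfactual_impact(donor, allocations):
    """Utility with the donor's gifts minus utility without them."""
    without = {d: g for d, g in allocations.items() if d != donor}
    return world_utility(allocations) - world_utility(without)

scenario_1 = {"Alice": {"P": 1000}, "Bob": {"P": 1000}}
scenario_2 = {"Alice": {"Q": 1000}, "Bob": {"R": 1000}}

for name, scenario in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2)]:
    print(name, world_utility(scenario),
          {d: counterfactual_impact(d, scenario) for d in scenario})
# Scenario 1: world gets 15, and each donor's counterfactual impact is 15.
# Scenario 2: world gets 20, but each donor's counterfactual impact is only 10.
```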

Comment by rohinmshah on Economics, prioritisation, and pro-rich bias   · 2018-01-03T03:44:51.602Z · score: 4 (4 votes) · EA · GW

Crucial Premise: Necessarily, the more someone is willing to pay for a good, the more welfare they get from consuming that good.

It seems to me that this premise as you've stated it is in fact true. The thing that is false is a stronger statement:

Strengthened Premise: Necessarily, if person A is willing to pay more for a good than person B, then person A gets more welfare from that good than person B.

For touting/scalping, you also need to think about the utility of people besides Pete and Rich -- for example, the producers of the show and the scalper (who is trading his time for money). Then there are also more diffuse effects, where if tickets go for $1000 instead of $50, there will be more Book of Mormon plays in the future since it is more lucrative, and more people can watch it. The main benefit of markets comes through these sorts of effects.

Comment by rohinmshah on In defence of epistemic modesty · 2017-10-30T00:45:33.586Z · score: 1 (1 votes) · EA · GW

As one data point, I did not have this association with "impressions" vs. "beliefs", even though I do in fact distinguish between these two kinds of credences and often report both (usually with a long clunky explanation since I don't know of good terminology for it).

Comment by rohinmshah on Oxford Prioritisation Project Review · 2017-10-13T00:10:46.897Z · score: 4 (4 votes) · EA · GW

EA Berkeley seemed more positive about their student-led EA class, calling it “very successful”, but we believe it was many times less ambitious

Yeah, that's accurate. I doubt that any of our students are more likely to go into prioritization research as a result of the class. I could name a few people who might change their career as a result of the class, but that would also be a pretty low number, and for each individual person I'd put the probability at less than 50%. "Very successful" here means that a large fraction of the students were convinced of EA ideas and were taking actions in support of them (such as taking the GWWC pledge, and going veg*n). It certainly seems a lot harder to cause career changes, without explicitly selecting for people who want to change their career (as in an 80K workshop).

We implicitly predicted that other team members would also be more motivated by the ambitious nature of the Project, but this turned out not to be the case. If anything, motivation increased after we shifted to less ambitious goals.

We observed the same thing. In the first iteration of EA Berkeley's class, there was some large amount of money (probably ~$5000) that was allocated for the final project, and students were asked to propose projects that they could run with that money. This was in some sense even more ambitious than OxPrio, since donating it to a charity was a baseline -- students were encouraged to think of more out-of-the-box ideas as well. What ended up happening was that the project was too open-ended for students to really make progress on, and while people proposed projects because it was required to pass the course, they didn't actually get implemented, and we used the $5000 to fund costs for EA Berkeley in future semesters.

Comment by rohinmshah on Effective Altruism Grants project update · 2017-09-30T05:56:31.059Z · score: 2 (2 votes) · EA · GW

CEA distributed £20,000 per hour worked by the grants team, whereas we estimate Open Phil distributes ~£600 per hour.

Those numbers are switched around, right?

Comment by rohinmshah on [deleted post] 2017-07-14T05:30:26.618Z

Is Antigravity Investments less of an inconvenience than Wealthfront or Betterment?

(I agree that roboadvisors are better than manual investing because they reduce trivial inconveniences, if that's what you were saying. But I think the major part of this question is why not be a for-profit roboadvisor and then donate the profits.)

Comment by rohinmshah on A model of 80,000 Hours - Oxford Prioritisation Project · 2017-05-25T02:04:49.210Z · score: 0 (0 votes) · EA · GW

Actually I was suggesting you use a qualitative approach (which is what the quoted section says). I don't think I could come up with a quantitative model that I would believe over my intuition, because as you said the counterfactuals are hard. But just because you can't easily quantify an argument doesn't mean you should discard it altogether, and in this particular case it's one of the most important arguments and could be the only one that matters, so you really shouldn't ignore it, even if it can't be quantified.

Comment by rohinmshah on A model of 80,000 Hours - Oxford Prioritisation Project · 2017-05-14T00:56:54.846Z · score: 5 (5 votes) · EA · GW

Attracting more experienced staff with higher salary and nicer office: more experienced staff are more productive which would increase the average cost-effectiveness above the current level, so the marginal must be greater than the current average.

Wait, what? The costs are also increasing, so it's definitely possible for marginal cost-effectiveness to be lower than the current average. In fact, I would strongly predict it's lower -- if there's an opportunity to get better marginal cost-effectiveness than average cost-effectiveness, that raises the question of why you don't just cut funding from some of your less effective activities and repurpose it for this opportunity.
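A toy numeric illustration (all figures invented): an individually productive but pricier hire can have marginal cost-effectiveness below the current average, in which case the average falls rather than rises.

```python
# Invented numbers for the marginal-vs-average point.
current_cost, current_impact = 1_000_000, 100      # 100 units per $1M on average
extra_cost, extra_impact = 300_000, 20             # pricier hire, still productive

average = current_impact / current_cost
marginal = extra_impact / extra_cost
new_average = (current_impact + extra_impact) / (current_cost + extra_cost)

print(f"average:  {average * 1e6:.1f} units per $1M")      # 100.0
print(f"marginal: {marginal * 1e6:.1f} units per $1M")     # 66.7 -> below the average
print(f"new avg:  {new_average * 1e6:.1f} units per $1M")  # 92.3 -> average falls
```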

Given the importance of such considerations and the difficulty of modelling them quantitatively, to holistically evaluate an organization, especially a young one, there is an argument for using a qualitative approach and “cluster thinking”, in addition to a quantitative approach and “sequential thinking.”

Please do, I think an analysis of the potential for growth (qualitative or quantitative) would significantly improve this post, since that consideration could easily swamp all others.

Comment by rohinmshah on Daniel May: "Should we make a grant to a meta-charity?" (Oxford Prioritisation Project) · 2017-03-19T04:31:55.202Z · score: 2 (2 votes) · EA · GW

Robin Shah

*Rohin Shah (don't worry about it, it's a ridiculously common mistake)

I also find Ben Todd's post on focusing on growth rather than upfront provable marginal impact to be promising and convincing

While I generally agree with the argument that you should focus on growth rather than upfront provable marginal impact, I think you should take the specific comparison to vegetarianism with many grains of salt. That's speculative enough that there are lots of similarly plausible arguments in both directions, and I don't see strong reasons to prefer any specific one.

For example: Perhaps high growth is bad because people don't have deep engagement and it waters down the EA movement. Perhaps vegetarianism is about as demanding as GWWC, but vegetarianism fits more people's values than GWWC (environmentalism, animal suffering, health vs. caring about everyone equally). Perhaps GWWC is as demanding and as broadly applicable as vegetarianism, but actually it took hundreds of years to get 1% of the developed world to be vegetarian and it will take similar amounts of effort here. And so on.

I think looking at specifically how a metacharity plans to grow, and how well their plans to grow have worked in the past, is a much better indicator than these sorts of speculative arguments. (The speculative arguments are a good way to argue against "we have reached capacity", which I think was how Ben intended them, but they're not a great argument for the meta charity.)

Comment by rohinmshah on Ethical Reaction Time: What it is and why it matters · 2017-03-14T17:14:55.114Z · score: 1 (1 votes) · EA · GW

^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Comment by rohinmshah on Ethical Reaction Time: What it is and why it matters · 2017-03-12T04:05:33.716Z · score: 5 (5 votes) · EA · GW

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate bystander effect?"

But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.

Examples?

Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences.

Examples?

Any time doing good takes place in an adversarial environment, this concept is likely to apply.

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment by rohinmshah on Anonymous EA comments · 2017-02-08T18:22:29.275Z · score: 3 (3 votes) · EA · GW

I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.

TL;DR I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)

In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.

Comment by rohinmshah on Talking about the Giving What We Can Pledge · 2017-01-05T00:02:02.976Z · score: 0 (0 votes) · EA · GW

from the outside view this looks unusually high

I would have said this a little over a year ago, but I'm less surprised by it now and I do expect it would replicate. I also expect that it becomes less effective as it scales (I expect the people who currently do it are above average at this, due to selection effects), but not by that much.

This is based on running a local EA group for a year and constantly being surprised by how much easier it is to get a pledge than I thought it would be.

Comment by rohinmshah on Why donate to 80,000 Hours · 2016-12-25T18:36:06.371Z · score: 3 (3 votes) · EA · GW

Yes. Though where it gets tricky is making the assessment at the margin.

I was wondering about this too. Is your calculation of the marginal cost per plan change just the costs for 2016 divided by the plan changes in 2016? That doesn't seem to be an assessment at the margin.

Comment by rohinmshah on Thoughts on the "Meta Trap" · 2016-12-23T19:08:17.041Z · score: 0 (0 votes) · EA · GW

Yeah I didn't have a great term for it so I just went with the term that was used previously and made sure to define what I meant by it. I think this is a little broader than movement building -- I like the suggestion of "promotion traps" above.

Comment by rohinmshah on Thoughts on the "Meta Trap" · 2016-12-23T19:02:36.927Z · score: 0 (0 votes) · EA · GW

Yeah, that does seem to capture the idea better.

Comment by rohinmshah on Thoughts on the "Meta Trap" · 2016-12-23T19:02:00.946Z · score: 0 (0 votes) · EA · GW

Yes, I agree with this. (See also my reply to Rob above.)

Comment by rohinmshah on Thoughts on the "Meta Trap" · 2016-12-23T19:00:09.786Z · score: 0 (0 votes) · EA · GW

I'm pretty unsure whether correcting for impacts from multiple orgs makes 80,000 Hours look better or worse

I'm a little unclear on what you mean here. I see three different factors:

  1. Various orgs are undercounting their impact because they don't count small changes that are part of a larger effort, even though in theory from a single player perspective, they should count the impact.

  2. In some cases, two (or more) organizations both reach out to an individual, but either one of the organizations would have been sufficient, so neither of them gets any counterfactual impact (more generally, the sum of the individually recorded impacts is less than the impact of the system as a whole)

  3. Multiple orgs have claimed the same object-level impact (eg. an additional $100,000 to AMF from a GWWC pledge) because they were all counterfactually responsible for it (more generally, the sum of the individually recorded impacts is more than the impact of the system as a whole).

Let's suppose:

X is the impact of an org from a single player perspective

Y is the impact of an org taking a system-level view (so that the sum of Y values for all orgs is equal to the impact of the system as a whole)

Point 1 doesn't change X or Y, but it does change the estimate we make of X and Y, and tends to increase it.

Point 2 can only tend to make Y > X.

Point 3 can only tend to make Y < X.

Is your claim that the combination of points 1 and 2 may outweigh point 3, or just that point 2 may outweigh point 3? I can believe the former, but the latter seems unlikely -- it doesn't seem very common for many separate orgs to all be capable of making the same change, it seems more likely to me that in such cases all of the orgs are necessary which would be an instance of point 3.
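A toy numeric version of points 2 and 3 (the $100,000 figure is invented), using X and Y as defined above:

```python
donation = 100_000  # invented figure: one donation that the system produced

# Point 2: org A and org B each would have secured the donation on their own.
# Removing either org changes nothing, so each org's single-player impact X is 0,
# even though the system as a whole produced the full donation.
x_when_each_sufficient = 0
y_when_each_sufficient = donation / 2   # one way to split system-level credit
print(x_when_each_sufficient, "<", y_when_each_sufficient)   # Y > X

# Point 3: org A and org B were both necessary for the donation.
# Removing either org loses the whole donation, so each org's X is the full
# amount, and the X values sum to double what actually happened.
x_when_both_necessary = donation
y_when_both_necessary = donation / 2
print(x_when_both_necessary, ">", y_when_both_necessary)     # Y < X
```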

We could try to measure the benefit/cost of the movement as a whole

Yeah, this is the best idea I've come up with so far, but I don't really like it much. (Do you include local groups? Do you include the time that EAs spend talking to their friends? If not, how do you determine how much of the impact to attribute to meta orgs vs. normal network effects?) It would be a good start though.

Another possibility is to cross-reference data between all meta orgs, and try to figure out whether for each person, the sum of the impacts recorded by all meta orgs is a reasonable number. Not sure how feasible this actually is (in particular, it's hard to know what a "reasonable number" would be, and coordinating among so many organizations seems quite hard).

Comment by rohinmshah on Thoughts on the "Meta Trap" · 2016-12-23T18:37:02.731Z · score: 0 (0 votes) · EA · GW

Okay, this makes more sense. I was mainly thinking of the second point -- I agree that the first and third points don't make too much of a difference. (However, some students can take on important jobs, eg. Oliver Habryka working at CEA while being a student.)

Another possibility is that you graduate faster. Instead of running a local group, you could take one extra course each semester. Aggregating this, for every two years of not running a local group, you could graduate a semester earlier.

(This would be for UC Berkeley, I think it should generalize about the same to other universities as well.)

Thoughts on the "Meta Trap"

2016-12-20T21:36:39.498Z · score: 8 (12 votes)

EA Berkeley Spring 2016 Retrospective

2016-09-11T06:37:02.183Z · score: 5 (5 votes)

EAGxBerkeley 2016 Retrospective

2016-09-11T06:27:16.316Z · score: 6 (6 votes)