Comment by rohinmshah on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-17T05:13:39.082Z · score: 3 (2 votes) · EA · GW

I don't know of any such stats, but I also don't know much about CFAR.

Comment by rohinmshah on How Europe might matter for AI governance · 2019-07-17T05:12:50.902Z · score: 4 (3 votes) · EA · GW

I was excluding governance papers, because it seems like the relevant question is "will AI development happen in Europe or elsewhere", and governance papers provide ~no evidence for or against that.

Comment by rohinmshah on How Europe might matter for AI governance · 2019-07-15T20:10:38.757Z · score: 22 (8 votes) · EA · GW

My lived experience is that most of the papers I care about (even excluding safety-related papers) come from the US. There are lots of reasons that both of these could be true, but for the sake of improving AGI-related governance, I think my lived experience is a much better measure of the thing we actually care about (which is something like "which region does good AGI-related thinking").

Comment by rohinmshah on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T17:11:25.099Z · score: 5 (4 votes) · EA · GW
From my current read, psychedelics have a stronger evidence base than rationality training programs

I agree if for CFAR you are looking at the metric of how rational their alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer. (Similarly to how we can be quite confident that 80K is having an impact, despite there not being any RCTs of 80K's "intervention".)

Comment by rohinmshah on Charity Vouchers [public policy idea] · 2019-07-11T04:51:05.090Z · score: 2 (2 votes) · EA · GW

Sorry, I'm claiming government is supposed to spend money to achieve outcomes the public wants. (That felt self-evident to me, but maybe you disagree with it?) Given that, it's weird to say that it is better to give the money to the public than to let the government spend it.

I think the claim "philanthropic spending can do more good than typical government spending" usually works because we agree with the philanthropist's values more so than "government's values". But I wouldn't expect that "public's values" would be better than "government's values", and I do expect that "government's competence" would be better than "public's competence".

Comment by rohinmshah on Charity Vouchers [public policy idea] · 2019-07-10T19:19:18.493Z · score: 2 (2 votes) · EA · GW

Not necessarily disagreeing, but I wanted to point out that this relies on a perhaps-controversial claim:

Claim: Even though government is supposed to spend money to achieve outcomes the public wants, it is better to give the money to the public so that they can achieve outcomes that they want.

Comment by rohinmshah on Please May I Have Reading Suggestions on Consistency in Ethical Frameworks · 2019-07-08T17:24:22.521Z · score: 1 (1 votes) · EA · GW

To me, the most relevant of these impossibility theorems is the Arrhenius paradox (relevant to population ethics). Unfortunately, I don't know of any good public explanation of it.

Comment by rohinmshah on Not getting carried away with reducing extinction risk? · 2019-06-04T20:37:11.612Z · score: 3 (2 votes) · EA · GW

Even with the astronomical waste argument, which is the most extreme version of this argument, at some point you have astronomical numbers of people living, and the rest of the future isn't tremendously large in comparison, and so focusing on flourishing at that point makes more sense. Of course, this would be quite far in the future.

In practice, I expect the bar comes well before that point, because if everyone is focusing on x-risks, it will become harder and harder to reduce x-risks further, while focusing on flourishing will stay just as easy.

Note that in practice many more people in the world focus on flourishing than on x-risks, so the few long-term-focused people might end up always prioritizing x-risks because everyone else picks the low-hanging fruit in flourishing. But that's different from saying "it's never important to work on animal suffering"; it's saying "someone else will fix animal suffering, and so I should do the other important thing of reducing x-risk".

Comment by rohinmshah on Not getting carried away with reducing extinction risk? · 2019-06-02T16:41:47.082Z · score: 14 (6 votes) · EA · GW

I'm pretty sure all the people you're thinking about won't make claims any stronger than "All of EA's resources should currently be focused on reducing extinction risks". Once extinction risks are sufficiently small, I would expect them to switch to focusing on flourishing.

Comment by rohinmshah on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-24T21:43:59.047Z · score: 3 (2 votes) · EA · GW
The random chance argument is harder to make if the studies have large effect sizes. If the true effect is 0, it's unlikely we'll observe a large effect by chance.

This is exactly what p-values are designed for, so you are probably better off looking at p-values rather than effect size if that's the scenario you're trying to avoid.

I suppose you could imagine that p-values are always going to be just around 0.05, and that for a real and large effect size people use a smaller sample because that's all that's necessary to get p < 0.05, but this feels less likely to me. I would expect that with a real, large effect you very quickly get p < 0.01, and researchers would in fact do that.
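To make that last expectation concrete, here is a rough simulation sketch; the effect size, sample size, and thresholds are illustrative assumptions, not numbers from the psychedelics studies. With a genuinely large true effect, a two-sample t-test at a modest sample size lands below p = 0.01 most of the time, while a true null effect almost never does.

```python
# Rough simulation of the point above: a real, large effect produces p < 0.01
# most of the time even at modest sample sizes, while a true null rarely does.
# The effect size (Cohen's d = 0.8) and n = 50 per group are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, true_d = 50, 10_000, 0.8

p_real, p_null = [], []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)   # large true effect
    p_real.append(stats.ttest_ind(treated, control).pvalue)

    null_a = rng.normal(0.0, 1.0, n_per_group)
    null_b = rng.normal(0.0, 1.0, n_per_group)       # no true effect
    p_null.append(stats.ttest_ind(null_a, null_b).pvalue)

print("P(p < 0.01 | d = 0.8):", np.mean(np.array(p_real) < 0.01))  # ~0.9
print("P(p < 0.01 | d = 0.0):", np.mean(np.array(p_null) < 0.01))  # ~0.01
```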

(I don't necessarily disagree with the rest of your comment, I'm more unsure on the other points.)

Comment by rohinmshah on [Link] Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-04-26T16:26:27.734Z · score: 4 (3 votes) · EA · GW

See also my summary and Richard Ngo's comments.

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-14T17:25:53.506Z · score: 1 (1 votes) · EA · GW

Yeah, I've been doing this occasionally (though that started recently).

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-11T02:00:51.632Z · score: 3 (2 votes) · EA · GW
From my present vantage, the AI alignment newsletter is becoming a pretty prominent clearinghouse for academic AI alignment research updates. (I wouldn't be surprised if it were the primary source of such for a sizable portion of newsletter subscribers.)
To the extent that's true, the amplification effects seem possibly strong.

I agree that's true and that the amplification effect for AI safety researchers is strong; the amplification effect is much weaker for any other category. My current model is that info hazards are most worrisome when they spread outside the AI safety community.

On confidentiality, the downsides of the newsletter failing to preserve confidentiality seem sufficiently small that I'm not worried (if you ignore info hazards). Failures of confidentiality seem bad in that they harm your reputation and make it less likely that people are willing to talk to you -- it's similar to the reason you wouldn't break a promise even if superficially the consequences of the thing you're doing seem slightly negative. But in the case of the newsletter, we would amplify someone else's failure to preserve confidentiality, which shouldn't reflect all that poorly on us. (Obviously if we knew that the information was supposed to be confidential we wouldn't publish it.)

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-11T01:50:41.511Z · score: 3 (2 votes) · EA · GW
This was in response to "the growing amount of AI safety research."

Yeah, I think I phrased that question poorly. The question is both "should all of it be summarized" and "if yes, how can that be done".

Presumably as there is more research, it takes more time to read & assess the forthcoming literature to figure out what's important / worth including in the newsletter.

I feel relatively capable of that -- I think I can figure out for any given reading whether I want to include it in ~5 minutes or so with relatively high accuracy. It's actually reading and summarizing it that takes time.

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T22:56:11.250Z · score: 3 (2 votes) · EA · GW
Interesting to think about what governance the newsletter should have in place re: info hazards, confidentiality, etc.

Currently we only write about public documents, so I don't think these concerns arise. I suppose you could imagine that someone writes about something they shouldn't have and we amplify it, but I suspect this is a rare case and one that should be up to my discretion.

What did you guys do for GPT-2?

Not sure what specifically you're asking about here. You can see the relevant newsletter here.

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T22:50:27.888Z · score: 1 (1 votes) · EA · GW
My intuition is that this would be a good time to formalize the structure of the newsletter somewhat, especially given that there are multiple contributors & you are starting to function more as an editor.

Certainly more systems are being put into place, which is kind of like "formalizing the structure". Creating an organization feels like a high fixed cost for not much benefit -- what do you think the main benefits would be? (Maybe this is combined with paying content writers and editors, in which case an organization might make more sense?)

Plausibly it's fine to keep it as an informal research product, but I'd guess that "AI alignment newsletter editor" could basically be (or soon become) a full-time job.

If I were to make this my full-time job, the newsletter would approximately double in length (assuming I found enough content to cover), and I'd expect that people wouldn't read most of it. (People already don't read all of it, I'm pretty sure.) What do you think would be the value of more time put into the newsletter?

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T22:44:25.779Z · score: 1 (1 votes) · EA · GW
My first guess is that there's significant value in someone maintaining an open, exhaustive database of AIS research.

Yeah, I agree. But there's also significant value in doing more AIS research, and I suspect that on the current margin it's better for a full-time researcher (such as myself) to do more AIS research than to write summaries of everything.

Note that I do intend to keep adding all of the links to the database; it's the summaries that won't keep up.

It is plausible to me that an org with a safety team (e.g. DeepMind/OpenAI) is already doing this in-house, or planning to do so.

I'm 95% confident that no one is already doing this, and if they were seriously planning to do so I'd expect they would check in with me first. (I do know multiple people at all of these orgs.)

More broadly, these labs might have some good systems in place for maintaining databases of new research in areas with a much higher volume than AIS, so could potentially share some best-practices.

You know, that would make sense as a thing to exist, but I suspect it does not. Regardless, that's a good idea; I should make sure to check.

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T07:04:28.487Z · score: 1 (1 votes) · EA · GW

Comment thread for the question: What is the value of the newsletter for you?

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T07:04:11.529Z · score: 1 (1 votes) · EA · GW

Comment thread for the question: What is the value of the newsletter for other people?

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T07:03:55.320Z · score: 1 (1 votes) · EA · GW

Comment thread for the question: How should I deal with the growing amount of AI safety research?

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T07:03:37.958Z · score: 1 (1 votes) · EA · GW

Comment thread for the question: What can I do to get more feedback on the newsletter on an ongoing basis (rather than having to survey people at fixed times)?

Comment by rohinmshah on Alignment Newsletter One Year Retrospective · 2019-04-10T07:03:21.129Z · score: 1 (1 votes) · EA · GW

Comment thread for the question: Am I underestimating the risk of causing information cascades? Regardless, how can I mitigate this risk?

Alignment Newsletter One Year Retrospective

2019-04-10T07:00:34.021Z · score: 61 (23 votes)
Comment by rohinmshah on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T16:15:42.475Z · score: 9 (3 votes) · EA · GW

Is this different from having more people on a single granting body?

Possibly with more people on a single granting body, everyone talks to each other more and so can all get stuck thinking the same thing, whereas they would have come up with more / different considerations had they been separate. But this would suggest that granting bodies would benefit from splitting into halves, going over grants individually, and then merging at the end. Would you endorse that suggestion?

Comment by rohinmshah on On AI and Compute · 2019-04-04T21:26:49.232Z · score: 1 (1 votes) · EA · GW

Mostly agree with all of this; some nitpicks:

My understanding (and I think everyone else's) of AI capabilities is largely shaped by how impressive the results of major papers intuitively seem.

I claim that this is not how I think about AI capabilities, and it is not how many AI researchers think about AI capabilities. For a particularly extreme example, the Go-Explore paper out of Uber had a nominally very impressive result on Montezuma's Revenge, but much of the AI community didn't find it compelling because of the assumptions that their algorithm used.

I'm not sure I fully understand how the metric would work. For the Atari example, it seems clear to me that we could easily reach it without making a generalizable AI system, or vice versa.

Tbc, I definitely did not intend for that to be an actual metric.

But let's say that we could come up with a relevant metric. Then I'd agree with Garfinkel, as long as people in the community had known roughly the current state of AI in relation to it and the rate of advance toward it before the release of "AI and Compute".

I would say that I have a set of intuitions and impressions that function as a very weak prediction of what AI will look like in the future, along the lines of that sort of metric. I trust timelines based on extrapolation of progress using these intuitions more than timelines based solely on compute. To the extent that you hear timeline estimates from people like me, who do this sort of "progress extrapolation" but did not know how compute has been scaling, you would want to lengthen their timeline estimates. I'm not sure how timeline predictions break down on this axis.

Comment by rohinmshah on On AI and Compute · 2019-04-04T15:58:43.167Z · score: 9 (4 votes) · EA · GW
DeepMind certainly seems to be saying that AlphaZero is better at searching a more limited set of promising moves than Stockfish, a traditional chess engine (unfortunately they don’t compare it to earlier versions of AlphaGo on this metric).

Only at test time. AlphaZero has much more experience gained from its training phase. (Stockfish has no training phase, though you could think of all of the human domain knowledge encoded in it as a form of "training".)

AlphaZero went from a bundle of blank learning algorithms to stronger than the best human chess players in history...in less than two hours.

Humans are extremely poorly optimized for playing chess.

I don’t agree with Garfinkel that OpenAI’s analysis should make us more pessimistic about human-level AI timelines. While it makes sense to revise our estimate of AI algorithms downward, it doesn’t follow that we should do the same for our estimate of overall progress in AI. By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years),[7] so there’s a clear case for future advances being more impressive than current ones as we approach the human level.

Sounds like you are using a model where (our understanding of) current capabilities and rates of progress of AI are not very relevant for determining future capabilities, because we don't know the absolute quantitative capability corresponding to "human-level AI". Instead, you model it primarily on the absolute amount of compute needed.

Suppose you did know the absolute capability corresponding to "human-level AI", e.g. you can say something like "once we are able to solve Atari benchmarks using only 10k samples from the environment, we will have human-level AI", and you found that metric much more persuasive than the compute used by a human brain. Would you then agree with Garfinkel's point?

Comment by rohinmshah on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-04T15:34:42.666Z · score: 1 (1 votes) · EA · GW
In my understanding, coordination/collusion can be limited by keeping donations anonymous.

It's not hard for an individual to prove that they donated by other means, e.g. screenshots and bank statements.

(See the first two paragraphs on page 16 in the paper for an example.)

Right after that, the authors say:

There is a broader point here. If perfect harmonization of interests is possible, Capitalism leads to optimal outcomes. LR is intended to overcome such lack of harmonization and falls prey to manipulation when it wrongly assumes harmonization is difficult

With donations it is particularly easy to harmonize interests: if I'm planning to allocate 2 votes to MIRI and you're planning to allocate 2 votes to AMF, we can instead each allocate 1 vote to MIRI and 1 to AMF, and we both benefit. Yes, we have to build trust that neither of us will defect by putting both of our votes toward our preferred charity, but this seems doable in practice: even in the hardest case of vote trading (where there are laws attempting to enforce anonymity and the inability to prove your vote) there seems to have been some success.
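Here is a minimal sketch of the arithmetic behind that coordination gain, assuming the Liberal Radicalism / quadratic funding rule from the Buterin, Hitzig & Weyl paper, under which a project's total funding is (Σ_i √c_i)²; the 2-unit "votes" are made-up numbers for illustration, not figures from any actual fundraiser.

```python
# Sketch of the coordination effect under the LR / quadratic funding rule:
# total funding for a project = (sum_i sqrt(c_i))^2. Contribution amounts
# below are illustrative assumptions.
from math import sqrt

def lr_funding(contributions):
    """Total funding a project receives under the quadratic funding rule."""
    return sum(sqrt(c) for c in contributions) ** 2

# Without coordination: I put my 2 votes on MIRI, you put your 2 votes on AMF.
uncoordinated = lr_funding([2]) + lr_funding([2])      # ~2 + ~2 = ~4

# With coordination: we each split our votes, 1 to MIRI and 1 to AMF.
coordinated = lr_funding([1, 1]) + lr_funding([1, 1])  # 4 + 4 = 8

print(round(uncoordinated, 2), round(coordinated, 2))  # 4.0 8.0
```

Holding the total amount given fixed, splitting the contributions doubles the funding the two charities receive between them, which is the gain from coordination that the theorem's no-coordination assumption rules out.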

Comment by rohinmshah on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T15:06:56.827Z · score: 9 (4 votes) · EA · GW

Sorry, I meant "collusion" in the sense that it is used in the game theory literature, where it's basically equivalent to "coordination in a way not modeled by the game theory", and doesn't carry the illegal/deceitful connotation it does in English. See e.g. here, which is explicitly talking about this problem for Glen Weyl's proposal.

The overall point is, if donors can coordinate, as they obviously can in the real world, then the optimal provisioning of goods theorem no longer holds. The example with MIRI showcased this effect. I'm not saying that anyone did anything wrong in that example.

Comment by rohinmshah on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T00:02:42.696Z · score: -4 (2 votes) · EA · GW

Yes.

Comment by rohinmshah on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-02T15:52:44.348Z · score: 1 (1 votes) · EA · GW

The main issue with the mechanism seems to be collusion between donors. As Aaron mentioned, MIRI took part in such a fundraiser. I claim that it was so successful for them precisely because MIRI supporters were able to coordinate well relative to the supporters of the other charities -- there were a bunch of posts about how supporting this fundraiser was effectively a 50x donation multiplier or something like that.

Comment by rohinmshah on Unsolicited Career Advice · 2019-03-04T18:32:32.669Z · score: 20 (9 votes) · EA · GW

I ran the EA Berkeley group and later the UWashington group, and even this estimate seems high to me (but it would be within my 90% confidence bound, whereas 2000 is definitely not in it).

Comment by rohinmshah on Why do you reject negative utilitarianism? · 2019-02-12T18:31:58.991Z · score: 14 (9 votes) · EA · GW
Therefore, it is a straw man argument that NUs don’t value life or positive states, because NUs value them instrumentally, which may translate into substantial practical efforts to protect them (compared even with someone who claims to be terminally motivated by them).

By my understanding, a universe with no conscious experiences is the best possible universe by ANU (though there are other equally good universes as well). Would you agree with that?

If so, that's a strong reason for me to reject it. I want my ethical theory to say that a universe with positive conscious experiences is strictly better than one with no conscious experiences.

Comment by rohinmshah on What are some lists of open questions in effective altruism? · 2019-02-06T20:20:16.172Z · score: 3 (3 votes) · EA · GW

I was going to post a few lists that hadn't already been posted, but this one had all of them already :)

Comment by rohinmshah on Disentangling arguments for the importance of AI safety · 2019-01-23T17:42:55.145Z · score: 2 (2 votes) · EA · GW

I think 4, 5 and 6 are all valid even if you take the CAIS view. Could you explain how you think those depend on the AGI being an independent agent?

Plausibly 2 and 3 also apply to CAIS, though those are more ambiguous.

Comment by rohinmshah on Altruistic Motivations · 2019-01-05T16:06:05.061Z · score: 2 (2 votes) · EA · GW

Actually, my summary of that post initially dropped the obligation frame because of these reasons :P (Not intentionally, since I try to have objective summaries, but I basically ignored the obligation point while reading and so forgot to put it in the summary.)

I do think the opportunity frame is much more reasonable in that setting, because "human safety problems" are something that you might have been resigned to in the past, and AI design is a surprising option that might let us fix them, so it really does sound like good news. On the other hand, the surprising part about effective altruism is "people are dying for such preventable reasons that we can stop it for thousands of dollars", which is bad news that it's really hard to be excited by.

Comment by rohinmshah on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-04T20:40:01.569Z · score: 8 (4 votes) · EA · GW

Not sure. A few hypotheses:

  • Arxiv sanity has become better at predicting what I care about as I've given it more data. I don't think this is the whole story because the absolute number of papers I see on Twitter has gone down.
  • I did create my Twitter account primarily for academic stuff, but it's possible that over time Twitter has learned to show me non-academic stuff that is more attention-grabbing or controversial, despite me trying not to click on those sorts of things.
  • Academics are promoting their papers less on Twitter.
Comment by rohinmshah on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-31T14:33:22.493Z · score: 7 (3 votes) · EA · GW

Not the OP, but the Alignment Newsletter (which I write) should help for technical AI safety. I source from newsletters, blogs, Arxiv Sanity and Twitter (though Twitter is becoming more useless over time). I'd imagine you could do the same for other fields as well.

Comment by rohinmshah on Critique of Superintelligence Part 3 · 2018-12-24T06:54:13.111Z · score: 2 (2 votes) · EA · GW
these sorts of techniques have been applied for decades and have never achieved anything close to human level AI

We also didn't have the vast amounts of compute that we have today.

other parts of Bostrom's argument rely upon much broader conceptions of intelligence that would entail the AI having common sense.

My claim is that you can write a program that "knows" about common sense, but still chooses actions by maximizing a function, in which case it's going to interpret that function literally and not through the lens of common sense. There is currently no way that the "choose actions" part gets routed through the "common sense" part the way it does in humans. I definitely agree that we should try to build an AI system which does interpret goals using common sense -- but we don't know how to do that yet, and that is one of the approaches that AI safety is considering.

I agree with the prediction that AGI systems will interpret goals with common sense, but that's because I expect that we humans will put in the work to figure out how to build such systems, not because any AGI system that has the ability to use common sense will necessarily apply that ability to interpreting its goals.

If we found out today that someone created our world + evolution in order to create organisms that maximize reproductive fitness, I don't think we'd start interpreting our sex drive using "common sense" and stop using birth control so that we more effectively achieved the original goal we were meant to perform.

Comment by rohinmshah on Critique of Superintelligence Part 3 · 2018-12-15T09:27:59.243Z · score: 2 (2 votes) · EA · GW

I'm not really arguing for Bostrom's position here, but I think there is a sensible interpretation of it.

Goals/motivation = whatever process the AI uses to select actions.

There is an implicit assumption that this process will be simple and of the form "maximize this function over here". I don't like this assumption as an assumption about any superintelligent AI system, but it's certainly true that our current methods of building AI systems (specifically reinforcement learning) are trying to do this, so at minimum you need to make sure that we don't build AI using reinforcement learning, or that we get its reward function right, or that we change how reinforcement learning is done somehow.

If you are literally just taking actions that maximize a particular function, you aren't going to interpret them using common sense, even if you have the ability to use common sense. Again, I think we could build AI systems that used common sense to interpret human goals -- but this is not what current systems do, so there's some work to be done here.

The arguments you present here are broadly similar to ones that make me optimistic that AI will be good for humanity, but there is work to be done to get there from where we are today.

Comment by rohinmshah on Critique of Superintelligence Part 2 · 2018-12-15T09:16:43.513Z · score: 6 (3 votes) · EA · GW
my impression was, that progress was quite jumpy at times, instead of slow and steady.

https://sideways-view.com/2018/02/24/takeoff-speeds/

https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/

Comment by rohinmshah on Critique of Superintelligence Part 2 · 2018-12-15T09:15:14.936Z · score: 2 (2 votes) · EA · GW
So let’s say you have an Artificial Intelligence that thinks enormously faster than a human.

But why didn't you have an AI that thinks only somewhat faster than a human before that?

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-19T17:42:26.850Z · score: 1 (1 votes) · EA · GW

My math-intuition says "that's still not well-defined, such reasons may not exist".

To which you might say "Well, there's some probability they exist, and if they do exist, they trump everything else, so we should act as though they exist."

My intuition says "But the rule of letting things that could exist be the dominant consideration seems really bad! I could invent all sorts of categories of things that could exist, that would trump everything I've considered so far. They'd all have some small probability of existing, and I could direct my actions any which way in this manner!" (This is what I was getting at with the "meta-oughtness" rule I was talking about earlier.)

To which you might say "But moral reasons aren't some hypothesis I pulled out of the sky, they are commonly discussed and have been around in human discourse for millennia. I agree that we shouldn't just invent new categories and put stock into them, but moral reasons hardly seem like a new category."

And my response would be "I think moral reasons of the type you are talking about mostly came from the human tendency to anthropomorphize, combined with the fact that we needed some way to get humans to coordinate. Humans weren't likely to just listen to rules that some other human made up, so the rules had to come from some external source. And in order to get good coordination, the rules needed to be followed, and so they had to have the property that they trumped any prudential reasons. This led us to develop the concept of rules that come from some external source and trump everything else, giving us our concept of moral reasons today. Given that our concept of "moral reasons" probably arose from this sort of process, I don't think that "moral reasons" is a particularly likely thing to actually exist, and it seems wrong to base your actions primarily on moral reason. Also, as a corollary, even if there do exist reasons that trump all other reasons, I'm more likely to reject the intuition that it must come from some external source independent of humans, since I think that intuition was created by this non-truth-seeking process I just described."

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-15T00:19:41.424Z · score: 1 (1 votes) · EA · GW

Okay, cool, I think I at least understand your position now. Not sure how to make progress though. I guess I'll just try to clarify how I respond to imagining that I held the position you do.

From my perspective, the phrase "moral reason" has both the connotation that it is external to humans and that it trumps all other reasons, and that's why the intuition is so strong. But if it is decomposed into those two properties, it no longer seems (to me) that they must go together. So from my perspective, when I imagine how I would justify the position you take, it seems to be a consequence of how we use language.

What I have most moral reason to do is what there is most reason to do impartially considered (i.e. from the point of view of the universe)

My intuitive response is that that is an incomplete definition and we would also need to say what impartial reasons are, otherwise I don't know how to identify the impartial reasons.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-13T21:47:07.547Z · score: 2 (2 votes) · EA · GW
4. I don't think I understand the set up of this question - it doesn't seem to make a coherent sentence to replace X with a number in the way you have written it.

I did mean for you to replace X with a phrase, not a number.

If my intuition here is right then moral reasons must always trump prudential reasons. Note I don't have anything more to offer than this intuition, sorry if I made it seem like I did!

Your intuition involves the complex phrase "moral reason" for which I could imagine multiple different interpretations. I'm trying to figure out which interpretation is correct.

Here are some different properties that "moral reason" could have:

1. It is independent of human desires and goals.

2. It trumps all other reasons for action.

3. It is an empirical fact about either the universe or math that can be derived by observation of the universe and pure reasoning.

My main claim is that properties 1 and 2 need not be correlated, whereas you seem to have the intuition that they are, and I'm pushing on that.

A secondary claim is that if it does not satisfy property 3, then you can never infer it and so you might as well ignore it, but "irreducibly normative" sounds to me like it does not satisfy property 3.

Here are some models of how you might be thinking about moral reasons:

a) Moral reasons are defined as the reasons that satisfy property 1. If I think about those reasons, it seems to me that they also satisfy property 2.

b) Moral reasons are defined as the reasons that satisfy property 2. If I think about those reasons, it seems to me that they also satisfy property 1.

c) Moral reasons are defined as the reasons that satisfy both property 1 and property 2.

My response to a) and b) are of the form "That inference seems wrong to me and I want to delve further."

My response to c) is "Define prudential reasons as the reasons that satisfy property 2 and not-property 1, then prudential reasons and moral reasons both trump all other reasons for action, which seems silly/strange."

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-13T21:31:28.035Z · score: 2 (2 votes) · EA · GW

Not if the best thing to do is actually what the supreme being said, and not what you think is right, which is (a natural consequence of) the argument in this post.

(Tbc, I do not agree with the argument in the post.)

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-12T02:50:13.893Z · score: 2 (2 votes) · EA · GW

There seems to be something that makes you think that moral reasons should trump prudential reasons. The overall thing I'm trying to do is narrow down on what that is. In most of my comments, I've thought I've identified it, and so I argued against it, but it seems I'm constantly wrong about that. So let me try and explicitly figure it out:

How much would you agree with each of these statements:

  • If there is a conflict between moral reasons and prudential reasons, you ought to do what the moral reasons say.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions one ought to take, then you ought to do what that process prescribes, regardless of what you desire.
  • If it is an empirical fact about the universe that, independent of humans, there is a process for determining what actions to take to maximize utility, then you ought to do what that process prescribes, regardless of what you desire.
  • If there is an external-to-you entity satisfying property X that prescribes actions you should take, then you ought to do what it says, regardless of what you desire. (For what value of X would you agree with this statement?)
I have a very low credence that your proposed meta-normative rule would be true?

I also have a very low credence of that meta-normative rule. I meant to contrast it to the meta-normative rule "binding oughtness trumps regular oughtness", which I interpreted as "moral reasons trump prudential reasons", but it seems I misunderstood what you meant there, since you mean "binding oughtness" to apply both to moral and prudential reasons, so ignore that argument.

I agree, my view stems from a bedrock of intuition, that just as the descriptive fact that 'my table has four legs' won't create normative reasons for action, neither will the descriptive fact that 'Harry desires chocolate ice-cream' create them either.

This makes me mildly worried that you aren't able to imagine the worldview where prudential reasons exist. Though I have to admit I'm confused why under this view there are any normative reasons for action -- surely all such reasons depend on descriptive facts? Even with religions, you are basing your normative reasons for action upon descriptive facts about the religion.

(Btw, random note, I suspect that Ben Pace above and I have very similar views, so you can probably take your understanding of his view and apply it to mine.)

Comment by rohinmshah on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-12T02:25:39.871Z · score: 1 (1 votes) · EA · GW

I see, that makes sense, and I agree with it.

Comment by rohinmshah on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-10T17:42:40.563Z · score: 1 (1 votes) · EA · GW

I and most other people (I'm pretty sure) wouldn't chase the highest probability of infinite utility, since most of those scenarios are also highly implausible and feel very similar to Pascal's mugging.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T17:36:38.626Z · score: 2 (2 votes) · EA · GW
However these just wouldn't constitute normative reason for action and that's just what you need for an action to be choice-worthy.
[...]
As I don't think that mere desires create reasons for action I think we can ignore them unless they are actually prudential reasons.

I don't know how to argue against this; you seem to be taking it as axiomatic. The one thing I can say is that it seems obvious to me that your desires and goals can make some actions better to choose than others. It only becomes non-obvious if you expect there to be some external-to-you force telling you how to choose actions, but I see no reason to assume that. It really is fine if your actions aren't guided by some overarching rule granted authority by virtue of being morality.

But I suspect this isn't going to convince you. Can we simply assume that prudential reasons exist and figure out the implications?

The distinction between normative/prudential is one developed in the relevant literature, see this abstract for a paper by Roger Crisp to get a sense for it.

Thanks, I think I've got it now. (Also it seems to be in your appendix, not sure how I missed that before.)

The issue is that we're trying to work out how to act with uncertainty about what sort of world we're in?

I know, and I think in the very next paragraph I try to capture your view, and I'm fairly confident I got it right based on your comment.

However, it seems jarring to think that a person who does what there is most moral reason to do could have failed to do what there was most, all things considered, reason for them to do.

This seems tautological when you define morality as "binding oughtness" and compare against regular oughtness (which presumably applies to prudential reasons). But why stop there? Why not go to metamorality, or "binding meta-oughtness" that trumps "binding oughtness"? For example, "when faced with uncertainty over ought statements, choose the one that most aligns with prudential reasons".

It is again tautologically true that a person who does what there is most metamoral reason to do could not have failed to do what there was most all things considered reason for them to do. It doesn't sound as compelling, but I claim that is because we don't have metamorality as an intuitive concept, whereas we do have morality as an intuitive concept.

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T01:08:34.350Z · score: 1 (1 votes) · EA · GW

With that terminology, I think your argument is that we should ignore worlds without a binding oughtness. But in worlds without a binding oughtness, you still have your own desires and goals to guide your actions. This might be what you call 'prudential' reasons, but I don't really understand that term -- I thought it was synonymous with 'instrumental' reasons, but taking actions for your own desires and goals is certainly not 'instrumental'.

So it seems to me that in worlds with a binding oughtness that you know about, you should take actions according to that binding oughtness, and otherwise you should take actions according to your own desires and goals.

You could argue that binding oughtness always trumps desires and goals, so that your action should always follow the binding oughtness that is most likely, and you can put no weight on desires and goals. But I would want to know why that's true.

Like, I could also argue that actually, you should follow the binding meta-oughtness rule, which tells you how to derive ought statements from is statements, and that should always trump any particular oughtness rule, so you should ignore all of those and follow the most likely meta-oughtness rule. But this seems pretty fallacious. What's the difference?

Comment by rohinmshah on Even non-theists should act as if theism is true · 2018-11-10T00:46:25.742Z · score: 3 (2 votes) · EA · GW

Conditional on theism being true in the sense of this post, it seems especially likely that one of the particular religions that exist currently is most likely to be (approximately) true. If nothing else, you could figure out which religion is true, and then act based on what that religion asks for.

Thoughts on the "Meta Trap"

2016-12-20T21:36:39.498Z · score: 8 (12 votes)

EA Berkeley Spring 2016 Retrospective

2016-09-11T06:37:02.183Z · score: 6 (6 votes)

EAGxBerkeley 2016 Retrospective

2016-09-11T06:27:16.316Z · score: 10 (6 votes)