Comments

Comment by ESRogs on The Center for Election Science Appeal for 2020 · 2021-01-14T07:26:06.781Z · EA · GW

Which city is next?

Comment by ESRogs on Uncorrelated Investments for Altruists · 2021-01-11T18:10:17.655Z · EA · GW

Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance

I wonder if it makes sense to rebalance more frequently when volatility (or trading volume) is high.
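
One way to make that concrete, as a rough sketch of my own (the lookback window, baseline volatility, and interval lengths below are all made-up parameters, not anything from the post): scale the rebalancing interval down whenever recent realized volatility runs above some baseline.

    import numpy as np

    def rebalance_interval(daily_returns, baseline_vol=0.01, base_days=21, min_days=1):
        """Sketch: shorten the rebalancing interval when recent realized
        volatility runs above a baseline (all parameters are illustrative)."""
        recent_vol = np.std(daily_returns[-21:])      # ~1 month lookback
        scale = baseline_vol / max(recent_vol, 1e-9)  # high vol => smaller scale
        return int(max(min_days, round(base_days * scale)))

    # Toy usage: calm markets -> roughly monthly; 3x baseline vol -> roughly weekly
    rng = np.random.default_rng(0)
    print(rebalance_interval(rng.normal(0, 0.01, 100)),
          rebalance_interval(rng.normal(0, 0.03, 100)))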

Comment by ESRogs on Uncorrelated Investments for Altruists · 2021-01-11T17:36:27.365Z · EA · GW

The AlphaArchitect funds are more expensive than Vanguard funds, but they're just as cheap after adjusting for factor exposure.

Do you happen to have the numbers available that you used for this calculation? Would be curious to see how you're doing the adjustment for factor exposure.
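
(For what it's worth, here is the kind of calculation I could imagine, purely as a hypothetical sketch and not a guess at your actual method: regress each fund's excess returns on the value factor and compare expense ratio per unit of factor loading. The returns, expense ratios, and betas below are invented.)

    import numpy as np
    import statsmodels.api as sm

    # Invented daily excess returns standing in for a value factor and a fund
    rng = np.random.default_rng(0)
    value_factor = rng.normal(0, 0.005, 500)
    fund_excess = 0.8 * value_factor + rng.normal(0, 0.003, 500)

    # Estimate the fund's loading (beta) on the value factor
    beta = sm.OLS(fund_excess, sm.add_constant(value_factor)).fit().params[1]

    # Expense ratio per unit of factor exposure (expense ratios are placeholders)
    concentrated_fund = {"expense": 0.0049, "beta": beta}  # high-loading active fund
    broad_index_fund = {"expense": 0.0007, "beta": 0.15}   # low-loading index fund
    for name, f in (("concentrated", concentrated_fund), ("broad index", broad_index_fund)):
        print(name, round(f["expense"] / f["beta"], 4))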

Comment by ESRogs on Uncorrelated Investments for Altruists · 2021-01-11T17:34:23.295Z · EA · GW

Looking at historical performance of those Alpha Architects funds (QVAL, etc), it looks like they all had big dips in March 2020 of around 25%, at the same time as the rest of the market.

And I've heard it claimed that assets in general tend to be more correlated during drawdowns.

If that's so, it seems to mitigate to some extent the value of holding uncorrelated assets, particularly in a portfolio with leverage, because it means your risk of margin call is not as low as you might otherwise think.

Have you looked into this issue of correlations during drawdowns, and do you think it changes the picture?
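
In case it's useful, here's roughly how I'd imagine checking that with return data, as a sketch only (the threshold, the synthetic data, and the column names are all made up; I haven't run this against the actual funds):

    import numpy as np
    import pandas as pd

    def conditional_correlations(returns: pd.DataFrame, market_col: str, threshold=-0.02):
        """Compare pairwise correlations on days when the market falls sharply
        vs. all other days (the -2% threshold is arbitrary)."""
        down_days = returns[returns[market_col] <= threshold]
        other_days = returns[returns[market_col] > threshold]
        return down_days.corr(), other_days.corr()

    # Usage with synthetic data standing in for a market index and two factor funds
    rng = np.random.default_rng(1)
    market = rng.normal(0, 0.01, 2000)
    df = pd.DataFrame({
        "MKT": market,
        "FUND_A": 0.9 * market + rng.normal(0, 0.005, 2000),
        "FUND_B": 0.7 * market + rng.normal(0, 0.007, 2000),
    })
    down_corr, other_corr = conditional_correlations(df, "MKT")
    print(down_corr.loc["FUND_A", "FUND_B"], other_corr.loc["FUND_A", "FUND_B"])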

Comment by ESRogs on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-16T06:07:21.196Z · EA · GW

Ah, good point! This was not already clear to me. (Though I do remember thinking about these things a bit back when Piketty's book came out.)

Comment by ESRogs on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-14T18:36:32.348Z · EA · GW

I just feel like I don't know how to think about this because I understand too little finance and economics

Okay, sounds like we're pretty much in the same boat here. If anyone else is able to chime in and enlighten us, please do so!

Comment by ESRogs on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-13T20:44:23.651Z · EA · GW

My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.

Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns. Does that tell us anything about the difference between equity returns and GDP growth?

A priori, would you expect both equities and treasuries to have returns that match GDP growth?

Comment by ESRogs on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-13T06:12:15.422Z · EA · GW

But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.

Shouldn't investing into the stock market be considered a source of average returns, by default? In the long run, the stock market grows at the same rate as GDP.

If you think you have some edge, that might be a reason to pick particular stocks (as I sometimes do) and expect returns above GDP growth.

But generically I don't think the stock market should be considered a source of above-average returns. Am I missing something?

Comment by ESRogs on Thoughts on whether we're living at the most influential time in history · 2020-11-05T23:34:35.511Z · EA · GW

You could make an argument that a certain kind of influence strictly decreases with time. So the hinge was at the Big Bang.

But, there (probably) weren't any agents around to control anything then, so maybe you say there was zero influence available at that time. Everything that happened was just being determined by low level forces and fields and particles (and no collections of those could be reasonably described as conscious agents).

Today, much of what happens (on Earth) is determined by conscious agents, so in some sense the total amount of extant influence has grown.

Let's maybe call the first kind of influence time-priority, and the second agency. So, since the Big Bang, the level of time-priority influence available in the universe has gone way down, but the level of aggregate agency in the universe has gone way up.

On a super simple model that just takes these two into account, you might multiply them together to get the total influence available at a certain time (and then divide by the number of people alive at that time to get the average person's influence). This number will peak somewhere in the middle (assuming it's zero both at the Big Bang and at the Heat Death).
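
To illustrate that super simple model (all the curves and numbers below are made up, just to show the shape):

    import numpy as np

    # Toy model: total influence(t) = time_priority(t) * agency(t);
    # per-capita influence divides by population(t). All curves are placeholders.
    t = np.linspace(0, 1, 1001)                 # 0 = Big Bang, 1 = heat death (rescaled)
    time_priority = 1 - t                       # strictly decreasing with time
    agency = np.exp(-((t - 0.6) ** 2) / 0.02)   # ~0 early, rises, falls back to ~0
    population = 1 + 100 * agency               # crude: more agency ~ more agents

    total_influence = time_priority * agency
    per_capita = total_influence / population

    print("total influence peaks at t ~", round(t[np.argmax(total_influence)], 2))
    print("per-capita influence peaks at t ~", round(t[np.argmax(per_capita)], 2))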

That maybe doesn't tell you much, but then you could start taking into account some other considerations, like how x-risk could result in a permanent drop of agency down to zero. Or how perhaps there's an upper limit on how much agency is potentially available in the universe.

In any case, it seems like the direction of causality should be a pretty important part of the analysis (even if it points in the opposite direction of another factor, like increasing agency), either as part of the prior or as one of the first things you update on.

Comment by ESRogs on Thoughts on whether we're living at the most influential time in history · 2020-11-05T23:13:51.782Z · EA · GW

Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can't influence us, is pretty significant, and should be factored into the argument somewhere.)

I wouldn't call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.

Comment by ESRogs on Thoughts on whether we're living at the most influential time in history · 2020-11-05T18:56:25.868Z · EA · GW

And the current increase in hinginess seems unsustainable, in that the increase in hinginess we’ve seen so far leads to x-risk probabilities that lead to drastic reduction of the value of worlds that last for eg a millennium at current hinginess levels.

Didn't quite follow this part. Are you saying that if hinginess keeps going up (or stays at the current, high level), that implies a high level of x-risk as well, which means that, with enough time at that hinginess (and therefore x-risk) level, we'll wipe ourselves out; and therefore that we can't have sustained, increasing / high hinginess for a long time?

(That's where I would have guessed you were going with that argument, but I got confused by the part about "drastic reduction of the value of worlds ..." since the x-risk component seems like a reason the high-hinginess can't last a long time, rather than an argument that it would last but coincide with a sad / low-value scenario.)
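
(To make sure I'm parsing the quantitative side of that right, here's the sort of toy calculation I have in mind, with made-up numbers: if sustained high hinginess implies sustained high per-century x-risk r, then the chance of actually lasting a millennium is (1 - r)^10, which falls off quickly.)

    # Toy illustration with made-up numbers: per-century extinction risk r
    # implies P(survive a millennium) = (1 - r) ** 10.
    for r in (0.05, 0.10, 0.20):
        print(f"per-century risk {r:.0%}: P(survive 1000 years) = {(1 - r) ** 10:.1%}")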

Comment by ESRogs on Are we living at the most influential time in history? · 2020-11-05T17:52:02.632Z · EA · GW

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.

Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)
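
For concreteness, the standard forms I have in mind (nothing here is specific to the post; it's just the textbook statement of each prior):

    % Uniform prior over N candidate centuries (needs both a start and an end, via N):
    P(\text{the hinge is century } i) = \frac{1}{N}

    % Laplace's rule of succession (needs only the start point that determines n):
    P(\text{hinge in century } n+1 \mid \text{no hinge in centuries } 1, \ldots, n)
      = \frac{0 + 1}{n + 2} = \frac{1}{n + 2}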

Comment by ESRogs on Who should / is going to win 2020 FLI award 2020? · 2020-10-14T07:54:41.925Z · EA · GW

The Nobel Prize comes with a million dollars (9,000,000 SEK). 50k doesn't seem like that much, in comparison.

Comment by ESRogs on EA reading list: miscellaneous · 2020-08-05T20:55:00.200Z · EA · GW

Another Karnofsky series that I thought was important (and perhaps doesn't fit anywhere else) is his posts on The Straw Ratio.

Comment by ESRogs on EA reading list: EA motivations and psychology · 2020-08-05T20:53:31.562Z · EA · GW

Also: Charity: The video game that’s real, by Holden Karnofsky

Comment by ESRogs on EA reading list: EA motivations and psychology · 2020-08-05T20:48:14.333Z · EA · GW

FYI Purchase fuzzies and utilons separately is showing up twice in the list.

Comment by ESRogs on What's the big deal about hypersonic missiles? · 2020-05-22T17:59:14.385Z · EA · GW
ballistic ones are faster, but reach Mach 20 and similar speeds outside of the atmosphere

This seems notable, since there is no sound w/o atmosphere. So perhaps ballistic missiles never actually engage in hypersonic flight, despite reaching speeds that would be hypersonic if in the atmosphere? Though I would be surprised if they're reaching Mach 20 at a high altitude and then not still going super fast (above Mach 5) on the way down.

Comment by ESRogs on What's the big deal about hypersonic missiles? · 2020-05-22T17:54:07.941Z · EA · GW
according to Thomas P. Christie (DoD director of Operational Test and Evaluation from 2001–2005) current defense systems “haven’t worked with any degree of confidence”.[12] A major unsolved problem is that credible decoys are apparently “trivially easy” to build, so much so that during missile defense tests, balloon decoys are made larger than warheads--which is not something a real adversary would do. Even then, tests fail 50% of the time.

I didn't follow this. What are the decoys? Are they made by the attacking side or the defending side? Why does them being easy to build mean that people make large ones during tests, and why wouldn't that also happen in a real attack? Why is it notable that tests still fail at a high rate in the presence of large decoys?

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2020-01-26T00:44:17.497Z · EA · GW

Thanks! Just read it.

I think there's a key piece of your thinking that I don't quite understand / disagree with, and it's the idea that normativity is irreducible.

I think I follow you that if normativity were irreducible, then it wouldn't be a good candidate for abandonment or revision. But that seems almost like begging the question. I don't understand why it's irreducible.

Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn't seem like it poses any challenge to your intuitions from footnote 6 in the document (starting with "I personally care a lot about the question: 'Is there anything I should do, and, if so, what?'"). And at the same time it explains why there are weird edge cases where the concept seems to break down.

So few things in life seem to be irreducible. (E.g. neither Eric nor Ben is irreducible!) So why would normativity be?

[You also should feel under no social obligation to respond, though it would be fun to discuss this the next time we find ourselves at the same party, should such a situation arise.]

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-12-01T07:58:27.116Z · EA · GW
Don't Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don't Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
...
One could argue that R_CDT sympathists don't actually have much stronger intuitions regarding the first principle than the second -- i.e. that their intuitions aren't actually very "targeted" on the first one -- but I don't think that would be right. At least, it's not right in my case.

I would agree that, with these two principles as written, more people would agree with the first. (And I certainly believe you that that's right in your case.)

But I feel like the second doesn't quite capture what I had in mind regarding the DMTW intuition applied to P_'s.

Consider an alternate version:

If a decision would definitely make things worse, then taking that decision is not good policy.

Or alternatively:

If a decision would definitely make things worse, a rational person would not take that decision.

It seems to me that these two claims are naively intuitive on their face, in roughly the same way that the "... then taking that decision is not rational." version is. And it's only after you've considered prisoners' dilemmas or Newcomb's paradox, etc. that you realize that good policy (or being a rational agent) actually diverges from what's rational in the moment.

(But maybe others would disagree on how intuitive these versions are.)

EDIT: And to spell out my argument a bit more: if several alternate formulations of a principle are each intuitively appealing, and it turns out that whether some claim (e.g. R_CDT is true) is consistent with the principle comes down to the precise formulation used, then it's not quite fair to say that the principle fully endorses the claim and that the claim is not counter-intuitive from the perspective of the original intuition.

Of course, this argument is moot if it's true that the original DMTW intuition was always about rational in-the-moment action, and never about policies or actors. And maybe that's the case? But I think it's a little more ambiguous with the "... is not good policy" or "a rational person would not..." versions than with the "Don't commit to a policy..." version.

EDIT2: Does what I'm trying to say make sense? (I felt like I was struggling a bit to express myself in this comment.)

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-29T06:58:38.601Z · EA · GW
There may be a pretty different argument here, which you have in mind. I at least don't see it yet though.

Perhaps the argument is something like:

  • "Don't make things worse" (DMTW) is one of the intuitions that leads us to favoring R_CDT
  • But the actual policy that R_CDT recommends does not in fact follow DMTW
  • So R_CDT only gets intuitive appeal from DMTW to the extent that DMTW was about R_'s, and not about P_'s
  • But intuitions are probably(?) not that precisely targeted, so R_CDT shouldn't get to claim the full intuitive endorsement of DMTW. (Yes, DMTW endorses it more than it endorses R_FDT, but R_CDT is still at least somewhat counter-intuitive when judged against the DMTW intuition.)
Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-29T06:23:07.156Z · EA · GW
both R_UDT and R_CDT imply that the decision to commit yourself to a two-boxing policy at the start of the game would be rational

That should be "a one-boxing policy", right?

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-27T03:31:57.236Z · EA · GW

Thanks! This is helpful.

It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn't even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reaction, though, is to continue to believe that some things have property P -- but just drop the assumption that these things will also have both property Q1 and property Q2.

I think I disagree with the claim (or implication) that keeping P is more often the more natural reaction. Well, you're just saying it's "often" natural, and I suppose it's natural in some cases and not others. But I think we may disagree on how often it's natural, though that's hard to say at this very abstract level. (Did you see my comment in response to your Realism and Rationality post?)

In particular, I'm curious what makes you optimistic about finding a "correct" criterion of rightness. In the case of the politician, it seems clear that learning they don't have some of the properties you thought they had shouldn't call into question whether they exist at all.

But for the case of a criterion of rightness, my intuition (informed by the style of thinking in my comment), is that there's no particular reason to think there should be one criterion that obviously fits the bill. Your intuition seems to be the opposite, and I'm not sure I understand why.

My best guess, particularly informed by reading through footnote 15 on your Realism and Rationality post, is that when faced with ethical dilemmas (like your torture vs lollipop examples), it seems like there is a correct answer. Does that seem right?

(I realize at this point we're talking about intuitions and priors on a pretty abstract level, so it may be hard to give a good answer.)

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-26T09:10:15.335Z · EA · GW
But the arguments I've seen for "CDT is the most rational decision theory" to date have struck me as either circular, or as reducing to "I know CDT doesn't get me the most utility, but something about it just feels right".

It seems to me like they're coming down to saying something like: the "Guaranteed Payoffs Principle" / "Don't Make Things Worse Principle" is more core to rational action than being self-consistent. Whereas others think self-consistency is more important.

Mind you, if the sentence "CDT is the most rational decision theory" is true in some substantive, non-trivial, non-circular sense

It's not clear to me that the justification for CDT is more circular than the justification for FDT. Doesn't it come down to which principles you favor?

Maybe you could say FDT is more elegant. Or maybe that it satisfies more of the intuitive properties we'd hope for from a decision theory (where elegance might be one of those). But I'm not sure that would make the justification less-circular per se.

I guess one way the justification for CDT could be more circular is if the key (or only) principle that pushes in favor of it over FDT can really just be seen as a restatement of CDT, in a way that the principles pushing in favor of FDT are not just restatements of FDT. Is that what you would claim?

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-26T02:03:48.116Z · EA · GW

Just want to note that I found the R_ vs P_ distinction to be helpful.

I think using those terms might be useful for getting at the core of the disagreement.

Comment by ESRogs on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-26T01:31:33.493Z · EA · GW
is more relevant when trying to judge the likelihood of a criterion of rightness being correct

Sorry to drop in in the middle of this back and forth, but I am curious -- do you think it's quite likely that there is a single criterion of rightness that is objectively "correct"?

It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. "don't make things worse", or "don't be self-effacing"). And so far there doesn't seem to be any single criterion that satisfies all of them.

So why not just conclude that, similar to the case with voting and Arrow's theorem, perhaps there's just no single perfect criterion of rightness?

In other words, once we agree that CDT doesn't make things worse, but that UDT is better as a general policy, is there anything left to argue about about which is "correct"?

EDIT: Decided I had better go and read your Realism and Rationality post, and ended up leaving a lengthy comment there.

Comment by ESRogs on 'Longtermism' · 2019-08-02T23:37:17.702Z · EA · GW

IMHO the most natural name for "people at any time have equal value" should be something like temporal indifference, which more directly suggests that meaning.

Edit: I retract temporal indifference in favor of Holly Elmore's suggestion of temporal cosmopolitanism.

Comment by ESRogs on 'Longtermism' · 2019-08-02T23:35:04.823Z · EA · GW
Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.

Why not include this in the definition of strong longtermism, but not weak longtermism?

Having longtermism just mean "caring a lot about the long-term future" seems the most natural and least likely to cause confusion. I think for it to mean anything other than that, you're going to have to keep beating people over the head with the definition (analogous to the sorry state of the phrase, "begs the question").

When most people first hear the term longtermism, they're going to hear it in conversation or see it in writing without the definition attached to it. And they are going to assume it means caring a lot about the long-term future. So why define it to mean anything other than that?

On the other hand, anyone who comes across strong longtermism, is much more likely to realize that it's a very specific technical term, so it seems much more natural to attach a very specific definition to it.

Comment by ESRogs on Inadequacy and Modesty · 2017-10-30T01:20:10.594Z · EA · GW
  1. Extra potency may arise if the product is important enough to affect the market or indeed the society it operates in, creating a feedback loop (what George Soros calls reflexivity). The development of credit derivatives and subsequent bust could be a devastating example of this. And perhaps ‘the Big Short’ is a good illustration of Eliezer’s points.

Could you say more about this point? I don't think I understand it.

My best guess is that it means that when changes to the price of an asset result in changes out in the world, which in turn cause the asset price to change again in the same direction, then the asset price is likely to be wrong, and one can expect a correction. Is that it?

Comment by ESRogs on Lunar Colony · 2016-12-22T18:13:21.176Z · EA · GW

Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth...

What if it's a big asteroid?

Comment by ESRogs on Why I'm donating to MIRI this year · 2016-12-02T09:08:20.894Z · EA · GW

Note that this is particularly an argument about money. I think that there are important reasons to skew work towards scenarios where AI comes particularly soon, but I think it’s easier to get leverage over that as a researcher choosing what to work on (for instance doing short-term safety work with longer-term implications firmly in view) than as a funder.

I didn't understand this part. Are you saying that funders can't choose whether to fund short-term or long-term work (either because they can't tell which is which, or there aren't enough options to choose from)?

Comment by ESRogs on EA Ventures Request for Projects + Update · 2015-06-09T18:05:34.369Z · EA · GW

The project was successfully funded for $19,000. We found the fundraising process to take slightly longer and be slightly more difficult than we were expecting.

Hey Kerry, I'm listed as one of the funders on the eaventures.org front page, but I didn't hear anything about this fund raise. Should I have?

Comment by ESRogs on Should you give your best now or later? · 2015-05-12T05:17:36.197Z · EA · GW

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Paul Allen didn't turn to "our side" yet.

Has Paul Allen come round to advocating caution and AI safety? The sources I can find right now suggest Allen is not especially worried.

http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

Comment by ESRogs on Risk aversion and investment (for altruists) · 2014-12-12T18:29:12.876Z · EA · GW

You're multiplying by X inside the log.

Good catch, my bad.

Edit: how did you do the code? I had difficulty with formatting, hence the excess line breaks.

Add four spaces at the beginning of a line to make it appear as code.

~ E(log(1 + d/(Y+d)))

= E(log((Y + d + d)/Y))

How did you get from one of these steps to the other? Shouldn't the second be E[log((Y+d+d)/(Y+d))]?

Comment by ESRogs on Risk aversion and investment (for altruists) · 2014-12-12T10:38:03.254Z · EA · GW

Hmm, I've thought about this some more and I actually still don't understand it. I might just be being dense, but I feel like you've made a very interesting claim here that would be important if true, so I'd really like to understand it. Perhaps others can benefit as well.

Here's what I was able to work out for myself. Given that log(X+d) ~ log(X) + d/X, then:

d/X ~ log(X+d) - log(X)
d/X ~ log((X+d)/X)
d/X ~ log(1 + d/X)

So maximizing E[d/X] should be approximately equivalent to maximizing E[log(1 + d/X)]. This is looking closer to what you said, but there are two things I still don't understand.

  1. It looks like you've multiplied the second term (E[log(1+d/X)]) through by an X. Can you do that within an expectation, given that X isn't a constant?

  2. Even once you have E[log(d+X)] as a maximization target, I'd describe that as maximizing the log of the sum of your wealth and the world's. And it seems like a quite different goal from maximizing the log-wealth of the world. Is there another step I'm missing?
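
As a quick numerical sanity check on the approximation step itself (my own sketch, with an arbitrary choice of X and d):

    import numpy as np

    # Check that d/X ~ log(1 + d/X) when d is small relative to X
    X, d = 100.0, 1.0
    lhs = d / X                 # 0.01
    rhs = np.log(1 + d / X)     # ~0.00995
    print(lhs, rhs, abs(lhs - rhs))   # difference ~5e-5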

Comment by ESRogs on Risk aversion and investment (for altruists) · 2014-10-24T18:19:28.635Z · EA · GW

A simple argument suggests that an investor concerned with maximizing their influence ought to maximize the expected fraction of world wealth they control. This means that the value of an extra dollar of investment returns should vary inversely with the total wealth of the world. This means that the investor should act as if they were maximizing the expected log-wealth of the world.

Could someone explain how the final sentence follows from the others?

If I understand correctly, the first sentence says an investor should maximize E(wealth-of-the-investor / wealth-of-the-world), while the final sentence says they should maximize E(log(wealth-of-the-world)). Is that right? How does that follow?

Comment by ESRogs on Brainstorming thread: ideas for large EA funders · 2014-10-02T04:21:20.192Z · EA · GW

The risk from investing in individual stocks rather than broad indices is pretty minor

This depends a lot on how many stocks you're buying, right? Or would you still make this claim if someone were buying < 10 stocks? < 5?