Posts

Wei_Dai's Shortform 2019-12-11T21:07:57.056Z · score: 9 (3 votes)
How should large donors coordinate with small donors? 2019-01-08T22:47:56.661Z · score: 54 (26 votes)
Beyond Astronomical Waste 2018-12-27T09:27:26.728Z · score: 22 (7 votes)

Comments

Comment by wei_dai on EricHerboso's Shortform · 2020-09-05T08:36:38.762Z · score: 16 (6 votes) · EA · GW

I definitely agree with that. The articles I cited make it clear that overloading the word with new definitions creates opportunities for both unintentional confusion and strategic ambiguity, and makes it harder to think and talk about the original concept of "racist".

Comment by wei_dai on EricHerboso's Shortform · 2020-09-05T08:10:34.683Z · score: 28 (8 votes) · EA · GW

I think it would be fair to say that parts of academia have redefined "racist" or "racism" in different ways, some similar to Eric's definition. But my understanding is that they've done it for political (as opposed to scholarly) reasons. (Otherwise they would have created new terms to refer to their new concepts, instead of overloading an existing word.) These articles may help explain what is going on:

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T07:04:29.700Z · score: 19 (6 votes) · EA · GW

I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:

and this comment in the comments section of a NYT story about cancel culture among the students:

Having just graduated from the University of Minnesota last year, a very liberal college, I believe these examples don’t adequately show how far cancel culture has gone and what it truly is. The examples used of disassociating from obvious homophobes, or more classic bullying that teenage girls have always done to each other since the dawn of time is not new and not really cancel culture. The cancel culture that is truly new to my generation is the full blocking or shutting out of someone who simply has a different opinion than you. My experience in college was it morphed into a culture of fear for most. The fear of cancellation or punishment for voicing an opinion that the “group” disagreed with created a culture where most of us sat silent. My campus was not one of fruitful debate, but silent adherence to whatever the most “woke” person in the classroom decided was the correct thing to believe or think. This is not how things worked in the past, people used to be able to disagree, debate and sometimes feel offended because we are all looking to get closer to the truth on whatever topic it may be. Our problem with cancel culture is it snuffs out any debate, there is no longer room for dissent or nuance, the group can decide that your opinion isn’t worth hearing and—poof you’ve been canceled into oblivion. Whatever it’s worth I’d like to note I’m a liberal, voted for Obama and Hillary, those who participate in cancel culture aren’t liberals to me, they’ve hijacked the name.

About "I have lots of friends in academia and follow academic blogs etc., and basically don’t hear any of them talking about cancel culture within that context." there could be a number of explanations aside from cancel culture not being that bad in academia. Maybe you could ask them directly about it?

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T02:49:11.053Z · score: 23 (8 votes) · EA · GW

I think the biggest reason I'm worried is that seemingly every non-conservative intellectual or cultural center has fallen prey to cancel culture, e.g., academia, journalism, publishing, museums/arts, tech companies, local governments in left-leaning areas, etc. There are stories about it happening in a crochet group, and I've personally seen it in action in my local parent groups. Doesn't that give you a high enough base rate that you should think "I better assume EA is in serious danger too, unless I can understand why it happened to those places, and why the same mechanisms/dynamics don't apply to EA"?

Your reasoning (from another comment) is "I've seen various incidents that seem worrying, but they don't seem to form a pattern." Well, if you only get seriously worried once there's a clear pattern, that may well be too late to do anything about it! Remember that many of those intellectual/cultural centers were once filled with liberals who visibly supported free speech, free inquiry, etc., and many of them would have cared enough to try to do something about cancel culture once they saw a clear pattern of movement in that direction, but by then it must have been too late already.

For what it’s worth, if I had to choose a top issue that might lead EA to “fail”, I’d cite “low or stagnant growth,” which is something I think about a lot, inside and outside of work.

"Low or stagnant growth" is less worrying to me because that's something you can always experiment or change course on, if you find yourself facing that problem. In other words you can keep trying until you get it right. With cancel culture though, if you don't get it right the first time (i.e., you allow cancel culture to take over) then it seems very hard to recover.

I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the “cancel culture” direction.

Thanks for this information. It does make it more understandable why you're personally not focusing on this problem. I still think it should be at or near the top of your mind too though, especially as you think about and discuss related issues like this particular cancellation of Robin Hanson.

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T00:47:18.356Z · score: 28 (11 votes) · EA · GW

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed "introductory spaces" like undergrad classes to become "safe spaces", thinking they could continue serious open discussion in seminar rooms and journals; then those undergrads became graduate students and professors and demanded "safe spaces" everywhere they went. And how is anyone supposed to argue against "safety", especially once its importance has been institutionalized (i.e., departments were built in part to enforce "safe spaces", which can then easily extend their power beyond "introductory spaces")?

ETA: Greg Lukianoff and Jonathan Haidt have a book (and an earlier Atlantic article) titled The Coddling of the American Mind detailing problems caused by the introduction of "safe spaces" in universities.

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T23:35:46.329Z · score: 26 (9 votes) · EA · GW

That cancellation attempt was clearly a bridge too far. EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I've observed and, as we've now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I'm worried however that each "victory" by CC has a ratcheting effect on EA culture, whereas failed cancellations don't really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.

Honestly, I'm not sure what the solution is in the long run. I mean, academia is full of smart people, many of whom surely dislike CC as much as most of us and would push back against it if they could, yet academia is now the top example of cancel culture. What is something that we can do that they couldn't, or didn't think of?

Comment by wei_dai on EricHerboso's Shortform · 2020-09-02T05:29:13.838Z · score: 24 (10 votes) · EA · GW

people of the global majority

Apologies for not having the time to engage more substantively with your post, but before this term starts spreading as the fashion of the day, can someone explain to me, given that there are countless ways to divide the global population into a majority and a minority, why it makes sense to privilege the white/non-white divide and call non-whites "the" global majority, with the implication that whites are "the" global minority? Are you basically saying that of all the possible differences between people in the world, this is the most important one, and therefore deserving of "the"? And how is anyone supposed to know that's what you've decided to do, when first encountering this term?

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T04:23:29.275Z · score: 16 (6 votes) · EA · GW

I’d be happy to read any arguments for this being a uniquely bad time

There were extensive discussions around this at https://www.greaterwrong.com/posts/PjfsbKrK5MnJDDoFr/have-epistemic-conditions-always-been-this-bad, including one about the 1950s. (Note that those discussions were from before the recent cluster of even more extreme cancellations like David Shor and the utility worker who supposedly made a white power sign.)

ETA: See also this Atlantic article that just came out today, and John McWhorter's tweet:

Whew! Because of the Atlantic article today, I am now getting another flood of missives from academics deeply afraid. Folks, I hear you but the volume outstrips my ability to write back. Please know I am reading all of them eventually, and they all make me think.

If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"? (Unless you know that others are already working on the problem, and it sure doesn't sound like it.)

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T01:11:13.327Z · score: 49 (21 votes) · EA · GW

(I'm occupied with some things so I'll just address this point and maybe come back to others later.)

It seems like the balance of opinion is very firmly anti-CC.

That seems true, but on the other hand, the upvotes show that concern about CC is very widespread in EA, so why did it take someone like me to make the concern public? Thinking about this, I note that:

  1. I have no strong official or unofficial relationships with any EA organizations and have little personal knowledge of "EA politics". If there's a danger or trend of EA going in a CC direction, I should be among the last to know.
  2. Until recently I have had very little interest in politics or even socializing. (I once wrote "And while perhaps not quite GPGPU, I speculate that due to neuroplasticity, some of my neurons that would have gone into running social interactions are now being used for other purposes instead.") Again it seems very surprising that someone like me would be the first to point out a concern about EA developing or joining CC, except:
  3. I'm probably well within the top percentile of all EAs in terms of "cancel proofness", because I have both an independent source of income and a non-zero amount of "intersectional currency" (e.g., I'm a POC and first-generation immigrant). I also have no official EA affiliations (which I deliberately maintained in part to be a more unbiased voice, but I had no idea that it would come in handy for this) and I don't like to do talks/presentations, so there's pretty much nothing about me that can be canceled.

The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

Indeed, I also have direct evidence in the form of EAs contacting me privately (after seeing my earlier comments) to say that they're worried about EA developing/joining CC, and telling me what they've seen to make them worried, and saying that they can't talk publicly about it.

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T20:46:57.949Z · score: 47 (20 votes) · EA · GW

I maybe should have said something like “concerns related to social justice” when I said “diversity.” I wound up picking the shorter word, but at the price of ambiguity.

I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...

Getting the right balance seems difficult.

Well, maybe not, if some of the apparent options aren't real options. For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope. (Or to quickly run back to level ground while you still have some chance, as I'm starting to suspect that EA has taken quite a few steps down the slope already.)

It may be that in the end EA can't fight (i.e., can't win against) SJ-like dynamics, and therefore EA joining cancel culture is more "effective" than it getting canceled as a whole. If EA leaders have made an informed and well-considered decision about this, then fine, tell me and I'll defer to them. (If that's the case, I'll appreciate that it would be politically impossible to publicly lay out all of their reasoning.) It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about "getting the right balance" without even mentioning the obvious possibility of a slippery slope.

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T11:09:55.191Z · score: 16 (8 votes) · EA · GW

The "alternative view" ("emotional damage") I mentioned was in part trying to summarize the view apparently taken by EA Munich and being defended in the OP: "And yet, many people are actually uncomfortable with Hanson for some of the same reasons brought up in the Slate piece; they find his remarks personally upsetting or unsettling."

The problem is that they contribute to the toxoplasma of rage dynamics (esp. combined with some people’s impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

This would be a third view, which I hadn't seen anyone mention in connection with Robin Hanson until now. I guess it seems plausible although I personally haven't observed the "negative effect" you describe so I don't know how big the effect is.

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T09:05:15.024Z · score: 35 (19 votes) · EA · GW

I don’t think he’s even trying, and maybe he’s trying to deliberately walk as close to the line as possible. What’s the point in that?

I can think of at least three reasons for someone to be "edgy" like that:

  1. To signal intelligence, because it takes knowledge and skill to be able to walk as close to a line as possible without crossing it. This could be the (perhaps subconscious) intent even if the effort ends up failing or backfiring.
  2. To try to hold one end of the Overton window in place, if one was worried about the Overton window shifting or narrowing.
  3. To try to desensitize people (i.e., reduce their emotional reactions) about certain topics, ideas, or opinions.

One could think of "edgy" people as performing a valuable social service (2 and 3 above) while taking a large personal risk (if they accidentally cross the line), and receiving the personal benefits of intelligence signaling as compensation. On this view, it's regrettable that more people aren't willing to be "edgy" (perhaps because we as a culture have devalued intelligence signaling relative to virtue signaling), and as a result our society is suffering the negative consequences of an increasingly narrow Overton window and an increasingly sensitive populace.

An alternative view would be that there are too many "edgy" people causing damage to society by making the Overton window too wide or anchoring it in the wrong place, and causing emotional damage to lots of people whom they have no business trying to "desensitize", and that they're doing this for the selfish benefit of signaling their intelligence to others. Therefore we should coordinate to punish such people by canceling/deplatforming/shaming them, etc.

(You can perhaps tell which view I'm sympathetic to, and which view is the one that the most influential parts of Western civilization have implicitly adopted in recent years.)

Comment by wei_dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T07:41:31.407Z · score: 54 (35 votes) · EA · GW

Do you have any thoughts on this earlier comment of mine? In short, are you worried about EA developing a full-scale cancel culture similar to other places where SJ values currently predominate, like academia or MSM / (formerly) liberal journalism? (By that I mean a culture where many important policy-relevant issues either cannot be discussed, or the discussions must follow the prevailing "party line" in order for the speakers to not face serious negative consequences like career termination.) If you are worried, are you aware of any efforts to prevent this from happening? Or at least discussions around this among EA leaders?

I realize that EA Munich and other EA organizations face difficult trade-offs and believe that they are making the best choices possible given their values and the information they have access to, but people in places like academia must have thought the same when they started what would later turn out to be their first steps towards cancel culture. Do you think EA can avoid sharing the same eventual fate?

Comment by wei_dai on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T03:07:41.444Z · score: 17 (8 votes) · EA · GW

From the podcast transcript:

I think something that sometimes people have in mind when they talk about the orthogonality of intelligence and goals is they have this picture of AI development where we’re creating systems that are, in some sense, smarter and smarter. And then there’s this separate project of trying to figure out what goals to give these AI systems. The way this works in, I think, in some of the classic presentations of risk is that there’s this deadline picture. That there will come a day where we have extremely intelligent systems. And if we can’t by that day figure out how to give them the right goals, then we might give them the wrong goals and a disaster might occur. So we have this exogenous deadline of the creep of AI capability progress, and that we need to solve this issue before that day arises. That’s something that I think I, for the most part, disagree with.

I continue to have a lot of uncertainty about how likely it is that AI development will look like "there’s this separate project of trying to figure out what goals to give these AI systems" vs a development process where capability and goals are necessarily connected. (I didn't find your arguments in favor of the latter very persuasive.) For example it seems GPT-3 can be seen as more like the former than the latter. (See this thread for background on this.)

To the extent that AI development is more like the latter than the former, that might be bad news for (a certain version of) the orthogonality thesis, but it can be even worse news for the prospect of AI alignment, because instead of disaster striking only if we can't figure out the right goals to give to the AI, it can also be the case that we know what goals we want to give it, but due to constraints of the development process, we can't give it those goals and can only build AI with unaligned goals. So it seems to me that the latter scenario can also be rightly described as "exogenous deadline of the creep of AI capability progress". (In both cases, we can try to refrain from developing/deploying AGI, but it may be a difficult coordination problem for humanity to stay in a state where we know how to build AGI but choose not to, and in any case this consideration cuts equally across both scenarios.)

Comment by wei_dai on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T06:08:24.067Z · score: 59 (19 votes) · EA · GW

I want to push back against this, from one of your slides:

If we’ve failed to notice important issues with classic arguments until recently, we should also worry about our ability to assess new arguments

I feel like the LW community did notice many important issues with the classic arguments. Personally, I was/am pessimistic about AI risk but thought my reasons were not fully or mostly captured by those arguments, and I saw various issues/caveats with them that I talked about on LW. I'm going to just cite my own posts/comments because they're the easiest to find, but I'm sure there were lots of criticisms from others too. 1 2 3 4

Of course I'm glad that you thought about and critiqued those arguments in a more systematic and prominent way, but it seems wrong to say or imply that nobody noticed their issues until now.

Comment by wei_dai on Concern, and hope · 2020-07-10T20:58:05.650Z · score: 18 (8 votes) · EA · GW

The witch hunts were sometimes endorsed/supported by the authorities, and other times not, just like the Red Guards:

Under Charlemagne, for example, Christians who practiced witchcraft were enslaved by the Church, while those who worshiped the Devil (Germanic gods) were killed outright.

By early 1967 Red Guard units were overthrowing existing party authorities in towns, cities, and entire provinces. These units soon began fighting among themselves, however, as various factions vied for power amidst each one’s claims that it was the true representative of Maoist thought. The Red Guards’ increasing factionalism and their total disruption of industrial production and of Chinese urban life caused the government in 1967–68 to urge the Red Guards to retire into the countryside. The Chinese military was called in to restore order throughout the country, and from this point the Red Guard movement gradually subsided.

I would say the most relevant difference between them is that witch hunts were more "organic"; in other words, they happened pretty much everywhere people believed in the possibility of witches (which was pretty much everywhere, period), whereas the Cultural Revolution was driven/enabled entirely by an ideology inculcated through schools, universities, and mass-media propaganda.

Comment by wei_dai on What values would EA want to promote? · 2020-07-09T22:02:33.619Z · score: 33 (18 votes) · EA · GW

If so, what are the support values needed for maximizing those values?

I think a healthy dose of moral uncertainty (and normative uncertainty in general) is really important to have, because it seems pretty easy for any ethical/social movement to become fanatical or to attract a radical element, and end up doing damage to itself, its members, or society at large. ("The road to hell is paved with good intentions" and all that.)

A large part of what I found attractive about EA is that its leaders emphasize normative uncertainty so much in their writings (starting with Nick Bostrom back in 2009), but perhaps it's not "proselytized" as much as it should be day-to-day.

Comment by wei_dai on Concern, and hope · 2020-07-09T04:51:46.048Z · score: 38 (14 votes) · EA · GW

Re "Cultural Revolution" comparison, let me put it this way: I'm a naturalized citizen of the US who has lived here for 30+ years, and recently I've spent 20+ hours researching the political climate and immigration policies of other countries I could potentially move to. I've also refrained multiple times from making a public comment on a topic that I have an opinion on (including on this forum), because of potential consequences that I've come to fear may happen in a few years or decades later. (To be clear I do not mean beatings, imprisonment, or being killed, except as unlikely tail risks, but more along the lines of public humiliation, forced confessions/apologies, career termination, and collective punishment of my family and associates.)

If there are better or equally valid historical analogies for thinking about what is happening and what it may lead to, I'm happy to hear them out. But if some people are just offended by the comparison, I can only say that I totally understand where they're coming from.

Comment by wei_dai on EA considerations regarding increasing political polarization · 2020-06-24T20:44:11.650Z · score: 12 (8 votes) · EA · GW

The Cultural Revolution analogy may be more fitting in some ways though. For example it pretty quickly devolved into factions of Red Guards fighting (physically as well as rhetorically) each other to show who is more "red" or more "revolutionary", which is a bit similar to how many people being canceled today are Democrats who strongly oppose Trump. (See this and this.) My knowledge of history is limited, but I'm not aware of this kind of thing happening during the Red Scares?

Comment by wei_dai on Are we entering an experiment in Modern Monetary Theory (MMT)? · 2020-06-13T00:16:09.670Z · score: 4 (3 votes) · EA · GW

And here's a confirmation from a prominent proponent of MMT:

In 2020, Congress has been showing us — in practice if not in its rhetoric — exactly how M.M.T. works: It committed trillions of dollars this spring that in the conventional economic sense it did not “have.” It didn’t raise taxes or borrow from China to come up with dollars to support our ailing economy. Instead, lawmakers simply voted to pass spending bills, which effectively ordered up trillions of dollars from the government’s bank, the Federal Reserve. In reality, that’s how all government spending is paid for.

Comment by wei_dai on Are we entering an experiment in Modern Monetary Theory (MMT)? · 2020-05-28T06:58:57.514Z · score: 6 (4 votes) · EA · GW

This has been the best article I can find on MMT. It's very long, but the most relevant part to your question is this:

The QE model can create a synthetic version of MMT if the government and Federal Reserve work closely together, which is what is happening now. My previous example of the Treasury sending out helicopter checks to people that are ultimately paid for by issuing Treasuries that the Federal Reserve buys with newly-created dollars (with primary dealer banks as intermediaries), is basically MMT in practice. In other words, what people think of as MMT can essentially be done in the current legal framework.

However, although QE creates new dollars out of thin air, the process still goes through the motions of pretending to respectfully treat money in the same way it was treated in the first two models, meaning something that has to be borrowed from somewhere before spent, and balanced by an asset on the other side (a Treasury security that gets locked away on a central bank balance sheet in place of newly-created dollars, forever to be rolled to the next one when it matures). For a while, those motions along with statements by officials provided many investors reasons to believe that maybe newly-created dollars would be paid back, maybe the Federal Reserve will be able to reduce their balance sheet, and so forth. Those beliefs proved to be unrealistic, but the realization that it was debt monetization is only coming years later for many people after the temporary nature of it proved permanent when quantitative tightening failed in 2019.

MMT, on the other hand, drops a lot of those pretenses of QE and just treats money as something that can be printed whenever unused economic capacity exists. It's not that fundamentally different than QE; it just cuts to the heart of it and removes some of the steps.

Comment by wei_dai on Racial Demographics at Longtermist Organizations · 2020-05-02T00:56:53.118Z · score: 134 (60 votes) · EA · GW

I'm a POC, and I've been recruited by multiple AI-focused longtermist organizations (in both leadership and research capacities) but did not join for personal reasons. I've participated in online longtermist discussions since the 1990s, and AFAICT participants in those discussions have always skewed white. Specifically, I don't know anyone else of Asian descent (like myself) who was a frequent participant in longtermist discussions even as of 10 years ago. This has not been a problem or issue for me personally – I guess different groups participate at different rates because they tend to have different philosophies and interests, and I've never faced any racism or discrimination in longtermist spaces or had my ideas taken less seriously for not being white. I'm actually more worried about organizations setting hiring goals for themselves that assume that everyone does have the same philosophies and interests, potentially leading to pathological policies down the line.

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-04-26T19:06:33.916Z · score: 3 (2 votes) · EA · GW

In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY − 1x interest).

The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future years.

This is a really interesting and counterintuitive idea, which I really like, but after thinking about it a lot, I've decided it probably does not work. Here's my argument. For simplicity let's assume that I know for sure I'm going to die in 30 years[1] and I'm planning to donate my investment to a tax-exempt org at that point, and ignore dividends[2]. First, the reason I'm able to get a better expected return buying stocks instead of a 30-year government bond is that the market is compensating me for the risk that stocks will be worth less than the 30-year government bond at the end of 30 years. If that happens, I'm left with 0.3x more losses by buying 1.3x futures instead of 1x stock, but the tax losses I incurred are worth nothing because they go away when I die, so they don't compensate me for the extra losses. (I don't think there's a way to transfer them to another person or entity?) So (compared to leveraged buy-and-hold) the futures strategy gives you equal gains if stocks do better than the risk-free return, but is 0.3x worse if stocks do worse than the risk-free return. Therefore leveraged buy-and-hold does seem to represent a significant free lunch (ultimately coming out of government pockets) compared to futures.

ETA: The situation is actually worse than this, because there's a significant risk that during the 30 years the market first rises and then falls, so I end up paying taxes on capital gains during the rise that later turn into taxable losses, which become worthless when I die.

ETA2: To summarize/restate this in a perhaps more intuitive way, comparing 1x stocks with 1x futures, over the whole investment period stocks give you .3x more upside potential and the same or lower downside risk.

[1] Are you perhaps assuming that you'll almost certainly live much longer than that?

[2] Re: dividends, my understanding is that equity futures are a pure bet on stock prices and ignore dividends, but buying ETFs obviously does give you dividends, so (aside from taxes) equity futures actually represent a different risk/return profile compared to buying index ETFs. I'm not sure how to think about this, e.g., can we still treat SPY and SPX futures as nearly identical (aside from taxes), and which is a better idea overall if we do take both dividends and taxes into account?
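
To make my argument above concrete, here's a toy calculation (a minimal sketch under my own simplifying assumptions: a single tax event at the end, an assumed 20% capital gains rate, interest and dividends ignored) comparing the terminal outcomes of 1x stock and 1x futures for a donor whose unused tax losses expire worthless:

```python
# Toy comparison of 1x buy-and-hold stock vs 1x equity futures, for an
# investor who donates everything to a tax-exempt org at death, so unused
# taxable losses expire worthless. Illustrative assumptions only: one tax
# event at the end, 20% capital gains rate, interest/dividends ignored.

CAP_GAINS_TAX = 0.20  # assumed rate

def stock_outcome(gross_return):
    # Buy, hold, donate: appreciated shares are never taxed, and losses
    # don't matter since there was never a tax bill to offset.
    return gross_return

def futures_outcome(gross_return):
    pnl = gross_return - 1.0
    if pnl > 0:
        return 1.0 + pnl * (1 - CAP_GAINS_TAX)  # gains are taxed
    return 1.0 + pnl  # losses: the tax offset dies with me, no relief

for r in (1.5, 1.0, 0.7):  # stocks up 50%, flat, down 30%
    print(f"gross {r:.1f}x -> stock {stock_outcome(r):.2f}x, "
          f"futures {futures_outcome(r):.2f}x")
```

The futures position matches the stock on the downside but gives up part of the upside, which is the sense in which leveraged buy-and-hold looks like a free lunch here.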

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-04-24T01:53:08.103Z · score: 3 (2 votes) · EA · GW

Side note on tax considerations of financing methods (for investing in taxable accounts):

  • With futures you are forced to realize capital gains or losses at the end of every year even if you hold the futures longer than that.
  • With either box spread financing or margin loans, if you buy and hold investments that rise in value, you don't have to realize capital gains and can avoid paying capital gains taxes on them altogether if you donate those investments later.
  • With box spread financing, the interest you pay appears in the form of capital losses (upon expiration of the box spread options, in other words the loan), which you can use to offset your capital gains if you have any, but can't reduce your other taxable income such as dividend or interest income (except by a small fixed amount each year).
  • With margin loans, your interest expense is tax deductible but you have to itemize deductions (which means you give up your standard deductions).
  • With futures, the interest you "pay" is baked into the amount of capital gains/losses you end up with.

I think (assuming the same implicit/explicit interest rates for all 3 financing methods) this means altruists investing in taxable accounts should almost certainly avoid futures, and should consider margin loans over box spread financing if they have significant interest expenses and don't have a lot of realized capital gains each year to offset. (Note that currently, possibly for a limited time, it's possible to lock in a 2.7-year interest rate of around 0.6% using box spreads, which is lower than IB's minimum interest rate of 0.75%, so the stated assumption doesn't hold.)
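
As a rough illustration of the first point, here's a toy after-tax cost comparison for $100k of one-year financing. All the rates and brackets below are made up for illustration, and I'm assuming realized gains are available to offset and that the margin borrower itemizes deductions:

```python
NOTIONAL = 100_000
CAP_GAINS_RATE = 0.20   # assumed bracket
INCOME_RATE = 0.32      # assumed marginal income bracket

def box_spread_cost(rate=0.006):
    # Interest shows up as a capital loss, usable against realized gains.
    return NOTIONAL * rate * (1 - CAP_GAINS_RATE)

def margin_loan_cost(rate=0.0075):
    # Interest is deductible against income, but only if you itemize.
    return NOTIONAL * rate * (1 - INCOME_RATE)

def futures_cost(rate=0.006):
    # Implicit interest is baked into (and reduces) capital gains.
    return NOTIONAL * rate * (1 - CAP_GAINS_RATE)

for name, cost in [("box spread", box_spread_cost()),
                   ("margin loan", margin_loan_cost()),
                   ("futures", futures_cost())]:
    print(f"{name}: ${cost:,.0f} after tax")
```

(This ignores the forced annual realization problem with futures, which is the bigger issue per the list above.)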

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-04-23T20:52:51.318Z · score: 2 (2 votes) · EA · GW

Thanks for engaging on this. I've been having trouble making up my mind about international equities, which is delaying my plan to leverage up (while hedging due to current market conditions), and it really helps to have someone argue the other side to make sure I'm not missing something.

This is most obvious in the case of bonds—if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason to invest in A. It’s anti-inductive not only because of EMH but for the very simple reason that return chasing leads you to buy high and sell low.

Assuming EMH, A's yield would only have fallen if it had become less risky, so buying A isn't actually bad, unless also buying B provides diversification benefits. Applying this to stocks, we can say that under EMH buying only US stocks has no downsides unless international equities provide diversification benefits, and since the two have been highly correlated in recent decades (after about 1990), we lose very little by buying only US stocks.

Of course in the long run this high correlation between US and international equities can't last forever, but it seems to change slowly enough over time that I can just diversify into international equities when it looks like they've started to decorrelate.
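
Going back to the quoted bond example, here's a back-of-the-envelope calculation with my own illustrative coupon and maturity assumptions (the exact multiple depends on these):

```python
def bond_price(coupon, y, n):
    # Price of a face-100 bond paying an annual coupon, at yield y with
    # n years to maturity.
    annuity = (1 - (1 + y) ** -n) / y
    return coupon * annuity + 100 * (1 + y) ** -n

# Country A: a 30y bond issued at a 2% yield is, a decade later, a 20y
# bond; with yields down to 1.5%, its price has risen above par.
price_a = bond_price(coupon=2.0, y=0.015, n=20)
# Country B: yields stayed at 2%, so the bond still trades at par.
price_b = bond_price(coupon=2.0, y=0.020, n=20)

# Ten years of 2% coupons plus the price change, as a simple total return:
print(f"A: {price_a - 100 + 20:.1f}%  vs  B: {price_b - 100 + 20:.1f}%")
```

A's backward-looking return looks much better purely because its yield fell, which is exactly the sense in which chasing it means buying high.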

If I had to guess I’d bet that US markets are salient to investors in many countries and their recent outperformance has made many people overweight them, so that they will very slightly underperform. But I’d be super interested in good empirical evidence on this front too.

US stocks were 35% owned by non-US investors as of 2018, and that share has been going up recently. Meanwhile, non-US stocks are probably >90% owned by non-US investors (I'm not sure how to find that data directly, but US investors hold only about 10% international equities in their stock portfolios). My interpretation is that non-US investors are still underweighting US stocks but have reduced this bias recently, which contributed to US outperformance, and the trend can continue for a while longer before petering out.

A lot of my thinking here comes from observing that people in places like China have much higher savings rates, but it's a big hassle at best for them to invest in US stocks (due to anti-money laundering and tax laws) and many have just never even thought in that direction, so international investment opportunities have been exhausted to a greater degree than US ones, and the data seems consistent with this.

Let me know if the above convinces you to move in my direction. If not, I might move to a 4:1 ratio of US to international equities exposure instead of 9:1.

I personally just hold the market portfolio.

BTW while looking for data, I came across this article which seems relevant here, although I'm not totally sure their reasoning is correct. I'm confused about how to reason about "market portfolio" or "properly balanced portfolio" in a world with strong "home bias" and "controlling shareholders".

But in Corporate Governance and the Home Bias (NBER Working Paper No. 8680), authors Lee Pinkowitz, Rene Stulz, and Rohan Williamson assert that at least some of the oft-noted tilt is not a bias at all but simply a reflection of the fact that a sizeable number of shares worldwide are not for sale to the average investor. They find that comparisons of U.S. portfolios to the world market for equities have failed to consider that the "controlling shareholders" who dominate many a foreign corporation do not make their substantial holdings available for normal trading.

Take this into account, the authors argue, and as much as half of the home bias disappears. A more accurate assessment of globally available shares, they say, would show that about 67 percent of a properly balanced U.S. portfolio would be invested in U.S. companies.

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-04-20T19:59:20.803Z · score: 5 (4 votes) · EA · GW

RE international equities, I wrote about this here to explain why I think most people should underweight US equities.

A large part of your argument is that one's salaries come from a US source, which doesn't apply to me (the company that provides most of my income has a pretty international revenue source). Also, as I mentioned in the FB thread linked above, US and international equities have become highly correlated in recent decades so using international equities to provide diversification against US economy tanking will not have much effect.

First, if EMH is true, there is no reason to expect US equities to have a higher Sharpe ratio than international equities.

EMH probably isn't true across national boundaries, due to "equity home bias". The US could have a higher Sharpe ratio because of that, in combination with things like a lower savings rate (higher time preference), better monetary policies (or more cost-effective policies due to reserve currency status), better governance (it seems terrible to me but perhaps still better than most other countries?), sole superpower status (allowing its companies to extract rent across the globe with fewer political consequences), etc.

Second, US outperformance is only a recent phenomenon (see this tweet and its replies), and the outperformance is pretty marginal if you look over a long time horizon.

The tweet says outperformance was after 2009, so I asked Portfolio Visualizer to maximize Sharpe ratio based on pre-2009 data, and it says to allocate 8.31% to "Global ex-US Stock Market", but that drops to 0% if I allow it to include "Total US Bond Market" (in which case it says 11% US stocks 89% US bonds). If I also add "Global Bonds (Unhedged)" it says to include 4.38% of that but still 0% of international equities.
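
(For what it's worth, the optimization Portfolio Visualizer is doing there is essentially a tangency-portfolio calculation, which is easy to sketch. The inputs below are made up for illustration, not the actual historical estimates it used:)

```python
import numpy as np

# Max-Sharpe (tangency) portfolio: w is proportional to inv(cov) @ (mu - rf).
# Annualized inputs are illustrative only.
mu = np.array([0.07, 0.06, 0.03])   # US stocks, ex-US stocks, US bonds
rf = 0.01
cov = np.array([[0.025, 0.020, 0.001],
                [0.020, 0.030, 0.001],
                [0.001, 0.001, 0.002]])

w = np.linalg.solve(cov, mu - rf)
w = w / w.sum()                     # normalize to a fully invested portfolio
sharpe = (w @ mu - rf) / np.sqrt(w @ cov @ w)
print(dict(zip(["US", "ex-US", "bonds"], w.round(3))), round(sharpe, 2))
```

With a high assumed US/ex-US correlation and a lower assumed US variance, the ex-US weight comes out near zero, similar to what the tool told me.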

add in an assumption that P/E ratios will partially mean revert

From https://www.institutionalinvestor.com/article/b1j0mvcy9792vt/Why-Value-Investing-Sucks:

“This expensing of intangibles, leading to their absence from book values, started to have a major effect on financial data (book values, earnings) from the late 1980s, due to the growth of corporate investment in intangibles,” they wrote. Speaking to II, Lev explains that this “madness of accounting” has dragged down the performance of value investors ever since.

“All the important investments like R&D and IT are immediately expensed, and people are left with highly misleading ideas about profitability and about value,” he says. “Especially with respect to small companies and medium companies that are not followed by a lot of financial analysts and not written up by the media, people rely on the financial reports. And they are terrible.”

I don’t see where RAFI recommends holding 0% exposure to US equities?

On https://interactive.researchaffiliates.com/asset-allocation#!/?category=Efficient&currency=USD&model=ER&scale=LINEAR&selected=160&terms=REAL&type=Portfolios, on the left side-bar click on "Efficient" to expand it, click on "14.0% Volatility" or any other one there, on the right side-bar click on "Equities" to expand it, and it says 0.0% for "US Large" and "US Small".

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-04-19T21:07:35.055Z · score: 9 (4 votes) · EA · GW

Thanks for your answers! I think I'll probably stick with broad indexes, since as you said investing in factors and managed futures can dramatically underperform the broader market in short time horizons and it will be hard to convince myself that it's a good idea to hold onto them. (This actually happened to me recently. I found a small factor-based investment in my portfolio, saw that it underperformed and couldn't remember the original reason for buying it, so I sold it. Oh, it was also because I needed to raise cash to buy puts and that investment had the lowest or negative capital gains.)

Some discussions I've had on Facebook recently (after writing my question here) that you may find interesting: international equities, cheap leverage, dynamic leverage, bonds, bankruptcy risk

My biggest question currently is about international equities. Looking at historical data, it seems that international equities have underperformed (had a worse Sharpe ratio than) the US while being strongly correlated with it in recent decades (which made me want a 9:1 ratio of US to international exposure, as I said in the above FB thread), but a source you cited is predicting much better performance in the future, to the extent that they're recommending 0% exposure to US equities(!) and mostly EM exposure, which is super surprising to me. Can you summarize or link to a summary of why they think that?

What distinguishes those from other, similar-looking opportunities that failed?

I don't have a good answer for this. Basically those opportunities just kind of came to me, and I haven't really had to do much to try to filter out other, similar-looking but actually less promising opportunities. I think I just thought about each opportunity for a few days and invested after still thinking it was a good idea.

Have you made any special investments that didn’t pan out?

During the dot-com bubble I tried stock picking, which didn't work out. Recently I put some time/reputation (but no money) into a cryptocurrency startup which didn't go anywhere. I can't recall anything else. But there was another investment that did work out in the 10-100x range that I haven't mentioned yet, which is that shortly after college I took a year off from work to write a piece of software (a Windows SSH server, which didn't exist on the market when I started), then handed it to a partner to develop/sell/manage (while I went back to work for another company), and within about 5 years it started throwing off enough income that I could "retire".

ETA: Oh, I also worked at a couple of startups that compensated partly in stock options that later became worthless.

Comment by wei_dai on How Much Leverage Should Altruists Use? · 2020-03-31T04:38:43.595Z · score: 8 (3 votes) · EA · GW

What is the easiest, most efficient way to buy the global "agnostic" portfolio? Can you suggest some combination of ETFs (or other vehicles) that would do it? Should FDIC-insured savings accounts and/or CDs also be part of the portfolio? (It seems to be a good deal because the federal government is essentially subsidizing them by providing an implicit guarantee that you won't lose money even if the FDIC exhausts its reserves.)

14% 30-year bonds, 14% 10-year foreign bonds

There was some recent relevant discussion under one of Paul Christiano's FB posts, which suggests that buying government bonds may not be a good idea unless one needs the unique features they offer (and I think most of us probably don't?):

This paper and its references seem relevant: https://www.nber.org/papers/w12881.pdf

At a broad level, our evidence is consistent with theories that ascribe a unique value to government debt. Bansal and Coleman (1996) present a theory in which debt, but not equity claims, are money-like and carry a convenience value. They argue that the theory can account for the high average equity premium and low average risk-free rate in the US. Our finding of a unique value provided by government debt relative to private debt supports theories such as Woodford (1990), Holmstrom and Tirole (1998), and Caballero and Krishnamurthy (2006). In these papers, the government’s credibility gives its securities unique collateral and liquidity features relative to private assets and thereby induces a premium on government assets.

Finally, how would this discussion change if, say, about every 10 years there was an opportunity to 10x your investment at the cost of a 0.5 probability of it going to 0? (For context see this and this.)
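
(On that last question, here's a Kelly-style sketch, taking the numbers at face value and assuming the opportunities are independent and that log wealth is the right objective; it suggests betting a bit under half of one's investable wealth each time:)

```python
from math import log

# Bet a fraction f of wealth on a 10x-or-zero opportunity with p = 0.5.
# Wealth multiplier: (1 + 9f) on a win, (1 - f) on a loss.
# Expected log growth: g(f) = 0.5*ln(1 + 9f) + 0.5*ln(1 - f).
# Setting g'(f) = 0 gives 9*(1 - f) = 1 + 9f, i.e. f* = 8/18 = 4/9.
def growth(f):
    return 0.5 * log(1 + 9 * f) + 0.5 * log(1 - f)

f_star = 4 / 9
print(f"f* = {f_star:.3f}, log growth per bet = {growth(f_star):.3f}")
# -> f* ≈ 0.444, about 0.51 nats (a ~1.67x expected growth factor per bet)
```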

Comment by wei_dai on Wei_Dai's Shortform · 2020-03-14T19:13:51.855Z · score: 20 (7 votes) · EA · GW

Missed opportunity for EA: I posted my coronavirus trade in part to build credibility/reputation, but someone should have done it on a larger scale, for example taken out a full page ad in the NY Times in the very early stages of the outbreak to warn the public about it. Then the next time EAs need to raise the alarm about something even bigger, they might be taken a lot more seriously. It's too late now for this outbreak, but keep this in mind for the future?

Comment by wei_dai on Activism for COVID-19 Local Preparedness · 2020-03-03T20:24:33.779Z · score: 1 (1 votes) · EA · GW

Thanks for reporting the incorrect link. I left off the "https://" (I copied it from my Chrome address bar, which leaves off the protocol if you click on the address bar instead of pressing "alt-d"; very annoying), and it still worked on ea.greaterwrong.com but not on forum.effectivealtruism.org.

Comment by wei_dai on Activism for COVID-19 Local Preparedness · 2020-03-03T06:31:42.089Z · score: 13 (4 votes) · EA · GW

This page collects expert opinions on the spread of COVID-19, and has one quote giving 40-70% and one quote giving 60% (and no other concrete predictions). Marc Lipsitch gave his reasoning for the 40-70% prediction here.

Note that he said "Should have said 40-70% of adults in a situation without effective controls." Based on my observations (reading a large amount of COVID-19 discussions and news stories), I think China, Taiwan, and Singapore have effective controls, South Korea is borderline, and Japan, US, and most of Europe are not likely to have effective controls. (And of course less developed countries almost certainly will not have effective controls.)

ETA: For example:

“We cannot do what China has done here, as that would start a panic, runs on supermarkets and banks, and any contingency measure has a negative effect on businesses and the real economy,” said a senior German government official involved in the crisis management.

Comment by wei_dai on Wei_Dai's Shortform · 2020-02-26T05:51:59.845Z · score: 4 (4 votes) · EA · GW

Someone who is vNM-rational with a utility function that is partly-altruistic/partly-selfish wouldn't give a fixed percentage of their income to charity (or have a lower bound on giving, like 10%), because such a person would dynamically adjust their relative spending on selfish interests and altruistic causes depending on empirical contingencies, for example spending more on altruistic causes when new evidence arises that shows altruistic causes are more cost-effective than previously expected, and conversely lowering spending on altruistic causes if they become less cost-effective than previously expected. (See Is the potential astronomical waste in our universe too small to care about? for a related idea.)
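
Here's a toy model of this point (entirely my own illustrative setup, not anything standard): the agent has log utility over selfish spending s and a linear altruistic term k*a, where a is the amount donated and k is the estimated cost-effectiveness of the best available charity. The optimal donation moves with k instead of staying at a fixed percentage:

```python
INCOME = 100.0

def optimal_giving(k):
    # Maximize log(s) + k*(INCOME - s): setting 1/s - k = 0 gives s* = 1/k.
    s = min(1.0 / k, INCOME)   # can't spend more on yourself than you have
    return INCOME - s

for k in (0.02, 0.05, 0.2):    # new evidence revises k up or down
    a = optimal_giving(k)
    print(f"k = {k}: give {a:.0f} ({a / INCOME:.0%} of income)")
```

Any update to the cost-effectiveness estimate k changes the optimal split, so a fixed 10% (or any fixed floor) is hard to rationalize this way.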

I think this means we have to find other ways of explaining/modeling charity giving, including the kind encouraged in the EA community.

Comment by wei_dai on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-09T21:41:56.662Z · score: 1 (1 votes) · EA · GW

Thanks Rob, I emailed you.

Comment by wei_dai on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-08T23:30:52.614Z · score: 3 (2 votes) · EA · GW

I’ve spent a lot of time on social media trying to get people to tone down their more extreme statements re: nCOV.

I would be interested to learn more about your views on the current outbreak. Can you link to the statements you made on social media, or present your perspective here (or as a top-level comment or post)?

Comment by wei_dai on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-08T11:37:54.896Z · score: 13 (9 votes) · EA · GW

Robert (or anyone else), do you know anyone who actually works in pandemic preparedness? I'm wondering how to get ideas to such people. For example:

  1. artificial summer (optimize indoor temperature and humidity to reduce viral survival time on surfaces)
  2. study mask reuse, given likely shortages (for example bake used masks in home ovens at low enough temperature to not damage the fibers but still kill the viruses)
  3. scale up manufacturing of all drugs showing effectiveness against 2019-nCoV in vitro, ahead of clinical trial results

longer term:

  1. subsidize or mandate anti-microbial touch surfaces in public spaces (door handles, etc.)
  • stockpile masks and other supplies, make the stockpiles big enough, and publicize them to avoid panic/hoarding/shortages

Comment by wei_dai on Candidate Scoring System recommendations for the Democratic presidential primaries · 2020-02-06T04:28:44.739Z · score: 14 (6 votes) · EA · GW

I'm trying to figure out which Democratic presidential candidate is likely to be best with regard to epistemic conditions in the US (i.e., most likely to improve them or at least not make them worse). This seems closely related to "sectarian tension" which is addressed in the scoring system but perhaps not identical. I wonder if you can either formally incorporate this issue into your scoring system, or just comment on it informally here.

Comment by wei_dai on A small observation about the value of having kids · 2020-01-19T19:06:54.882Z · score: 5 (3 votes) · EA · GW

There is a common thought that Effective Altruists can, through careful, good parenting, impart positive values and competence to their descendants.

I'm pretty interested in this topic. Can you say more about the best available evidence for this, and best guesses as to how to go about doing it? For example are there books you can recommend?

Comment by wei_dai on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2020-01-19T00:37:54.662Z · score: 11 (5 votes) · EA · GW

It feels very relevant that you’re flagrantly violating the “Don’t Make Things Worse” principle.

By triggering the bomb, you're making things worse from your current perspective, but making things better from the perspective of earlier you. Doesn't that seem strange and deserving of an explanation? The explanation from a UDT perspective is that by updating upon observing the bomb, you actually changed your utility function. You used to care about both the possible worlds where you end up seeing a bomb in the box, and the worlds where you don't. After updating, you think you're either a simulation within Omega's prediction so your action has no effect on yourself or you're in the world with a real bomb, and you no longer care about the version of you in the world with a million dollars in the box, and this accounts for the conflict/inconsistency.

Given the human tendency to change our (UDT-)utility functions by updating, it's not clear what to do (or what is right), and I think this reduces UDT's intuitive appeal and makes it less of a slam-dunk over CDT/EDT. But it seems to me that it takes switching to the UDT perspective to even understand the nature of the problem. (Quite possibly this isn't adequately explained in MIRI's decision theory papers.)

Comment by wei_dai on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-16T07:45:48.881Z · score: 18 (7 votes) · EA · GW

(I really should ask you some questions about AI risk and policy/strategy/governance ("Policy" from now on). I was actually thinking a lot about that just before I got sidetracked by the SJ topic.)

  1. My understanding is that aside from formally publishing papers, Policy researchers usually communicate with each other via private Google Docs. Is that right? Would you find it useful to have a public or private forum for Policy discussion similar to the AI Alignment Forum? See also Where are people thinking and talking about global coordination for AI safety?
  2. In the absence of a Policy Forum, I've been posting Policy-relevant ideas to the Alignment Forum. Do you and other Policy researchers you know follow AF?
  3. In this comment I wrote, "Worryingly, it seems that there’s a disconnect between the kind of global coordination that AI governance researchers are thinking and talking about, and the kind that technical AI safety researchers often talk about nowadays as necessary to ensure safety." Would you agree with this?
  4. I'm interested in your thoughts on The Main Sources of AI Risk?, especially whether any of the sources/types of AI risk listed there are new to you, if you disagree with any of them, or if you can suggest any additional ones.

Comment by wei_dai on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-16T03:49:36.029Z · score: 36 (14 votes) · EA · GW

Founder effects and strong communal norms towards open discussion in the EA community to which I think most newcomers get pretty heavily inculcated.

This does not reassure me very much, because academia used to have strong openness norms but is quickly losing them or has already lost them almost everywhere, and it seems easy for founders to lose their influence (i.e., be pushed out or aside) these days, especially if they do not belong to one of the SJ-recognized marginalized/oppressed groups (and I think founders of EA mostly do not?).

Cause prioritization and consequentialism are somewhat incongruous with these things, since many of the things that can get people to be unfairly “canceled” are quite small from an EA perspective.

One could say that seeking knowledge and maximizing profits are somewhat incongruous with these things, but that hasn't stopped academia and corporations from adopting harmful SJ practices.

Heavy influence of and connection to philosophy selects for openness norms as well.

Again, it doesn't seem like openness norms offer enough protection against whatever social dynamic is operating.

Ability and motivation to selectively adopt the best SJ positions without adopting some of its most harmful practices.

Surely people in academia and business also had the motivation to avoid the most harmful practices, but perhaps didn't have the ability? Why do you think that EA has the ability? I don't see any evidence, at least from the perspective of someone not privy to private or internal discussions, that any EA person has a good understanding of the social dynamics driving adoption of the harmful practices, or (aside from you and a few others I know who don't seem to be close to the centers of EA) are even thinking about this topic at all.

Comment by wei_dai on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-14T11:14:10.255Z · score: 63 (28 votes) · EA · GW
  1. Social justice in relation to effective altruism

I've been thinking a lot about this recently too. Unfortunately I didn't see this AMA until now, but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.

I can see at least two ways this could happen to EA:

  1. Whatever social dynamic is responsible for this happening within SJ applies to EA as well, and EA will become like SJ in this regard for purely internal reasons. (In this case EA will probably come to have a different set of politically correct beliefs from SJ that one must profess faith in.)
  2. SJ comes to control even more of the cultural/intellectual "high grounds" (journalism, academia, K-12 education, tech industry, departments within EA organizations, etc.) than it already does, and EA will be forced to play by SJ's rules. (See second link above for one specific scenario that worries me.)

From your answers so far it seems like you're not particularly worried about this. If you have good reasons to not worry about this, please share them so I can move on to other problems myself.

(I think SJ is already actively doing harm because it pursues actions/policies based on these politically correct beliefs, many of which are likely wrong but can't be argued about. But I'm more worried about EA potentially doing this in the future because EAs tend to pursue more consequential actions/policies that will be much more disastrous (in terms of benefits foregone if nothing else) if they are wrong.)

Comment by wei_dai on Wei_Dai's Shortform · 2019-12-11T21:07:57.211Z · score: 15 (4 votes) · EA · GW

A post that I wrote on LW that is also relevant to EA: What determines the balance between intelligence signaling and virtue signaling?

Comment by wei_dai on Overview of Capitalism and Socialism for Effective Altruism · 2019-11-08T06:16:18.639Z · score: 3 (2 votes) · EA · GW

I’ve heard that the same thing is going on again with China today—Westerners think the Chinese government is efficient compared to democracy but really it isn’t.

Can you please explain why you think this, or link to some relevant resources? (For context, I came across this comment after posting Ways that China is surpassing the US on LW, and I'd like to hear more from your contrasting perspective.)

Comment by wei_dai on Book launch: "Effective Altruism: Philosophical Issues" (Oxford) · 2019-09-18T01:28:46.674Z · score: 12 (9 votes) · EA · GW

Here's a link to the Introduction in Google Books, so people can read that and see what the papers are about.

Comment by wei_dai on How much EA analysis of AI safety as a cause area exists? · 2019-09-17T16:53:25.900Z · score: 4 (4 votes) · EA · GW

A reason I consider what I described likely is not least that I find it more likely that future software systems will consist in a multitude of specialized systems with quite different designs, even in the presence of AGI, as opposed to most everything being done by copies of some singular AGI system.

Can you explain why this is relevant to how much effort we should put into AI alignment research today?

Comment by wei_dai on How much EA analysis of AI safety as a cause area exists? · 2019-09-15T23:25:13.303Z · score: 9 (3 votes) · EA · GW

as opposed to countless different agents, cooperating and competing with many (for those future agents) non-intentional factors influencing the outcomes.

I think there are good reasons, aside from the possibility of FOOM, to think this isn't likely.

Comment by wei_dai on Are we living at the most influential time in history? · 2019-09-13T01:39:58.860Z · score: 10 (3 votes) · EA · GW

The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time. [...] So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

Are you referring to average or marginal cost-effectiveness here? If "average", then this seems wrong. From the perspective of deciding whether to spend on longtermist causes now or later, what matters is the marginal cost-effectiveness of the best opportunities available now versus later. For example, it could well be the case that the next century is more influential than this century (has higher average cost-effectiveness), but because longtermism has gained a lot more popularity, all the highly cost-effective interventions will already have been done, so the money I've invested will have to be spent on marginal interventions that are less cost-effective than the marginal opportunities available today.

If you're referring to marginal cost-effectiveness instead, then your conception of "influentialness of a time" seems really counterintuitive. For example, suppose people in the next century manage to build a Singleton that locks in aligned values, thus largely preventing x-risks for all time, but because longtermism is extremely popular, there aren't any interventions with even medium cost-effectiveness left unfunded. It would be quite counterintuitive to say that that century has low "influentialness".

In any case, if the ultimate motivation for this discussion is to make the "spend now or later" decision, why not talk directly about "marginal cost-effectiveness"?
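
To make this concrete, here is a minimal toy model of the "give now vs. invest and give later" decision. This is only a sketch with purely illustrative numbers of my own (nothing from the original exchange), chosen to show that the comparison turns on marginal, not average, cost-effectiveness:

    # Toy "give now vs. invest and give later" comparison. All numbers are
    # illustrative assumptions, not estimates of anything real.

    growth = 1.02 ** 100        # what $1 invested for a century grows to (~7.2x)

    # This century: the best opportunities are still unfunded.
    marginal_ce_now = 10.0      # value per dollar of the best open opportunity

    # Next century: enormously valuable work gets done (high *average*
    # cost-effectiveness), but longtermism is popular and the best
    # interventions are already funded, so a marginal donor faces weak options.
    average_ce_later = 100.0    # value per dollar across everything funded then
    marginal_ce_later = 1.0     # value per dollar of the best *unfunded* option

    value_if_give_now = marginal_ce_now            # 10.0
    value_if_invest = growth * marginal_ce_later   # ~7.2

    print(value_if_give_now, value_if_invest)
    # Giving now wins even though the later century is more "influential" on
    # the average measure -- average_ce_later never enters the comparison.

On these made-up numbers, investing loses despite a century of compounding, and the later century's high average figure plays no role in the calculation.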

Comment by wei_dai on [Link] Progress Studies (Jasmine Wang) · 2019-09-12T01:56:39.637Z · score: 5 (4 votes) · EA · GW

Seems to be missing any mention of Growth Economics.

Comment by wei_dai on What should Founders Pledge research? · 2019-09-11T22:08:27.649Z · score: 18 (6 votes) · EA · GW

Not sure if this counts, but I did make a critique that Open Phil seemed to have evaluated MIRI in a biased way relative to OpenAI.

Comment by wei_dai on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T13:35:51.692Z · score: 1 (1 votes) · EA · GW

As I mentioned earlier, I am uncertain about meta-ethics, so I was trying to craft a sentence that would be true under a number of different meta-ethical theories. I wrote "should" instead of "it is rational to" because under moral realism that "should" could be interpreted as a "moral should", while under anti-realism it could be interpreted as an "epistemic should". (I also do think there may be something in common between moral and epistemic normativity, but that's not my main motivation.) Your suggestion, “Utilitarianism endorses replacing existing humans with these new beings.”, would avoid this issue, but the main reason I wrote my original comment was to create a thought experiment where concerns about moral uncertainty and contractarianism clearly do not apply, and that phrasing doesn't really convey this, since one could say it even in scenarios where moral uncertainty and contractarianism do apply.