How Do AI Timelines Affect Giving Now vs. Later? 2021-08-03T03:36:43.356Z
Metaculus Questions Suggest Money Will Do More Good in the Future 2021-07-22T01:56:48.028Z
Reverse-Engineering the Philanthropic Discount Rate 2021-07-09T18:32:15.150Z
A Comparison of Donor-Advised Fund Providers 2021-04-05T20:13:46.727Z
Asset Allocation and Leverage for Altruists with Constraints 2020-12-14T20:48:26.789Z
Uncorrelated Investments for Altruists 2020-11-23T23:03:23.933Z
If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant 2020-11-23T22:47:32.514Z
Donor-Advised Funds vs. Taxable Accounts for Patient Donors 2020-10-19T20:38:23.801Z
The Risk of Concentrating Wealth in a Single Asset 2020-10-18T17:15:17.651Z
MichaelDickens's Shortform 2020-09-24T00:01:24.005Z
"Disappointing Futures" Might Be As Important As Existential Risks 2020-09-03T01:15:50.466Z
Giving Now vs. Later for Existential Risk: An Initial Approach 2020-08-29T01:04:34.488Z
Should We Prioritize Long-Term Existential Risk? 2020-08-20T02:23:43.393Z
The Importance of Unknown Existential Risks 2020-07-23T19:09:56.031Z
Estimating the Philanthropic Discount Rate 2020-07-03T16:58:54.771Z
How Much Leverage Should Altruists Use? 2020-01-07T04:25:31.492Z
How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? 2019-02-03T01:11:09.991Z
Should Global Poverty Donors Give Now or Later? An In-Depth Analysis 2019-01-22T04:45:56.500Z
Why Do Small Donors Give Now, But Large Donors Give Later? 2018-10-28T01:51:56.710Z
Where Some People Donated in 2017 2018-02-11T21:55:09.730Z
Where I Am Donating in 2016 2016-11-01T04:10:02.389Z
Dedicated Donors May Not Want to Sign the Giving What We Can Pledge 2016-10-30T03:26:44.215Z
Altruistic Organizations Should Consider Counterfactuals When Hiring 2016-09-11T04:19:39.164Z
Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering 2016-08-26T02:08:53.190Z
Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) 2016-06-10T21:35:50.236Z
A Complete Quantitative Model for Cause Selection 2016-05-18T02:17:28.769Z
Quantifying the Far Future Effects of Interventions 2016-05-18T02:15:07.240Z
GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics 2016-05-17T01:51:15.218Z
On Priors 2016-04-26T22:35:14.359Z
How Should a Large Donor Prioritize Cause Areas? 2016-04-25T20:46:38.304Z
Expected Value Estimates You Can (Maybe) Take Literally 2016-04-06T15:11:59.359Z
Are GiveWell Top Charities Too Speculative? 2015-12-21T04:05:07.675Z
More on REG's Room for More Funding 2015-11-16T17:31:40.493Z
Cause Selection Blogging Carnival Conclusion 2015-10-05T20:16:43.945Z
Charities I Would Like to See 2015-09-20T15:22:43.083Z
My Cause Selection: Michael Dickens 2015-09-15T23:29:40.701Z
On Values Spreading 2015-09-11T03:57:55.148Z
Some Writings on Cause Selection 2015-09-08T21:56:01.033Z
EA Blogging Carnival: My Cause Selection 2015-08-16T01:07:22.005Z
Why Effective Altruists Should Use a Robo-Advisor 2015-08-04T03:37:13.789Z
Stanford EA History and Lessons Learned 2015-07-02T03:36:56.688Z
How We Run Discussions at Stanford EA 2015-04-14T16:36:05.363Z
Meetup : Stanford THINK 2014-10-23T02:10:42.641Z


Comment by MichaelDickens on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-05T14:58:12.067Z · EA · GW

The reason I made the model only have one thing to spend on pre-AGI is not because it's realistic (which it isn't), but because it makes the model more tractable. I was primarily interested in answering a simple question: do AI timelines affect giving now vs. later?

Comment by MichaelDickens on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-05T02:36:57.024Z · EA · GW

I don't have any well-formed opinions about what the post-AGI world will look like, so I don't think it's obvious that logarithmic utility of capital is more appropriate than simply trying to maximize the probability of a good outcome. The way you describe it is how my model worked originally, but I changed it because I believe the new model gives a stronger result even if the model is not necessarily more accurate. I wrote in a paragraph buried in Appendix B:

In an earlier draft of this essay, my model did not assign value to any capital left over after AGI emerges. It simply tried to minimize the probability of extinction. This older model came to the same basic conclusion—namely, shorter timelines mean we should spend faster. (The difference was that it spent a much larger percentage of the budget each decade, and under some conditions it would spend 100% of the budget at a certain point.[5]) But I was concerned that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research—obviously if that's the only thing we can spend money on, then we should spend lots of money on it. The new model allows for spending money on other things but still reaches the same qualitative conclusion, which is a stronger result.

Comment by MichaelDickens on What is the role of public discussion for hits-based Open Philanthropy causes? · 2021-08-04T22:15:34.569Z · EA · GW

It seems to me that the problem isn't just with Open Phil-funded speculative orgs, but with all speculative orgs.

To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.

I think it's just as unclear how someone inside Open Phil could advocate for those. Open Phil might have access to some private information, but that won't help much with something like estimating the EV of a highly speculative nonprofit.

Comment by MichaelDickens on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-04T21:21:05.769Z · EA · GW

Some evidence in this direction: Eliezer Yudkowsky recently wrote on a Facebook post:

This is your regular reminder that, if I believe there is any hope whatsoever in your work for AGI alignment, I think I can make sure you get funded.

This implies that all the really good funding opportunities Eliezer is aware of have already been funded, and any that appear can get funded quickly. Eliezer is not Nick Bostrom, but they're in similar positions.

(Note: Eliezer's Facebook post is publicly viewable, so I think reposting this quote here is ok from a privacy standpoint.)

Comment by MichaelDickens on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-04T21:04:20.029Z · EA · GW

I think we are falling for the double illusion of transparency: I misunderstood you, and the thing I thought you were saying was even further off than what you thought I thought you were saying. I wasn't even thinking about capacity-building labor as analogous to investment. But now I think I see what you're saying, and the question of laboring on capacity vs. direct value does seem analogous to spending vs. investing money.

At a high level, you can probably model labor in the same way as I describe in OP: you spend some amount of labor on direct research, and the rest on capacity-building efforts that increase the capacity for doing labor in the future. So you can take the model as is and just change some numbers.

Example: If you take the model in OP and assume we currently have an expected (median) 1% of required labor capacity, a rate of return on capacity-building of 20%, and a median AGI date of 2050, then the model recommends exclusively capacity-building until 2050, then spending about 30% of each decade's labor on direct research.

One complication is that this super-easy model treats labor as something that only exists in the present. But in reality, if you have one laborer, that person can work now and can also continue working for some number of decades. The super-easy model assumes that any labor spent on research immediately disappears, when it would be more accurate to say that research labor earns a 0% return (or let's say a -3% return, to account for people retiring or quitting) while capacity-building labor earns a 20% return (or whatever the number is).

This complication is kind of hard to wrap my head around, but I think I can model it with a small change to my program, changing the line in run_agi_spending that reads

capital *= (1 - spending_schedule[y]) * (1 + self.investment_return)**10

to

research_return = -0.03
capital *= spending_schedule[y] * ((1 + research_return)**10) + (1 - spending_schedule[y]) * ((1 + self.investment_return)**10)

In that case, the model recommends spending 100% on capacity-building for the next three decades, then about 30% per decade on research from 2050 through 2080, and then spending almost entirely on capacity-building for the rest of time.

But I'm not sure I'm modeling this concept correctly.

Comment by MichaelDickens on How Do AI Timelines Affect Giving Now vs. Later? · 2021-08-04T14:39:02.826Z · EA · GW

That's an interesting question, and I agree with your reasoning on why it's important. My off-the-cuff thoughts:

Labor tradeoffs don't work in the same way as capital tradeoffs because there's no temporal element. With capital, you can spend it now or later, and if you spend later, you get to spend more of it. But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. This is something EAs have already written a lot about, and it's probably worth more attention overall than the question of giving (money) now vs. later, but I believe the latter question is more neglected and has more low-hanging fruit.

The question of optimal giving rate might be irrelevant if, say, we're confident that the optimal rate is somewhere above 1%, we don't know where, but it's impossible to spend more than 1% due to a lack of funding opportunities. But I don't think we can be that confident that the optimal spending rate is that high. And even if we are, knowing the optimal rate still matters if you expect that we can scale up work capacity in the future.

I'd guess >50% chance that the optimal spending rate is faster than the longtermist community[1] is currently spending, but I also expect the longtermist spending rate to increase a lot in the future due to increasing work capacity plus capital becoming more liquid—according to Ben Todd's estimate, about half of EA capital is currently too illiquid to spend.

[1] I'm talking about longtermism specifically and not all EA because the optimal spending rate for neartermist causes could be pretty different.

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-31T03:00:00.645Z · EA · GW

As of yesterday, my position on mission hedging was that it was probably crowded out by other investments with better characteristics[1], and therefore not worth doing. But I didn't have any good justification for this, it was just my intuition. After messing around with the spreadsheet in the parent comment, I am inclined to believe that the optimal altruistic portfolio contains at least a little bit of mission hedging.

Some credences off the top of my head:

  • 70% chance that the optimal portfolio contains some mission hedging
  • 50% chance that the optimal portfolio allocates at least 10% to mission hedging
  • 20% chance that the optimal portfolio allocates 100% to mission hedging

[1] See here for more on what investments I think have good characteristics. More precisely, my intuition was that the global market portfolio (GMP) + mission hedging was probably a better investment than pure GMP, but a more sophisticated portfolio that included GMP plus long/short value and momentum had good enough expected return/risk to outweigh the benefits of mission hedging.

EDIT: I should add that I think it's less likely that AI mission hedging is worth it on the margin, given that (at least in my anecdotal experience) EAs already tend to overweight AI-related companies. But the overweight is mostly incidental—my impression is EAs tend to overweight tech companies in general, not just AI companies. So a strategic mission hedger might want to focus on companies that are likely to benefit from AI, but that don't look like traditional tech companies. As a basic example, I'd probably favor Nvidia over Google or Tesla. Nvidia is still a tech company so maybe it's not an ideal example, but it's not as popular as Google/Tesla.

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-30T19:34:38.258Z · EA · GW

As an extension to this model, I wrote a solver that finds the optimal allocation between the AI portfolio and the global market portfolio. I don't think Google Sheets has a solver, so I wrote it in LibreOffice. Link to download

I don't know if the spreadsheet will work in Excel, but if you don't have LibreOffice, it's free to download. I don't see any way to save the solver parameters that I set, so you have to re-create the solver manually. Here's how to do it in LibreOffice:

  1. Go to "Tools" -> "Solver..."
  2. Click "Options" and change Solver Engine to "LibreOffice Swarm Non-Linear Solver"
  3. Set "Target cell" to D32 (the green-colored cell)
  4. Set "By changing cells" to E7 (the blue-colored cell)
  5. Set two limiting conditions: E7 >= 0 and E7 <= 1
  6. Click "Solve"

Given the parameters I set, the optimal allocation is 91.8% to the global market portfolio and 8.2% to the AI portfolio. The parameters were fairly arbitrary, and it's easy to get allocations higher or lower than this.

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-29T03:08:54.861Z · EA · GW

To be clear, my model is exactly the same as your model, I just changed one of the parameters—I changed the AI portfolio's overall expected return from 4.7% to 1.3%.

It's not intuitively obvious to me whether, given the 1.3%-return assumption, the optimal portfolio contains more AI than the global market portfolio. I know how I'd write a program to find the answer, but it's complicated enough that I don't want to do it right now.

(The way you'd do it is to model the correlation between the AI portfolio and the market, and set your assumptions such that the optimal value-neutral portfolio (given the two investments of "AI stocks" and "all other stocks") equals the global market portfolio. Then write a utility function that assigns more utility to money in the short-timelines world and maximize that function where the independent variable is % allocation to each portfolio. You can do this with Python's scipy.optimize, or any other similar library.)
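The procedure described in that parenthetical can be sketched in a few lines. All parameter values below are illustrative assumptions chosen so the optimum is interior (not the actual model's numbers), and I'm using the geometric ≈ arithmetic - variance/2 approximation rather than exact log utility:

```python
from scipy.optimize import minimize_scalar

# Sketch of the procedure described above. Every number here is an
# illustrative assumption, not a parameter from the actual model.
p_short = 0.5                                # P(short-timelines world)
util_weight = {"short": 10.0, "long": 1.0}   # money is 10x more valuable if timelines are short
mu_ai = {"short": 0.08, "long": 0.013}       # AI portfolio arithmetic mean by world
mu_mkt = 0.05                                # market arithmetic mean in both worlds
sd_ai, sd_mkt, corr = 0.30, 0.15, 0.7

def neg_weighted_geo(a):
    """Negative of the probability- and utility-weighted geometric return
    of a portfolio with fraction a in AI stocks, using geo ~ arith - var/2."""
    total = 0.0
    for world, p in (("short", p_short), ("long", 1 - p_short)):
        mu = a * mu_ai[world] + (1 - a) * mu_mkt
        var = (a * sd_ai)**2 + ((1 - a) * sd_mkt)**2 + 2 * a * (1 - a) * corr * sd_ai * sd_mkt
        total += p * util_weight[world] * (mu - var / 2)
    return -total

res = minimize_scalar(neg_weighted_geo, bounds=(0, 1), method="bounded")
print(round(res.x, 2))  # ~0.30 with these made-up numbers
```

With these assumptions the optimizer lands on roughly a 30% AI allocation; changing the world probabilities, utility weights, or return assumptions moves it around a lot, which is the point of writing it as a solver.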

EDIT: I wrote a spreadsheet to do this, see this comment

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-29T02:58:17.988Z · EA · GW

50 randomly-chosen stocks are much better diversified than 50 stocks that are specifically selected for having a high correlation to a particular outcome (e.g., AI development).

This paper provides some more in-depth explanation of what I was talking about with the math. It's fairly technical, but it doesn't use any math beyond high school algebra/statistics.

The key point I was making is that, if markets are efficient, then you shouldn't expect a 5% (or even 4.7%) geometric mean return from the AI portfolio. Instead, you should expect more like 1.3%. I might have messed up some of the details, but I'm confident that the geometric return for an un-diversified portfolio in an efficient market is meaningfully lower than the global market return. This is not to say that mission hedging is a bad idea, just that this is an important fact to take into account.

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-26T23:22:26.648Z · EA · GW

Thanks for making this model extension!

I believe the most important downside to a mission hedging portfolio is that it's poorly diversified, and thus experiences much more volatility than the global market portfolio. More volatility reduces the geometric return due to volatility drag.

Example case:

  • Stocks follow geometric Brownian motion.
  • AI portfolio has the same arithmetic mean return as the global market portfolio.
  • Market standard deviation is 15%, AI portfolio standard deviation is 30%.
  • Market geometric mean return is 5%.

In geometric Brownian motion, arithmetic return = geometric return + stdev^2 / 2. Therefore, the geometric mean return of the AI portfolio is 5% + 15%^2/2 - 30%^2/2 = 1.6%. If we still assume a 20% return to AI stocks in the short-timelines scenario, that gives 1.3% return in the long-timelines scenario. And the annual return thanks to mission hedging is -1.1%.

(I'm only about 60% confident that I set up those calculations correctly. When to use arithmetic vs. geometric returns can be confusing.)
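The volatility-drag arithmetic above can be checked directly, assuming the GBM approximation arithmetic ≈ geometric + stdev^2/2 and the same numbers as the example:

```python
# Volatility drag under geometric Brownian motion:
# arithmetic mean ~ geometric mean + stdev^2 / 2
market_geo, market_sd, ai_sd = 0.05, 0.15, 0.30

market_arith = market_geo + market_sd**2 / 2   # implied arithmetic mean of the market
ai_geo = market_arith - ai_sd**2 / 2           # same arithmetic mean, double the volatility
print(round(ai_geo, 3))  # 0.016, i.e. the ~1.6% in the example
```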

Of course, you could also tweak the model to make mission hedging look better. For instance, it's plausible that in the short-timeline world, money is 100x more valuable instead of 10x, in which case mission hedging is equivalent to a 24% higher return even with my more pessimistic assumption for the AI portfolio's return.

Comment by MichaelDickens on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-26T22:41:01.575Z · EA · GW

It seems to me that for mission hedging to work, there needs to be a strong positive relationship between production and stock price. That is, when (say) a fossil fuel company produces more oil, its stock price goes up. That might happen, but it might not. Several things need to happen:

  1. The increased quantity is not offset by a decrease in price
  2. The increased revenue translates into higher profit (this might not happen if, e.g., increased revenue induces more competition, or induces increased costs for the oil company)
  3. Higher profit translates into a higher stock price

Step 3 seems very likely to happen in the long run, but steps 1 and 2 seem more uncertain to me, and I don't have a great understanding of the relevant economics. Do we have good reason to expect increased production to translate into stock returns? Or do we at least understand the circumstances under which it will or will not translate?

(Alternatively, we could look at the relationship between, say, oil production and the price of oil futures. This is a simpler relationship, but I'd guess the two numbers are basically uncorrelated. They will move together if demand changes, and will move oppositely if supply changes.)

Comment by MichaelDickens on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T17:19:37.522Z · EA · GW

It was an accident. I should have made a post, not a question.

Comment by MichaelDickens on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T17:14:48.527Z · EA · GW

I mistakenly submitted this as a question instead of as a post. Is there any way to convert it to a post?

Comment by MichaelDickens on What important questions are missing from Metaculus? · 2021-06-09T21:52:04.486Z · EA · GW

The question is intended to look at tail risk associated with stock markets shutting down. Transformative AI may or may not constitute such a risk; for example, the AI might shut down the stock market because it's going to do something far better with people's money, or it might shut down the market because everyone is turned into paperclips. So I think it should be unconditional.

Comment by MichaelDickens on Looking for more 'PlayPumps' like examples · 2021-05-28T18:18:28.874Z · EA · GW

In a number of cases, this reduction in hospital admissions and emergency room visits resulted in a cost savings in excess of $10,130, the cost of the average wish. In other words, Make-A-Wish helped, and helped in a cost-effective way.

This doesn't follow. The $10,130 cost savings went into hospital budgets, not into buying bednets, so it doesn't particularly matter that this money was saved.

Also, it seems implausible that Make-A-Wish could meaningfully reduce hospital admissions, so I'm inclined to disbelieve this study.

Comment by MichaelDickens on What important questions are missing from Metaculus? · 2021-05-27T16:07:21.887Z · EA · GW

Just to be clear, you specifically mean to exclude not-yet-EAs who set up DAFs in, say, 2025?

Yes, the intention is to predict the maximum length of time that foundations and DAFs created now (or before now) can continue to exist.

It might be interesting to have forecasts on the amount of resources expected to be devoted to EA causes in the future [...]


Comment by MichaelDickens on What important questions are missing from Metaculus? · 2021-05-26T14:57:56.368Z · EA · GW

I have a doc on my computer with some notes on Metaculus questions that I want to see, but either haven't gotten around to writing up yet, or am not sure how to operationalize. Feel free to take any of them.

Giving now vs. later parameter values

  • "In 2030, I personally will either donate at least 10% of my income to an EA cause or will work directly on an EA cause full time"
    • attempting to measure value drift
    • or maybe ask about Jeff Kaufman or somebody like that because he's public about his donations
      • or make a list of people, and ask how many of them will fulfill the above criteria
  • "According to the EA Survey, what percent of people who donated at least 10% in 2018 will donate at least 10% in 2023?"
    • Not sure if it's possible to derive this info
    • According to David Moss in Rethink Priorities Slack, it's probably not feasible to get data on this
  • "When will the Founders Pledge's long-term investment fund make its last grant?"
  • "When the long-term investment fund run by Founders Pledge ceases to make grants, will it happen because the fund is seized by an outside actor?"
    • by a government, etc.
  • "When will the longest-lived foundation or DAF owned by an EA make its last grant?"
    • EA defined as someone who identifies as an EA as of this prediction
    • the DAF must already exist and contain nonzero dollars
  • question about Rockefeller/Ford/Gates foundation longevity
  • best achievable QALYs per dollar in 2030 according to ACE, etc.
  • "Will the US stock market close by 2120?"
    • A stock market is considered to have closed if all public exchanges cease trading for at least one year
    • Could also ask about any developed market, but I think it makes most sense to ask about a single country

Open research questions

  • "By 2040, there will be a broadly accepted answer on how to construct a rank ordering of possible worlds where some of the worlds have a nonzero probability of containing infinite utility."

    • "broadly accepted" doesn't mean everyone agrees with its prescriptions, but at least people agree that it's internally consistent and largely aligns with intuitions on finite-utility cases
  • "In 2121, it will be broadly agreed that, all things considered, donations to GiveDirectly were net positive."

    • attempt at addressing cluelessness
    • "broadly agreed" is hard to define in a useful way. it's already broadly agreed right now, in spite of cluelessness
      • maybe "broadly agreed among philosophers who have written about cluelessness" but this might limit your sample to like 4 people
  • "By 2040, there will be a broadly accepted answer on what prior to use for the lifespan of humanity."

    • alternate formulation: Toby Ord and Will MacAskill both agree (to some level of confidence) on the correct prior
  • "By 3020, a macroscopic object will be observed traveling faster than the speed of light."

    • relevant to Beyond Astronomical Waste


  • "What annual real return will be realized by the Good Ventures investment portfolio 2022-2031?"
    • Can be calculated by Form 990-PF, Schedule B, Part II, which gives the gain of any assets held
    • Might make more sense to look at Dustin Moskowitz's net worth
      • But that doesn't account for spending
  • "Will the momentum factor have a positive return in the United States 2022-2031?"
    • Fama/French 12-2 momentum over a total market index
    • As measured by "Momentum Factor (Mom)" on Ken French Data Library
    • Gross of costs
  • "Will the Fama-French value factor (using E/P) be positive in the United States 2022-2031?"
    • Fama-French value over a total market index (not S&P 500), measured with E/P, not B/P
    • French "Portfolios Formed on Earnings/Price"
    • Factor is considered positive if the low 30% portfolio (equal-weighted) outperforms the high 30% portfolio.
    • E/P chosen due to being less subject to company structure than B/P
  • "What annualized real return will be obtained by the top decile of momentum stocks in the United States 2022-2031?"
    • same definitions as previous question
  • "What will be the magnitude of the S&P 500's largest drawdown 2022-2031?"
    • magnitude = percent decline from peak to trough

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2021-05-10T21:11:14.113Z · EA · GW

Where are you getting that info? I thought Fidelity Charitable had no distribution requirement. Distribution requirement is definitely relevant if there is one.

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2021-04-30T21:24:27.973Z · EA · GW

Unless you're putting a lot of work into optimizing your DAF investments (like I describe here), Fidelity is pretty much just as good as Vanguard.

Comment by MichaelDickens on Ramiro's Shortform · 2021-04-06T16:01:51.029Z · EA · GW

If wild animals have bad lives on net, then indiscriminately increasing wild animal populations is bad under any plausible theory of population ethics.

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2021-04-06T15:56:50.523Z · EA · GW

Thanks for sharing your experience with Vanguard! That aligns with anecdotes I've heard about Vanguard's brokerage service.

  • Are there any simple ways of making investments in these accounts that offer 2x leverage or more? Are there things here that you'd recommend?

I just published something about DAF investing strategies; in one section, I talk about leveraged ETFs. I believe the only way to invest with leverage in a DAF is through a leveraged ETF or mutual fund, although I've heard conflicting things about what the actual legal requirements are. In general, I don't think leveraged ETFs are good investments.

  • Do you have an intuition around when one should make a Donor-Advised Fund?

If you want to use leverage, probably never. (Or just use it to convert stock into cash for donations, as akrolsmir described.) Otherwise, you want to have at least $10,000 or so; below that, the minimum fee will eat too large a % of your assets each year. (Schwab and Fidelity both have a $100 minimum fee.)

  • How easy is it for others to invest in one's Donor-Advised Fund?

It's definitely possible. I personally don't have my own DAF, I use my parents' DAF. I'm a full authorized user on the account, which means I had to connect my Fidelity account to the DAF. If you don't care about managing anything and just want to donate to the DAF, I would think that should be pretty easy, but I haven't tried it. I think it should be as simple as writing a check to Fidelity Charitable with a note that the money is for that particular DAF.

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2021-04-06T15:32:20.568Z · EA · GW

Yeah this is for US only. I actually thought I had said that in the post, but looks like I forgot to! I'll edit it.

Comment by MichaelDickens on MathiasKirkBonde's Shortform · 2021-03-09T16:06:24.519Z · EA · GW

I think a lot of people feel this way, and it's something I've experienced. I don't have any great solutions but I generally do two things:

  1. Set reasonable expectations. The application process has a lot of randomness, and almost all applications will get ignored even if they're good, so I should expect any particular application to have a very low chance of getting a response.
  2. Spend less time on individual applications; apply to a lot of things; use commonalities across applications to copy/paste things I wrote on previous applications.

Comment by MichaelDickens on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2021-01-12T18:55:04.075Z · EA · GW

The stock market should grow faster than GDP in the long run. Three different simple arguments for this:

  1. This falls out of the commonly-used Ramsey model. Specifically, because people discount the future, they will demand that their investments give a better return than the general economy.
  2. Corporate earnings should grow at the same rate as GDP, and stock price should grow at the same rate as earnings. But stock investors also earn dividends, so your total return should exceed GDP in the long run. (The reason this works is because in aggregate, investors spend the dividends rather than re-investing them.)
  3. Stock returns are more volatile than economic growth, so they should pay a risk premium even if they don't have a higher risk-adjusted return.
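Argument 2 is just arithmetic. A minimal sketch, with assumed illustrative numbers:

```python
# Sketch of argument 2 with assumed illustrative numbers:
# total stock return = price growth + dividend yield.
gdp_growth = 0.03        # real GDP growth; earnings and prices track this
dividend_yield = 0.02    # aggregate dividend yield, spent rather than reinvested
price_growth = gdp_growth
total_return = price_growth + dividend_yield
print(total_return)      # exceeds GDP growth by the dividend yield
```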

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2021-01-11T19:43:48.828Z · EA · GW

(These numbers are actually more similar than I expected—I would have predicted the top-10% portfolio to have something like 5x more value factor loading than the top-half portfolio, not 2x.)

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2021-01-11T19:01:35.663Z · EA · GW

I'm not sure how to calculate it precisely, I think you'd want to run a regression where the independent variable is the value factor and the dependent variable is the fund or strategy being considered. But roughly speaking, a Vanguard value fund holds the 50% cheapest stocks (according to the value factor), while QVAL and IVAL hold the 5% cheapest stocks, so they are 10x more concentrated, which loosely justifies a 10x higher expense ratio. Although 10x higher concentration doesn't necessarily mean 10x more exposure to the value factor, it's probably substantially less than that.

I just ran a couple of quick regressions using Ken French data, and it looks like if you buy the top half of value stocks (size-weighted) while shorting the market, that gives you 0.76 exposure to the value factor, and buying the top 10% (equal-weighted) while shorting the market gives you 1.3 exposure (so 1.3 is the slope of a regression between that strategy and the value factor). Not sure I'm doing this right, though.
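The regression I'm describing is just an OLS slope. Here's a sketch on synthetic data; the return series are made up, only the estimator matters:

```python
import numpy as np

# Sketch of the factor regression described above, on synthetic data.
rng = np.random.default_rng(0)
hml = rng.normal(0.003, 0.03, 240)                # stand-in for monthly value-factor returns
strategy = 1.3 * hml + rng.normal(0, 0.01, 240)   # strategy with a true loading of 1.3

# OLS slope = cov(strategy, factor) / var(factor)
loading = np.cov(strategy, hml)[0, 1] / np.var(hml, ddof=1)
print(round(loading, 2))  # recovers a loading near 1.3
```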

To look at it another way, the top-half portfolio described above had a 5.4% annual return (gross), while the top-10% portfolio returned 12.8% (both had similar Sharpe ratios). Note that most of this difference comes from the fact that the first portfolio is size-weighted and the second is equal-weighted; I did it that way because most big value funds are size-weighted, while QVAL/IVAL are equal-weighted.

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2021-01-11T18:28:24.445Z · EA · GW

That could help. "Standard" trendfollowing rebalances monthly because it's simple, frequent enough to capture most changes in trends, but infrequent enough that it doesn't incur a lot of transaction costs. But there could be more complicated approaches that do a better job of capturing trends without incurring too many extra costs. One idea I've considered is to look at buy-side signals monthly but sell-side signals daily, so if the market switches from a positive to negative trend, you'll sell the following day, but if it switches back, you won't buy until the next month. On the backtests I ran, it seemed to work reasonably well.
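A toy version of that buy-monthly/sell-daily idea; the window lengths and prices are placeholders, not the actual backtest parameters:

```python
def trend_positions(prices, sma_window=200, month_len=21):
    """Sketch of the asymmetric rule above: check the trend signal every day
    for exits, but only once a month for entries. Returns a 0/1 position per day."""
    positions = []
    holding = True
    for t, price in enumerate(prices):
        if t >= sma_window:
            sma = sum(prices[t - sma_window:t]) / sma_window
            in_uptrend = price > sma
            if holding and not in_uptrend:
                holding = False              # sell signal: act the same day
            elif not holding and in_uptrend and t % month_len == 0:
                holding = True               # buy signal: act only at a month boundary
        positions.append(1 if holding else 0)
    return positions

# Toy series: an uptrend, then a crash; the rule exits as soon as
# price drops below its moving average.
print(trend_positions([10, 10, 10, 10, 10, 11, 12, 1, 1, 1], sma_window=5))
# [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
```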

These were the results of a backtest I ran using the Ken French data on US stock returns 1926-2018:

            CAGR  Stdev  Ulcer  Trades/Yr
B&H          9.5   16.8   23.0     n/a
Monthly      9.3   11.7   14.4     1.4
Daily       10.7   11.0    9.6     5.1
Sell-Daily   9.7   10.3    9.2     2.3
Buy-Daily   10.6   12.3   12.3     1.8

("Ulcer" is the ulcer index, which IMO is a better measure of downside risk than standard deviation. It basically tells you the frequency and severity of drawdowns.)
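For reference, the ulcer index only takes a few lines to compute:

```python
import numpy as np

def ulcer_index(prices):
    """Ulcer index: root-mean-square of percentage drawdowns from the
    running peak. Unlike standard deviation, it only penalizes declines,
    and it penalizes deep, prolonged drawdowns more than brief dips."""
    peaks = np.maximum.accumulate(prices)
    drawdowns_pct = 100.0 * (prices - peaks) / peaks   # all values <= 0
    return float(np.sqrt(np.mean(drawdowns_pct ** 2)))
```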

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2021-01-11T17:53:12.816Z · EA · GW

The AlphaArchitect funds (except for VMOT) are long-only, so they're going to be pretty correlated with the market. The idea is you buy those funds (or something similar) while simultaneously shorting the market.

And I've heard it claimed that assets in general tend to be more correlated during drawdowns.

This is true. Factors aren't really asset classes, but it's still true for some factors. This AQR paper looked at the performance of a bunch of diversifiers during drawdowns and found that trendfollowing provided good returns, as did "styles", by which they mean a long/short factor portfolio consisting of the value, momentum, carry, and quality factors. I'd have to do some more research to say how each of those four factors has tended to perform during drawdowns, so take this with a grain of salt, but IIRC:

  • value and carry tend to perform somewhat poorly
  • quality tends to perform well
  • momentum tends to perform well during drawdowns, but then performs really badly when the market turns around (e.g., this happened in 2009)

I'm talking about long/short factors here, so e.g., if the value factor has negative performance, that means long-only value stocks perform worse than the market.

Also, short-term trendfollowing (e.g., 3-month moving average) tends to perform better during drawdowns than long-term trendfollowing (~12 month moving average), but it has worse long-run performance, and both tend to beat the market, so IMO it makes more sense to use long-term trendfollowing.

Of course, we can't know how this will play out in the future. For example, the 2020 drawdown happened much more quickly than usual—the market dropped around 30% in a month, as opposed to, say, the 2000-2002 drawdown, where the market dropped 50% over the course of two years. Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance, although it happened to perform reasonably well this year.

There's a lot more I could say about the implementation of trendfollowing strategies, but I don't want to get too verbose so I'll stop there.

Comment by MichaelDickens on Where are you donating in 2020 and why? · 2021-01-04T18:10:35.729Z · EA · GW

Monthly is fine; it's probably better for charities. I personally donate annually because it's a lot simpler. I donate appreciated stock, and transferring stock is a substantial amount of work.

Comment by MichaelDickens on Big List of Cause Candidates · 2020-12-26T05:19:14.559Z · EA · GW

At the risk of being overly self-promotional, I have written a few posts on cause candidates that I don't see listed here.

Another potential cause area that's not listed: reducing value drift (e.g., this post).

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2020-12-14T18:14:40.173Z · EA · GW

I only skimmed the linked source, but my rough impression is that I'm fairly bearish on art, mainly because there's no expectation that it will appreciate. The linked article doesn't really present evidence to the contrary—the only relevant bit I saw was a graph showing appreciation from 2000 to 2010. Ten years of appreciation is almost meaningless; I'd want to see more like 50 years of data showing an asset class has positive real return.

Perhaps it would be worth buying art if you have some reason to believe you can outperform the market at predicting which pieces will be more valuable in the future. The art market is probably less efficient than more liquid financial markets, but on priors I wouldn't expect to be able to pick "winning" art pieces.

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2020-12-07T23:36:06.541Z · EA · GW

That's an interesting idea; I'm thinking about the best way to model it. I think what you'd want to do is calculate the safe withdrawal rate for different portfolios and see which is best. The problem is, we don't have enough historical data to get good results, so we'd have to do simulations. But those simulations couldn't assume that returns follow a log-normal distribution, because the fact that assets tend to experience big drawdowns substantially affects the safe withdrawal rate.
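As a sketch of what such a simulation could look like: bootstrap-resampling historical annual returns, rather than drawing from a fitted log-normal, preserves the fat left tail that drives safe-withdrawal-rate results. The function name and parameters here are illustrative, not a definitive implementation:

```python
import numpy as np

def survival_rate(annual_returns, withdraw, years=50, n_sims=2000, seed=0):
    """Estimate how often a portfolio survives `years` of withdrawals at a
    fixed rate `withdraw` (as a fraction of starting wealth), by
    bootstrapping from a sample of historical annual returns."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(n_sims):
        wealth = 1.0
        for r in rng.choice(annual_returns, size=years):
            wealth = wealth * (1 + r) - withdraw
            if wealth <= 0:
                break          # portfolio depleted
        else:
            survived += 1      # made it through all `years`
    return survived / n_sims
```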

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2020-12-04T18:38:21.484Z · EA · GW

In my experience, when the market is down a lot, the payouts would increase as a percentage, because donors would not want to have inefficient cuts in charities.

This is a good point that I hadn't thought of. This would still reduce donations overall, right? Because if people donate a larger % when markets are down, that means they have less money to donate later. It's not obvious to me off hand how this should be modeled, but that's something to think about.

I do agree that a fully market-neutral position is probably not optimal in practice. That only makes sense if you assume leverage costs the risk-free rate, you can get however much leverage you want, and you can rebalance continuously with no transaction costs. If you impose more realistic restrictions, you probably want to aim for a higher expected return with low fees rather than going for pure market neutral. I'm writing a new essay about this right now. According to my new model, the optimal allocation under realistic costs and restrictions is something like 200% long, 50% short. In my previous essay on leverage, I do think I overstated the value of reducing correlation rather than increasing expected return.

Comment by MichaelDickens on Where are you donating in 2020 and why? · 2020-11-25T20:06:43.719Z · EA · GW

There's a good chance I will give to the long-term investment fund once it's up and running, depending on how much I like its investment portfolio. I think the optimal altruistic portfolio (on the margin) looks pretty weird, and they might not want to invest like that. (It might be entirely rational for the long-term investment fund not to invest in a way that looks too weird, because that could make it harder to attract donations.)

EDIT: I realized I only answered half of your question. RE my long-term plan, I honestly don't know what to do to reduce the risk of value drift if I don't end up giving to the long-term investment fund. Reducing value drift seems like an important open problem.

Comment by MichaelDickens on Where are you donating in 2020 and why? · 2020-11-25T17:39:50.056Z · EA · GW

This year, I am investing to give with 100% of my donation budget. I am moderately convinced by the arguments in favor of giving later. I'm not entirely convinced—in particular, for some types of work (such as foundational research), it seems more important to do early—but the state of knowledge on the question seems to be improving rapidly. If (to simplify) the optimal time to donate is either now or centuries from now, then it seems much less harmful to incorrectly donate a few years too late than to incorrectly donate centuries too early. So the safer choice is not to donate anything right now.

My biggest concern with investing to give is that I will become less altruistic over time, and won't end up donating the money. I considered putting my donation budget into a donor-advised fund, but I decided against it for the reasons explained here.

Alternatively, I could donate a little of my donation budget and invest the rest, but I'm willing to bite the bullet on the argument that all altruistic funds on the margin should be invested.

(My income is unusually low this year, so I barely have a donation budget anyway. But this is what I'd do if I had more money.)

Comment by MichaelDickens on Uncorrelated Investments for Altruists · 2020-11-24T20:01:31.351Z · EA · GW

If you're long-only, it probably makes more sense to buy VMOT than QVAL/IVAL/QMOM/IMOM. VMOT is a fund that holds those four funds, but also includes a tactical trendfollowing component, so it moves to market neutral under certain market conditions. This tends to reduce correlation to the broad stock market, particularly during downturns.

Here's my basic thinking on the tradeoffs between those three options:

  • I would predict VMOT to have the highest forward-looking risk-adjusted return with moderate correlation to ordinary investments.
  • EDC probably has the highest expected return, but also the highest volatility, and pretty high correlation to ordinary investments.
  • AXS Chesapeake probably has close to zero correlation to ordinary investments, and with risk-adjusted performance that's not much worse than VMOT.

I'm inclined to say AXS Chesapeake would make the most sense to buy, because getting low correlation is more important than getting the highest possible expected return.

Comment by MichaelDickens on Where are you donating in 2020 and why? · 2020-11-23T17:36:54.558Z · EA · GW

Side note:

I'd previously gotten into a rather weird Feb donation cycle so I'm looking to shift this year back to December.

You might consider keeping with your February donation cycle. I've heard from some charities that they don't like how a disproportionate amount of their funding comes from December donations, because it makes budget planning much harder.

Comment by MichaelDickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-11-23T17:09:08.050Z · EA · GW

Good to hear, thanks for confirming!

Comment by MichaelDickens on A Complete Quantitative Model for Cause Selection · 2020-11-04T20:22:45.137Z · EA · GW

My apologies, I'm not very good at monitoring it, so occasionally it breaks and I don't notice. It should be working now.

Comment by MichaelDickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-30T12:44:59.582Z · EA · GW

I have no comment on whether it's a good idea to build the global market portfolio with leveraged ETFs, but since you asked:

You can use the screener to find ETFs matching your criteria. I just searched on there and based on the 10 minutes I spent looking, I think this is about the closest you can get:

20% SPXL: 3x leveraged S&P 500
30% EFO: 2x leveraged MSCI EAFE (developed markets, excluding US)
5% EDC: 3x leveraged emerging markets equity
40% TMF: 3x leveraged 20+ year US Treasury bonds
5% UGL: 2x leveraged gold

This is still not really the global market portfolio, but it's at least kind of close. Also a couple of these ETFs are really small, so they'll have high trading costs.

Comment by MichaelDickens on seanrson's Shortform · 2020-10-27T21:41:00.901Z · EA · GW

You might try the East Bay EA/Rationality Housing Board.

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T16:21:50.491Z · EA · GW

labor has an opportunity cost of $3 million per year

This seems really high. You could hire an experienced investment manager for a lot less than that. But the general structure of your analysis seems sound.

Another consideration is that you can probably reduce correlation to other altruists' investments (I wrote about this a bit here, and I'm currently writing something more detailed). Uncorrelated investments have much higher marginal utility of returns, at least until they become popular enough that they represent a significant percentage of the altruistic portfolio. And leveraging uncorrelated investments looks particularly promising. So you could get more than a 1% excess certainty equivalent return that way.

Edit: Published Uncorrelated Investments for Altruists

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:17:33.679Z · EA · GW

Yeah, because adding leverage will increase taxes on dividends. My calculator correctly accounts for this, but I didn't account for it in my previous comment. But it doesn't lower the certainty-equivalent rate by much.

Also, do you happen to know how effortful and feasible tax loss harvesting might be for leveraged portfolios in taxable accounts?

It shouldn't be too hard, but I don't think you'd get much benefit from it. I'm not sure though, I'm not too familiar with the mechanics of tax loss harvesting.

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:12:49.057Z · EA · GW

5% is the geometric mean return; the Samuelson share formula uses the arithmetic mean in the numerator (see here). So the correct formula is (5% + 0.16^2/2)/(0.16^2 * 1) = 2.45.
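The arithmetic, spelled out:

```python
# Samuelson share with a geometric-mean input: convert to an arithmetic
# mean first (arithmetic ≈ geometric + sigma^2/2), then divide by
# variance times relative risk aversion.
geo_mean, sigma, rra = 0.05, 0.16, 1.0
arith_mean = geo_mean + sigma**2 / 2             # ≈ 0.0628
optimal_leverage = arith_mean / (sigma**2 * rra)
print(round(optimal_leverage, 2))                # → 2.45
```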

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-24T19:22:13.091Z · EA · GW

Do you know if it possible to give to an EA Fund from a DAF?

That should definitely be possible.

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T21:22:31.866Z · EA · GW

We can estimate how valuable that would be by comparing the certainty-equivalent interest rates (I talked about this here).

Here's some quick analysis using that approach:

Historical long-run global equities have returned about 5% with a standard deviation of about 16% (source). Let's use that as a rough forward-looking estimate.

  • With a relative risk aversion (RRA) coefficient of 1 (= logarithmic utility), the certainty-equivalent interest rate of an un-leveraged portfolio is 5% (logarithmic utility doesn't care about standard deviation as long as geometric return is held constant). With optimal leverage (2.45:1), the certainty-equivalent rate is 7.7%. That means the ability to get leverage is as good as a guaranteed 2.7% extra return (= 7.7% - 5.0%).
  • With RRA=1.5, optimal leverage = 1.63:1, and the excess certainty-equivalent rate is 0.8%.
  • With RRA=2, optimal leverage = 1.22:1, and the excess certainty-equivalent rate = 0.14%.
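These numbers come from the Samuelson share and the isoelastic certainty-equivalent formula; the sketch below reproduces them to within rounding (assuming log-normal returns and frictionless leverage at the risk-free rate):

```python
# Certainty-equivalent (CE) rate under isoelastic utility with log-normal
# returns: at leverage k, CE ≈ k*mu_a - rra*(k*sigma)^2/2, where mu_a is
# the arithmetic mean return. The optimal k is the Samuelson share.
geo_mean, sigma = 0.05, 0.16
mu_a = geo_mean + sigma**2 / 2           # geometric -> arithmetic mean

def ce(k, rra):
    return k * mu_a - rra * (k * sigma) ** 2 / 2

for rra in (1.0, 1.5, 2.0):
    k_opt = mu_a / (rra * sigma**2)      # Samuelson share
    excess = ce(k_opt, rra) - ce(1.0, rra)
    print(f"RRA={rra}: optimal leverage {k_opt:.2f}:1, excess CE {excess:.2%}")
```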

I think altruistic RRA is probably somewhere around 1 to 1.5, so under these assumptions, the ability to use leverage is roughly as good as a guaranteed 1-3% extra return.

(FWIW I think you can also get better return by tilting toward the value and momentum factors, so if you're willing to do that, that makes the ability to invest flexibly look relatively more important.)

Comment by MichaelDickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T20:57:48.442Z · EA · GW

I wonder if there's scope for circumventing this issue by setting up a registered charity that can take donations from a DAF and then forward on to wherever the donor desires. Even an existing charity like CEA could act as a middle man like this. Is this a completely silly idea or a promising one?

I'm not sure about this, but I don't think charities are allowed to give money to for-profits.

Comment by MichaelDickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-23T20:52:58.867Z · EA · GW

My thinking is that donating during drawdowns might be particularly bad

This is true, and the standard deviation fully captures the extent to which drawdowns are bad (assuming isoelastic utility and log-normal returns). Increasing the standard deviation is bad because doing so increases the probability of both very good and very bad outcomes, and under isoelastic utility, bad outcomes hurt more than equally sized good outcomes help.
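A two-line illustration of that asymmetry with log utility (RRA = 1):

```python
import math

# With log utility, a +10% outcome and a -10% outcome don't cancel:
# the loss costs more utility than the equal-sized gain adds.
gain = math.log(1.10)   # ≈ +0.0953
loss = math.log(0.90)   # ≈ -0.1054
print(gain + loss)      # negative: the pair is net-bad in utility terms
```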

Is it actually the Sharpe ratio that should be maximized with isoelastic utility (assuming log-normal returns, was it?)?

Yes, if you also assume that you can freely use leverage. The portfolio with the maximum Sharpe ratio allows for the highest expected return at a given standard deviation, or the lowest standard deviation at a given expected return.

Comment by MichaelDickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-22T18:18:41.289Z · EA · GW

Thank you, I appreciate the positive feedback, especially from someone as knowledgeable as you!