## Posts

Asset Allocation and Leverage for Altruists with Constraints 2020-12-14T20:48:26.789Z
Uncorrelated Investments for Altruists 2020-11-23T23:03:23.933Z
Donor-Advised Funds vs. Taxable Accounts for Patient Donors 2020-10-19T20:38:23.801Z
The Risk of Concentrating Wealth in a Single Asset 2020-10-18T17:15:17.651Z
MichaelDickens's Shortform 2020-09-24T00:01:24.005Z
"Disappointing Futures" Might Be As Important As Existential Risks 2020-09-03T01:15:50.466Z
Giving Now vs. Later for Existential Risk: An Initial Approach 2020-08-29T01:04:34.488Z
Should We Prioritize Long-Term Existential Risk? 2020-08-20T02:23:43.393Z
The Importance of Unknown Existential Risks 2020-07-23T19:09:56.031Z
Estimating the Philanthropic Discount Rate 2020-07-03T16:58:54.771Z
How Much Leverage Should Altruists Use? 2020-01-07T04:25:31.492Z
How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? 2019-02-03T01:11:09.991Z
Should Global Poverty Donors Give Now or Later? An In-Depth Analysis 2019-01-22T04:45:56.500Z
Why Do Small Donors Give Now, But Large Donors Give Later? 2018-10-28T01:51:56.710Z
Where Some People Donated in 2017 2018-02-11T21:55:09.730Z
Where I Am Donating in 2016 2016-11-01T04:10:02.389Z
Dedicated Donors May Not Want to Sign the Giving What We Can Pledge 2016-10-30T03:26:44.215Z
Altruistic Organizations Should Consider Counterfactuals When Hiring 2016-09-11T04:19:39.164Z
Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering 2016-08-26T02:08:53.190Z
Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) 2016-06-10T21:35:50.236Z
A Complete Quantitative Model for Cause Selection 2016-05-18T02:17:28.769Z
Quantifying the Far Future Effects of Interventions 2016-05-18T02:15:07.240Z
GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics 2016-05-17T01:51:15.218Z
On Priors 2016-04-26T22:35:14.359Z
How Should a Large Donor Prioritize Cause Areas? 2016-04-25T20:46:38.304Z
Expected Value Estimates You Can (Maybe) Take Literally 2016-04-06T15:11:59.359Z
Are GiveWell Top Charities Too Speculative? 2015-12-21T04:05:07.675Z
More on REG's Room for More Funding 2015-11-16T17:31:40.493Z
Cause Selection Blogging Carnival Conclusion 2015-10-05T20:16:43.945Z
Charities I Would Like to See 2015-09-20T15:22:43.083Z
My Cause Selection: Michael Dickens 2015-09-15T23:29:40.701Z
Some Writings on Cause Selection 2015-09-08T21:56:01.033Z
EA Blogging Carnival: My Cause Selection 2015-08-16T01:07:22.005Z
Why Effective Altruists Should Use a Robo-Advisor 2015-08-04T03:37:13.789Z
Stanford EA History and Lessons Learned 2015-07-02T03:36:56.688Z
How We Run Discussions at Stanford EA 2015-04-14T16:36:05.363Z
Meetup : Stanford THINK 2014-10-23T02:10:42.641Z

Comment by michaeldickens on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2021-01-12T18:55:04.075Z · EA · GW

The stock market should grow faster than GDP in the long run. Three different simple arguments for this:

1. This falls out of the commonly-used Ramsey model. Specifically, because people discount the future, they will demand that their investments return more than the growth rate of the general economy.
2. Corporate earnings should grow at the same rate as GDP, and stock prices should grow at the same rate as earnings. But stock investors also earn dividends, so total returns should exceed GDP growth in the long run. (This works because, in aggregate, investors spend the dividends rather than re-investing them.)
3. Stock returns are more volatile than economic growth, so they should pay a risk premium even if they don't have a higher risk-adjusted return.
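Argument 2 is just arithmetic; as a sanity check, with purely illustrative numbers:

```python
# Toy version of argument 2: if stock prices track earnings, which
# track GDP, total return is price growth plus the dividend yield.
# The 3% and 2% figures are illustrative assumptions, not forecasts.
gdp_growth = 0.03
dividend_yield = 0.02
price_growth = gdp_growth              # prices grow with earnings/GDP
total_return = price_growth + dividend_yield
print(total_return, total_return > gdp_growth)
```

So as long as companies pay any dividends at all, total stock returns exceed GDP growth under these assumptions.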
Comment by michaeldickens on Uncorrelated Investments for Altruists · 2021-01-11T19:43:48.828Z · EA · GW

(These numbers are actually more similar than I expected—I would have predicted the top-10% portfolio to have something like 5x more value factor loading than the top-half portfolio, not 2x.)

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2021-01-11T19:01:35.663Z · EA · GW

I'm not sure how to calculate it precisely; I think you'd want to run a regression where the independent variable is the value factor and the dependent variable is the fund or strategy being considered. But roughly speaking, a Vanguard value fund holds the 50% cheapest stocks (according to the value factor), while QVAL and IVAL hold the 5% cheapest stocks, so they are 10x more concentrated, which loosely justifies a 10x higher expense ratio. That said, 10x higher concentration doesn't necessarily mean 10x more exposure to the value factor; it's probably substantially less than that.

I just ran a couple of quick regressions using Ken French data, and it looks like if you buy the top half of value stocks (size-weighted) while shorting the market, that gives you 0.76 exposure to the value factor, and buying the top 10% (equal-weighted) while shorting the market gives you 1.3 exposure (so 1.3 is the slope of a regression between that strategy and the value factor). Not sure I'm doing this right, though.
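To make the procedure concrete, here's a sketch of that regression with randomly generated data standing in for the Ken French series (the 1.3 loading is baked into the fake data, so this only demonstrates the method, not the empirical result):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly returns standing in for the Ken French data;
# a true loading of 1.3 is baked in for illustration.
hml = rng.normal(0.003, 0.03, 600)               # value factor (HML)
strategy = 1.3 * hml + rng.normal(0, 0.01, 600)  # concentrated value strategy

# OLS regression of the strategy on the factor; the slope is the
# strategy's loading on the value factor.
X = np.column_stack([np.ones_like(hml), hml])
intercept, loading = np.linalg.lstsq(X, strategy, rcond=None)[0]
print(round(loading, 2))  # recovers a loading close to 1.3
```

With the real data you'd replace the synthetic series with the downloaded factor and portfolio returns (and probably control for the market and size factors too).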

To look at it another way, the top-half portfolio described above had a 5.4% annual return (gross), while the top-10% portfolio returned 12.8% (both had similar Sharpe ratios). Note that most of this difference comes from the fact that the first portfolio is size-weighted and the second is equal-weighted; I did it that way because most big value funds are size-weighted, while QVAL/IVAL are equal-weighted.

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2021-01-11T18:28:24.445Z · EA · GW

That could help. "Standard" trendfollowing rebalances monthly because it's simple, frequent enough to capture most changes in trends, but infrequent enough that it doesn't incur a lot of transaction costs. But there could be more complicated approaches that do a better job of capturing trends without incurring too many extra costs. One idea I've considered is to look at buy-side signals monthly but sell-side signals daily, so if the market switches from a positive to negative trend, you'll sell the following day, but if it switches back, you won't buy until the next month. On the backtests I ran, it seemed to work reasonably well.

These were the results of a backtest I ran using the Ken French data on US stock returns 1926-2018:

| Strategy | Return (%) | Std Dev (%) | Ulcer | Trades/Year |
|---|---|---|---|---|
| B&H | 9.5 | 16.8 | 23.0 | |
| Monthly | 9.3 | 11.7 | 14.4 | 1.4 |
| Daily | 10.7 | 11.0 | 9.6 | 5.1 |
| Sell-Daily | 9.7 | 10.3 | 9.2 | 2.3 |

("Ulcer" is the ulcer index, which IMO is a better measure of downside risk than standard deviation. It basically tells you the frequency and severity of drawdowns.)
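For concreteness, here is a minimal implementation of the ulcer index, following the standard definition (root-mean-square percentage drawdown from the running peak):

```python
import math

def ulcer_index(values):
    """Root-mean-square percentage drawdown from the running peak."""
    peak = values[0]
    sq_drawdowns = []
    for v in values:
        peak = max(peak, v)
        sq_drawdowns.append(((v - peak) / peak * 100) ** 2)
    return math.sqrt(sum(sq_drawdowns) / len(sq_drawdowns))

# A series that sits 10% below its peak half the time:
print(ulcer_index([100, 90, 100, 90, 100, 90]))  # ≈ 7.07
```

Unlike standard deviation, a series that only ever rises scores zero, no matter how volatile the gains are.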

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2021-01-11T17:53:12.816Z · EA · GW

The AlphaArchitect funds (except for VMOT) are long-only, so they're going to be pretty correlated with the market. The idea is you buy those funds (or something similar) while simultaneously shorting the market.

> And I've heard it claimed that assets in general tend to be more correlated during drawdowns.

This is true. Factors aren't really asset classes, but the same holds for some factors. This AQR paper looked at the performance of a bunch of diversifiers during drawdowns and found that trendfollowing provided good returns, as did "styles", by which they mean a long/short factor portfolio consisting of the value, momentum, carry, and quality factors. I'd have to do some more research to say how each of those four factors has tended to perform during drawdowns, so take this with a grain of salt, but IIRC:

• value and carry tend to perform somewhat poorly
• quality tends to perform well
• momentum tends to perform well during drawdowns, but then performs really badly when the market turns around (e.g., this happened in 2009)

I'm talking about long/short factors here, so e.g., if the value factor has negative performance, that means long-only value stocks perform worse than the market.

Also, short-term trendfollowing (e.g., 3-month moving average) tends to perform better during drawdowns than long-term trendfollowing (~12 month moving average), but it has worse long-run performance, and both tend to beat the market, so IMO it makes more sense to use long-term trendfollowing.

Of course, we never know how these patterns will hold up in the future. For example, the 2020 drawdown happened much more quickly than usual—the market dropped around 30% in a month, as opposed to, say, the 2000-2002 drawdown, where the market dropped 50% over the course of two years. Trendfollowing tends to perform worse in rapid drawdowns because it doesn't have time to rebalance, although it happened to perform reasonably well this year.

There's a lot more I could say about the implementation of trendfollowing strategies, but I don't want to get too verbose so I'll stop there.

Comment by michaeldickens on Where are you donating in 2020 and why? · 2021-01-04T18:10:35.729Z · EA · GW

Monthly is fine; it's probably better for charities. I personally donate annually because it's a lot simpler. I donate appreciated stock, and transferring stock is a substantial amount of work.

Comment by michaeldickens on Big List of Cause Candidates · 2020-12-26T05:19:14.559Z · EA · GW

At the risk of being overly self-promotional, I have written a few posts on cause candidates that I don't see listed here.

Another potential cause area that's not listed: reducing value drift (e.g., this post).

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2020-12-14T18:14:40.173Z · EA · GW

I only skimmed the linked source but my rough impression is that I'm fairly bearish on art, mainly because there's no expectation that it will appreciate. The linked article doesn't really present evidence to the contrary—the only relevant bit I saw was a graph showing appreciation from 2000 to 2010. Ten years of appreciation is almost meaningless; I'd want to see more like 50 years of data showing an asset class has positive real returns.

Perhaps it would be worth buying art if you have some reason to believe you can outperform the market at predicting which pieces will be more valuable in the future. The art market is probably less efficient than more liquid financial markets, but on priors I wouldn't expect to be able to pick "winning" art pieces.

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2020-12-07T23:36:06.541Z · EA · GW

That's an interesting idea, I'm thinking about the best way to model it. I think what you'd want to do is to calculate the safe withdrawal rate for different portfolios and see which is best. The problem is, we don't have enough historical data to get good results, so we'd have to do simulations. But those simulations couldn't assume that returns follow a log-normal distribution, because the fact that assets tend to experience big drawdowns substantially affects the safe withdrawal rate.
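A crude sketch of the kind of simulation I have in mind, with purely illustrative parameters (a real version would need a properly calibrated model of drawdowns rather than this ad hoc mixture):

```python
import random

def failure_rate(withdrawal, crash_prob=0.0, n_sims=2000, years=50):
    """Fraction of simulated paths where wealth is exhausted.

    Ordinary years draw a normally distributed return; with
    probability crash_prob a year is instead a -40% crash, a crude
    stand-in for the fat left tail of real asset returns.
    """
    failures = 0
    for _ in range(n_sims):
        wealth = 1.0
        for _ in range(years):
            if random.random() < crash_prob:
                ret = -0.40                      # crash year (fat tail)
            else:
                ret = random.gauss(0.05, 0.16)   # ordinary year
            wealth = wealth * (1 + ret) - withdrawal
            if wealth <= 0:
                failures += 1
                break
    return failures / n_sims

random.seed(0)
base = failure_rate(0.03)
fat = failure_rate(0.03, crash_prob=0.05)
print(base, fat)  # adding crash years raises the failure rate
```

The point is just that the same withdrawal rate looks much less safe once large drawdowns are possible, which is why a log-normal assumption would be misleading here.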

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2020-12-04T18:38:21.484Z · EA · GW

> In my experience, when the market is down a lot, the payouts would increase as a percentage, because donors would not want to have inefficient cuts in charities.

This is a good point that I hadn't thought of. This would still reduce donations overall, right? Because if people donate a larger % when markets are down, that means they have less money to donate later. It's not obvious to me offhand how this should be modeled, but that's something to think about.

I do agree that a fully market-neutral position is probably not optimal in practice. That only makes sense if you assume leverage costs the risk-free rate, you can get however much leverage you want, and you can rebalance continuously with no transaction costs. If you impose more realistic restrictions, you probably want to aim for a higher expected return with low fees rather than going for pure market neutral. I'm writing a new essay about this right now. According to my new model, the optimal allocation under realistic costs and restrictions is something like 200% long, 50% short. In my previous essay on leverage, I do think I overstated the value of reducing correlation rather than increasing expected return.

Comment by michaeldickens on Where are you donating in 2020 and why? · 2020-11-25T20:06:43.719Z · EA · GW

There's a good chance I will give to the long-term investment fund once it's up and running, depending on how much I like its investment portfolio. I think the optimal altruistic portfolio (on the margin) looks pretty weird, and they might not want to invest like that. (It might be entirely rational for the long-term investment fund not to invest in a way that looks too weird, because that could make it harder to attract donations.)

EDIT: I realized I only answered half of your question. RE my long-term plan, I honestly don't know what to do to reduce the risk of value drift if I don't end up giving to the long-term investment fund. Reducing value drift seems like an important open problem.

Comment by michaeldickens on Where are you donating in 2020 and why? · 2020-11-25T17:39:50.056Z · EA · GW

This year, I am investing to give with 100% of my donation budget. I am moderately convinced by the arguments in favor of giving later. I'm not entirely convinced—in particular, for some types of work (such as foundational research), it seems more important to do early—but the state of knowledge on the question seems to be improving rapidly. If (to simplify) the optimal time to donate is either now or centuries from now, then it seems much less harmful to incorrectly donate a few years too late than to incorrectly donate centuries too early. So the safer choice is not to donate anything right now.

My biggest concern with investing to give is that I will become less altruistic over time, and won't end up donating the money. I considered putting my donation budget into a donor-advised fund, but I decided against it for the reasons explained here.

Alternatively, I could donate a little of my donation budget and invest the rest, but I'm willing to bite the bullet on the argument that all altruistic funds on the margin should be invested.

(My income is unusually low this year, so I barely have a donation budget anyway. But this is what I'd do if I had more money.)

Comment by michaeldickens on Uncorrelated Investments for Altruists · 2020-11-24T20:01:31.351Z · EA · GW

If you're long-only, it probably makes more sense to buy VMOT than QVAL/IVAL/QMOM/IMOM. VMOT is a fund that holds those four funds, but also includes a tactical trendfollowing component, so it moves to market neutral under certain market conditions. This tends to reduce correlation to the broad stock market, particularly during downturns.

Here's my basic thinking on the tradeoffs between those three options:

• I would predict VMOT to have the highest forward-looking risk-adjusted return with moderate correlation to ordinary investments.
• EDC probably has the highest expected return, but also the highest volatility, and pretty high correlation to ordinary investments.
• AXS Chesapeake probably has close to zero correlation to ordinary investments, with risk-adjusted performance that's not much worse than VMOT's.

I'm inclined to say AXS Chesapeake would make the most sense to buy, because getting low correlation is more important than getting the highest possible expected return.

Comment by michaeldickens on Where are you donating in 2020 and why? · 2020-11-23T17:36:54.558Z · EA · GW

Side note:

> I'd previously gotten into a rather weird Feb donation cycle so I'm looking to shift this year back to December.

You might consider keeping with your February donation cycle. I've heard from some charities that they don't like how a disproportionate amount of their funding comes from December donations, because it makes budget planning much harder.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-11-23T17:09:08.050Z · EA · GW

Good to hear, thanks for confirming!

Comment by michaeldickens on A Complete Quantitative Model for Cause Selection · 2020-11-04T20:22:45.137Z · EA · GW

My apologies, I'm not very good at monitoring it, so occasionally it breaks and I don't notice. It should be working now.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-30T12:44:59.582Z · EA · GW

I have no comment on whether it's a good idea to build the global market portfolio with leveraged ETFs, but since you asked:

You can use the etf.com screener to find ETFs matching your criteria. I just searched on there and based on the 10 minutes I spent looking, I think this is about the closest you can get:

20% SPXL: 3x leveraged S&P 500
30% EFO: 2x leveraged MSCI EAFE (developed markets, excluding US)
5% EDC: 3x leveraged emerging markets equity
40% TMF: 3x leveraged 20+ year US Treasury bonds
5% UGL: 2x leveraged gold


This is still not really the global market portfolio, but it's at least kind of close. Also a couple of these ETFs are really small, so they'll have high trading costs.

Comment by michaeldickens on seanrson's Shortform · 2020-10-27T21:41:00.901Z · EA · GW

You might try the East Bay EA/Rationality Housing Board.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T16:21:50.491Z · EA · GW

> labor has an opportunity cost of \$3 million per year

This seems really high. You could hire an experienced investment manager for a lot less than that. But the general structure of your analysis seems sound.

Another consideration is that you can probably reduce correlation to other altruists' investments (I wrote about this a bit here, and I'm currently writing something more detailed). Uncorrelated investments have much higher marginal utility of returns, at least until they become popular enough that they represent a significant percentage of the altruistic portfolio. And leveraging uncorrelated investments looks particularly promising. So you could get more than a 1% excess certainty equivalent return that way.

Edit: Published Uncorrelated Investments for Altruists

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:17:33.679Z · EA · GW

Yeah, because adding leverage will increase taxes on dividends. My calculator correctly accounts for this, but I didn't account for it in my previous comment. But it doesn't lower the certainty-equivalent rate by much.

> Also, do you happen to know how effortful and feasible tax loss harvesting might be for leveraged portfolios in taxable accounts?

It shouldn't be too hard, but I don't think you'd get much benefit from it. I'm not sure, though; I'm not too familiar with the mechanics of tax loss harvesting.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:12:49.057Z · EA · GW

5% is the geometric mean return; the Samuelson share formula uses the arithmetic mean in the numerator (see here). So the correct formula is (5% + 0.16^2/2)/(0.16^2 * 1) = 2.45.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-24T19:22:13.091Z · EA · GW

> Do you know if it possible to give to an EA Fund from a DAF?

That should definitely be possible.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T21:22:31.866Z · EA · GW

We can estimate how valuable that would be by comparing the certainty-equivalent interest rates (I talked about this here).

Some quick analysis using leverage.py:

Historical long-run global equities have returned about 5% with a standard deviation of about 16% (source). Let's use that as a rough forward-looking estimate.

• With a relative risk aversion (RRA) coefficient of 1 (= logarithmic utility), the certainty-equivalent interest rate of an un-leveraged portfolio is 5% (logarithmic utility doesn't care about standard deviation as long as geometric return is held constant). With optimal leverage (2.45:1), the certainty-equivalent rate is 7.7%. That means the ability to get leverage is as good as a guaranteed 2.7% extra return (= 7.7% - 5.0%).
• With RRA=1.5, optimal leverage = 1.63:1, and the excess certainty-equivalent rate is 0.8%.
• With RRA=2, optimal leverage = 1.22:1, and the excess certainty-equivalent rate = 0.14%.

I think altruistic RRA is probably somewhere around 1 to 1.5, so under these assumptions, the ability to use leverage is roughly as good as a guaranteed 1-3% extra return.
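These numbers can be approximately reproduced with a short sketch. This is a simplification of what leverage.py does: it assumes log-normal returns, isoelastic utility, and costless leverage at the risk-free rate.

```python
def ce_rate(k, arith_mean, sigma, rra):
    """Certainty-equivalent return of a portfolio leveraged k:1,
    assuming log-normal returns and isoelastic utility with
    relative risk aversion rra."""
    return k * arith_mean - rra * (k * sigma) ** 2 / 2

geom_mean, sigma = 0.05, 0.16
arith_mean = geom_mean + sigma ** 2 / 2  # ≈ 6.28%

for rra in (1, 1.5, 2):
    k_opt = arith_mean / (rra * sigma ** 2)  # Samuelson share
    excess = (ce_rate(k_opt, arith_mean, sigma, rra)
              - ce_rate(1, arith_mean, sigma, rra))
    print(f"RRA={rra}: optimal leverage {k_opt:.2f}:1, "
          f"excess CE rate {excess:.2%}")
```

Running this gives excess certainty-equivalent rates of about 2.7%, 0.8%, and 0.14% for RRA of 1, 1.5, and 2 respectively, matching the bullets above.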

(FWIW I think you can also get better return by tilting toward the value and momentum factors, so if you're willing to do that, that makes the ability to invest flexibly look relatively more important.)

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T20:57:48.442Z · EA · GW

> I wonder if there's scope for circumventing this issue by setting up a registered charity that can take donations from a DAF and then forward on to wherever the donor desires. Even an existing charity like CEA could act as a middle man like this. Is this a completely silly idea or a promising one?

I'm not sure about this, but I don't think charities are allowed to give money to for-profits.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-23T20:52:58.867Z · EA · GW

> My thinking is that donating during drawdowns might be particularly bad

This is true, and the standard deviation fully captures the extent to which drawdowns are bad (assuming isoelastic utility and log-normal returns). Increasing the standard deviation is bad because doing so increases the probability of both very good and very bad outcomes, and bad outcomes are more bad than good outcomes are good.

> Is it actually the Sharpe ratio that should be maximized with isoelastic utility (assuming log-normal returns, was it?)?

Yes, if you also assume that you can freely use leverage. The portfolio with the maximum Sharpe ratio allows for the highest expected return at a given standard deviation, or the lowest standard deviation at a given expected return.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-22T18:18:41.289Z · EA · GW

Thank you, I appreciate the positive feedback, especially from someone as knowledgeable as you!

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:42:27.673Z · EA · GW

Worth remembering that, especially today, there are hundreds of thousands to millions of other highly intelligent and resourceful people trying to do the exact same thing. So you need to have a reason to believe you can do a better job than they can.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-19T17:39:00.637Z · EA · GW

I don't think that's how DAFs work? I believe the DAF legally owns the money and can do anything they want with it. You can ask them to donate the money to a different DAF that you created, but they have the right to refuse to do that.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:37:09.524Z · EA · GW

The answer sort of depends on what you mean by moonshot, but under one reasonable definition, it's actually the opposite: investing in potential moonshots would have resulted in worse performance than an index fund. Or to put it another way, boring companies tend to outperform exciting companies.

You can divide stocks into two types: growth stocks and value stocks. Value stocks are cheaply priced relative to their fundamentals (e.g., they have a low price to earnings or price to sales ratio) because the market expects these companies to be "boring" and not show good earnings growth. Growth stocks are priced expensively because the market expects them to grow. This sounds basically like what you're talking about with "moonshot" companies. If you wanted to systematically invest in moonshots, you could maybe buy the 10% most expensive stocks, because these are the ones the market believes have the most upside potential. But if you did that historically, you would've underperformed the market by a lot—something on the order of 5 percentage points per year. The seminal paper on this is Fama & French (1992), The Cross-Section of Expected Stock Returns.

In theory, savvy investors could identify the most promising publicly-traded growth companies and outperform the market by buying them. But studies on fund managers have found that pretty much nobody can do this.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:12:51.173Z · EA · GW

IMO the ulcer index is the best measure of volatility that matches what people intuitively care about. It essentially measures the frequency and severity of drawdowns (the linked page explains it in more detail).

I didn't discuss the ulcer index in this post because in theory, investors with isoelastic utility should care about standard deviation, not drawdowns, and I lean toward the belief that people's focus on drawdowns is somewhat irrational (although probably somewhat justified by the fact that most asset returns are left-skewed). But broadly speaking, if you use the ulcer index as your measure of risk, concentrating in a small number of assets looks even worse than if you use standard deviation, so the case for diversification is even stronger.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:09:01.868Z · EA · GW

Even if you can beat the market by buying a basket of houses (which I'm not sure is true), buying a single house has probably 2-3x the risk of the broad real estate market and 3-4x the risk of the global market portfolio, assuming real estate works similarly to stocks (which is probably a reasonable assumption). So it still seems like a bad idea, for the reasons discussed in the essay.

It might make sense if a bunch of individual EAs buy real estate such that the overall portfolio is well-diversified. I don't expect this to happen in practice, because EAs are geographically concentrated in a small number of cities, so if people own investment properties in the cities where they live, the overall EA real estate portfolio will be too concentrated in those cities.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-16T03:40:13.597Z · EA · GW

One concern I have with the Community Foundation in Boulder is it's not clear how committed they are to letting donors direct money however they want. Unlike the national DAF providers (Schwab, Vanguard, Fidelity), it seems like there's a decent chance they will at some point change their mind and decide you are only allowed to give to the causes that they like. How are you thinking about that risk?

Comment by michaeldickens on Linch's Shortform · 2020-10-15T03:27:43.611Z · EA · GW

I haven't really thought about it, but it seems to me that if an empirical claim implies an implausible normative claim, that lowers my subjective probability of the empirical claim.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-15T03:24:54.794Z · EA · GW

Cool! Does that mean you're overweighting emerging markets?

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-12T22:42:53.874Z · EA · GW

How did you estimate the expected return in a DAF vs. unconstrained?

Comment by michaeldickens on Thomas Kwa's Shortform · 2020-10-11T18:41:00.324Z · EA · GW

Yes

Comment by michaeldickens on Thomas Kwa's Shortform · 2020-10-11T03:39:30.602Z · EA · GW

Probably the easiest way to do this is to give to a donor-advised fund, and then instruct the fund to give to the EA Fund. Even for charities that can accept stock, my experience has been that donating through a donor-advised fund is much easier (it requires less paperwork).

Comment by michaeldickens on niplav's Shortform · 2020-10-05T21:38:29.170Z · EA · GW

If there is a non-trivial possibility that a zero discount rate is correct, then the case with a zero discount rate dominates expected value calculations. See https://scholar.harvard.edu/files/weitzman/files/why_far-distant_future.pdf
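To illustrate the paper's mechanism with made-up numbers: under uncertainty about the discount rate, the certainty-equivalent rate falls toward the lowest possible rate as the horizon grows, so the zero-rate scenario dominates at long horizons.

```python
import math

# Possible discount rates and (made-up) subjective probabilities:
rates = [0.0, 0.03]
probs = [0.1, 0.9]

def certainty_equivalent_rate(t):
    """Annualized rate implied by the expected discount factor at horizon t."""
    expected_factor = sum(p * math.exp(-r * t) for p, r in zip(probs, rates))
    return -math.log(expected_factor) / t

print(certainty_equivalent_rate(10))    # near the probability-weighted mean rate
print(certainty_equivalent_rate(1000))  # approaches the lowest rate, 0
```

Even with only a 10% chance that the zero rate is correct, the expected discount factor at 1,000 years is about 0.1 rather than essentially zero, so the far future retains substantial weight.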

Comment by michaeldickens on Denise_Melchin's Shortform · 2020-10-02T20:13:01.750Z · EA · GW

IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.

Comment by michaeldickens on On GiveWell's estimates of the cost of saving a life · 2020-10-01T17:21:05.343Z · EA · GW

GiveWell's full cost-effectiveness calculations are available here: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models

Comment by michaeldickens on Nathan Young's Shortform · 2020-09-26T03:55:18.374Z · EA · GW

> Harris is a marmite figure - in my experience people love him or hate him.

My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.

Comment by michaeldickens on Buck's Shortform · 2020-09-24T00:12:47.637Z · EA · GW

I pretty much agree with your OP. Regarding that post in particular, I am uncertain about whether it's a good or bad post. It's bad in the sense that its author doesn't seem to have a great grasp of longtermism, and the post basically doesn't move the conversation forward at all. It's good in the sense that it's engaging with an important question, and the author clearly put some effort into it. I don't know how to balance these considerations.

Comment by michaeldickens on MichaelDickens's Shortform · 2020-09-24T00:01:24.348Z · EA · GW

"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.

The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.

But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically with effort, or possibly even sub-logarithmically. If effort is growing exponentially, we'd expect to see linear or sub-linear growth in ideas. But instead we see exponential growth in ideas.
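To spell out the baseline I have in mind (the growth rate here is arbitrary): if ideas grew only logarithmically with effort, exponentially growing effort would produce merely linear growth in ideas.

```python
import math

def ideas(year, effort_growth=0.05):
    # Suppose effort grows exponentially...
    effort = math.exp(effort_growth * year)
    # ...and ideas grow only logarithmically with effort.
    return math.log(1 + effort)

for t in (100, 200, 300, 400):
    print(t, round(ideas(t), 1))  # roughly 5.0, 10.0, 15.0, 20.0: linear in t
```

Since Bloom et al. observe exponential (not linear) growth in ideas, the observed returns to effort beat this logarithmic baseline.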

I don't have a great understanding of the math used in this paper, so I might be misinterpreting something.

Comment by michaeldickens on antimonyanthony's Shortform · 2020-09-19T21:22:14.352Z · EA · GW

> This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.

It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.

Comment by michaeldickens on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-18T21:16:52.043Z · EA · GW

Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.

> Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty?

I think so, because the article includes some statements like,

"How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?"

and

"[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’."

Maybe instead of "make decisions under uncertainty", I should have said "make decisions that are informed by uncertain empirical forecasts".

Comment by michaeldickens on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-18T18:20:42.429Z · EA · GW

Agreed with Mathias that the authors have a good grasp of what EA is and what causes EAs prioritize, and I appreciate how respectful the article is. Also like Mathias, I feel like I have some pretty fundamental worldview differences from the authors, so I'm not sure how well I can explain my disagreements. But I'll try my best.

The article's criticism seems to focus on the notion that EA ignores power dynamics and doesn't address the root cause of problems. This is a pretty common criticism. I find it a bit confusing, and I don't really understand what the authors consider to be root causes. For example, efforts to create cheap plant-based or cultured meat seem to address the root cause of factory farming because, if successful, they will eliminate the need to farm and kill sentient animals. AI safety work, if successful, could eliminate the root causes of all suffering and bring about an unimaginably good utopia. But the authors don't seem to agree with me that these qualify as "addressing root causes". I don't understand how they distinguish between the EA work that I perceive as addressing root causes and the things they consider to be root causes. Critics like these authors seem to want EAs to do something that they're not doing, but I don't understand what it is.

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

It seems to me that if rich people come to terms with the origins of their wealth, they might conclude that they don't "deserve" it any more than poor people in Kenya, and decide to distribute the money to them (via GiveDirectly) instead of spending it on themselves. Isn't that ultimately the point? What outcome would the authors like to come out of this self-reflection, if not using their wealth to help disadvantaged people?

EAs spend more time than any other group I know talking about how they are among the richest people in the world, and how they should use their wealth to help the less fortunate. But this doesn't seem to count in the authors' eyes.

This article argues that EAs fixate too much on "doing the most good", and then appears to argue that they believe people should focus on addressing root causes/grassroots activism/power dynamics/etc. because it will do the most good—or maybe I'm misinterpreting the article because I'm seeing it from an EA lens. Sometimes it seems like the authors disagree with EAs about fundamental principles like maximizing good, and other times it seems like they just disagree about what does the most good. I wasn't clear on that.

If they do agree in principle that we should do as much good as possible, then I would like to see a more rigorous justification for why the authors' favored causes do more good than EA causes. I realize they're not as amenable to cost-effectiveness analysis as GiveWell's top charities are, but I would like to see at least some attempt at a justification.

For example, many EAs prioritize existential risk. There's no rigorous cost-effectiveness analysis of x-risk, but you can at least make an argument that it's more cost-effective than other things:

1. Extinction is way worse than anything else.
2. Extinction is not that unlikely.
3. We can probably make significant progress on reducing extinction risk.

Bostrom basically makes this argument in Existential Risk Prevention as Global Priority.

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.

More broadly, I would have an easier time understanding articles like these if they gave more concrete examples of what they consider to be the best things to work on, and why—something more specific than "grassroots activism". For example (not saying I think the authors believe this, just that this is the general sort of thing I'd like to see):

We should support community groups that organize meetups where they promote the idea of the fundamental unfairness of global wealth inequality. We believe that once sufficiently many people worldwide are paying attention to this problem, people will develop and move toward a new system of government that will redistribute wealth and provide basic services to everyone. We aren't sure what this government structure will look like, but we're confident that it's possible because [insert argument here]. We also believe this plan has a good chance of getting broad support because [insert argument here], and that once it has broad support, it has a good chance of actually getting implemented, because [insert argument here].

Comment by michaeldickens on Pablo Stafforini’s Forecasting System · 2020-09-16T23:41:33.612Z · EA · GW

Pablo, is any of your custom Emacs code publicly available?

Comment by michaeldickens on Pablo Stafforini’s Forecasting System · 2020-09-16T23:40:02.875Z · EA · GW

Then I go back to Emacs, position the cursor anywhere on the Metaculus section, and press a shortcut. A whole new question is created as a to-do task.

This is an aside, but you can generate org-mode entries from templates from anywhere in Emacs using org capture—you don't have to position your cursor in the correct section. This is one of my favorite features of org mode.

Comment by michaeldickens on Formalizing longtermism · 2020-09-16T17:25:03.627Z · EA · GW

Do you have any notion as to the solution to this model (for some reasonable parameter values)? I've tried to solve models like this one and haven't succeeded, although I'm not good at differential equations.

It looks to me like it's unsolvable without some nonzero exogenous extinction risk, because otherwise there will be multiple parameter choices that result in infinite utility, so you can't say which one is best. But it's not clear what rate of exogenous x-risk to use, and our distribution over possible values might still result in infinite utility in expectation.

Perhaps you could simplify the model by leaving out the concept of improving technology, and just say you can either spend on safety, spend on consumption, or invest to grow your capital. That might make the model easier to solve, and I don't think it loses much explanatory power. (It would still have the infinity problem.)
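As a sketch of what I mean, here is a minimal discrete-time version of that simplified model. Every function, parameter name, and value below is hypothetical, chosen only to show the structure of the problem, not taken from your post:

```python
import math

def expected_utility(c_frac, s_frac, capital=1.0, growth=1.05,
                     base_hazard=0.002, exo_hazard=0.0005,
                     safety_mult=50.0, periods=200):
    """Expected utility when, each period, a fraction c_frac of capital is
    consumed, s_frac is spent on safety, and the remainder is invested at
    rate `growth`. Safety spending lowers the per-period extinction hazard.
    The finite horizon keeps the sum finite regardless of parameters."""
    survival = 1.0
    total = 0.0
    for _ in range(periods):
        consume = c_frac * capital
        safety = s_frac * capital
        capital = (capital - consume - safety) * growth
        # Hazard falls with safety spending but never below the exogenous floor.
        hazard = exo_hazard + base_hazard / (1.0 + safety_mult * safety)
        survival *= 1.0 - hazard
        total += survival * 2.0 * math.sqrt(consume)  # CRRA utility with eta = 0.5
    return total

print(expected_utility(0.05, 0.01))  # some consumption, some safety
print(expected_utility(0.05, 0.00))  # no safety spending
```

The infinity problem shows up if you extend `periods` toward infinity with `exo_hazard = 0`: for many parameter choices the sum diverges, and then different allocations can't be ranked.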

Comment by michaeldickens on Keynesian Altruism · 2020-09-14T16:36:36.214Z · EA · GW

1. The standard arguments for giving later don't hold up.
2. "Keynesian Altruism": It's better to give when the economy is weaker.

I believe these can be true or false independently. I want to expand a bit on the first claim.

You identify a lot of relevant concerns that I agree need to be addressed, and that often get ignored. I think that even after addressing them, giving later may still look better than giving now.

Are you familiar with the Ramsey equation (e.g., see this SEP entry)? The Ramsey equation states that, in an efficient market, r = δ + ηg, where r is the risk-free rate, δ is the rate of time preference, η is the rate of risk aversion, and g is the consumption (GDP) growth rate. The claim in RPTP Is a Strong Reason to Consider Giving Later is that most market actors use a value of δ that's too high, which pushes up interest rates, and therefore "patient" actors should prefer to invest. (Right now, it looks like r < g. I don't know how to explain this. I did a little bit of reading on the matter and my impression is that economists believe it shouldn't be true and it's a bit of a puzzle as to why it currently is, but there are some potential explanations.)

You point out that donors need to worry about taxes and expropriation. That basically means the effective δ is greater than zero. This is true for both altruists and non-altruists. But as long as most people have a pure time preference and altruists don't, altruists will have a lower δ than most people, and therefore will relatively favor investing. (I made an attempt to estimate the philanthropic discount rate here.)

Another thing you brought up is that most people don't invest exclusively in risk-free assets. The Ramsey equation does use the risk-free rate, but there's an extended version of the equation that allows for risk. The extended Ramsey equation (taken from here) is r = δ + ηg − η²σ²/2, where g follows a normal distribution with standard deviation σ, and r and g are perfectly correlated. When accounting for risk, the same basic theoretical argument holds: impatient actors will push up interest rates, making investing look more promising to patient actors.
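Plugging some illustrative numbers into the two forms of the equation (all values below are made up, not estimates):

```python
# Illustrative values only: delta = pure time preference, eta = risk
# aversion, g = mean consumption growth rate, sigma = its std dev.
delta, eta, g, sigma = 0.01, 1.5, 0.02, 0.02

r_basic = delta + eta * g                                # r = delta + eta*g
r_extended = delta + eta * g - 0.5 * eta**2 * sigma**2   # with the risk correction

# A "patient" actor (delta = 0) discounts the future at a lower rate, so
# at the market rate r_basic, investing looks relatively attractive to them.
r_patient = 0.0 + eta * g

print(f"impatient: {r_basic:.4f}, extended: {r_extended:.4f}, patient: {r_patient:.4f}")
```

With small σ the risk correction is tiny, so the basic argument carries over: the gap between market rates and a patient actor's discount rate is driven almost entirely by δ.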

Of course, there's a case to be made that this theoretical model doesn't hold up (e.g., current risk-free rates seem incompatible with a positive pure time preference). I haven't seriously studied economics but my impression is economists generally believe this is a good model.