Posts

Donor-Advised Funds vs. Taxable Accounts for Patient Donors 2020-10-19T20:38:23.801Z · score: 33 (14 votes)
The Risk of Concentrating Wealth in a Single Asset 2020-10-18T17:15:17.651Z · score: 42 (16 votes)
MichaelDickens's Shortform 2020-09-24T00:01:24.005Z · score: 7 (1 votes)
"Disappointing Futures" Might Be As Important As Existential Risks 2020-09-03T01:15:50.466Z · score: 58 (21 votes)
Giving Now vs. Later for Existential Risk: An Initial Approach 2020-08-29T01:04:34.488Z · score: 11 (3 votes)
Should We Prioritize Long-Term Existential Risk? 2020-08-20T02:23:43.393Z · score: 28 (14 votes)
The Importance of Unknown Existential Risks 2020-07-23T19:09:56.031Z · score: 65 (26 votes)
Estimating the Philanthropic Discount Rate 2020-07-03T16:58:54.771Z · score: 67 (24 votes)
How Much Leverage Should Altruists Use? 2020-01-07T04:25:31.492Z · score: 61 (21 votes)
How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? 2019-02-03T01:11:09.991Z · score: 27 (13 votes)
Should Global Poverty Donors Give Now or Later? An In-Depth Analysis 2019-01-22T04:45:56.500Z · score: 22 (7 votes)
Why Do Small Donors Give Now, But Large Donors Give Later? 2018-10-28T01:51:56.710Z · score: 11 (5 votes)
Where Some People Donated in 2017 2018-02-11T21:55:09.730Z · score: 18 (18 votes)
Where I Am Donating in 2016 2016-11-01T04:10:02.389Z · score: 17 (23 votes)
Dedicated Donors May Not Want to Sign the Giving What We Can Pledge 2016-10-30T03:26:44.215Z · score: 15 (19 votes)
Altruistic Organizations Should Consider Counterfactuals When Hiring 2016-09-11T04:19:39.164Z · score: 1 (7 votes)
Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering 2016-08-26T02:08:53.190Z · score: 22 (30 votes)
Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) 2016-06-10T21:35:50.236Z · score: 8 (8 votes)
A Complete Quantitative Model for Cause Selection 2016-05-18T02:17:28.769Z · score: 20 (24 votes)
Quantifying the Far Future Effects of Interventions 2016-05-18T02:15:07.240Z · score: 8 (8 votes)
GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics 2016-05-17T01:51:15.218Z · score: 26 (28 votes)
On Priors 2016-04-26T22:35:14.359Z · score: 9 (9 votes)
How Should a Large Donor Prioritize Cause Areas? 2016-04-25T20:46:38.304Z · score: 13 (13 votes)
Expected Value Estimates You Can (Maybe) Take Literally 2016-04-06T15:11:59.359Z · score: 19 (23 votes)
Are GiveWell Top Charities Too Speculative? 2015-12-21T04:05:07.675Z · score: 17 (20 votes)
More on REG's Room for More Funding 2015-11-16T17:31:40.493Z · score: 9 (11 votes)
Cause Selection Blogging Carnival Conclusion 2015-10-05T20:16:43.945Z · score: 7 (7 votes)
Charities I Would Like to See 2015-09-20T15:22:43.083Z · score: -5 (25 votes)
My Cause Selection: Michael Dickens 2015-09-15T23:29:40.701Z · score: 35 (30 votes)
On Values Spreading 2015-09-11T03:57:55.148Z · score: 22 (11 votes)
Some Writings on Cause Selection 2015-09-08T21:56:01.033Z · score: 4 (4 votes)
EA Blogging Carnival: My Cause Selection 2015-08-16T01:07:22.005Z · score: 11 (11 votes)
Why Effective Altruists Should Use a Robo-Advisor 2015-08-04T03:37:13.789Z · score: 10 (10 votes)
Stanford EA History and Lessons Learned 2015-07-02T03:36:56.688Z · score: 25 (25 votes)
How We Run Discussions at Stanford EA 2015-04-14T16:36:05.363Z · score: 15 (14 votes)
Meetup : Stanford THINK 2014-10-23T02:10:42.641Z · score: 1 (1 votes)

Comments

Comment by michaeldickens on seanrson's Shortform · 2020-10-27T21:41:00.901Z · score: 4 (2 votes) · EA · GW

You might try the East Bay EA/Rationality Housing Board.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T16:21:50.491Z · score: 2 (1 votes) · EA · GW

"labor has an opportunity cost of $3 million per year"

This seems really high. You could hire an experienced investment manager for a lot less than that. But the general structure of your analysis seems sound.

Another consideration is that you can probably reduce correlation with other altruists' investments (I wrote about this a bit here, and I'm currently writing something more detailed). Uncorrelated investments have much higher marginal utility of returns, at least until they become popular enough to represent a significant percentage of the altruistic portfolio. And leveraging uncorrelated investments looks particularly promising. So you could get more than a 1% excess certainty-equivalent return that way.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:17:33.679Z · score: 4 (2 votes) · EA · GW

Yeah, because adding leverage will increase taxes on dividends. My calculator correctly accounts for this, but I didn't account for it in my previous comment. But it doesn't lower the certainty-equivalent rate by much.

"Also, do you happen to know how effortful and feasible tax loss harvesting might be for leveraged portfolios in taxable accounts?"

It shouldn't be too hard, but I don't think you'd get much benefit from it. I'm not sure, though; I'm not too familiar with the mechanics of tax loss harvesting.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-26T17:12:49.057Z · score: 8 (2 votes) · EA · GW

5% is the geometric mean return; the Samuelson share formula uses the arithmetic mean in the numerator (see here). So the correct calculation is (5% + 0.16^2/2)/(0.16^2 * 1) = 2.45.
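
As a sanity check, here's that calculation in a few lines of Python (the risk-free rate is taken as zero, matching the formula above):

```python
geom_mean = 0.05                        # geometric mean return
sigma = 0.16                            # standard deviation of returns
rra = 1                                 # relative risk aversion coefficient
arith_mean = geom_mean + sigma**2 / 2   # geometric-to-arithmetic mean conversion
samuelson_share = arith_mean / (rra * sigma**2)
print(round(samuelson_share, 2))        # -> 2.45
```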

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-24T19:22:13.091Z · score: 2 (1 votes) · EA · GW

"Do you know if it is possible to give to an EA Fund from a DAF?"

That should definitely be possible.

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T21:22:31.866Z · score: 3 (2 votes) · EA · GW

We can estimate how valuable that would be by comparing the certainty-equivalent interest rates (I talked about this here).

Some quick analysis using leverage.py:

Historical long-run global equities have returned about 5% with a standard deviation of about 16% (source). Let's use that as a rough forward-looking estimate.

  • With a relative risk aversion (RRA) coefficient of 1 (= logarithmic utility), the certainty-equivalent interest rate of an un-leveraged portfolio is 5% (logarithmic utility doesn't care about standard deviation as long as geometric return is held constant). With optimal leverage (2.45:1), the certainty-equivalent rate is 7.7%. That means the ability to get leverage is as good as a guaranteed 2.7% extra return (= 7.7% - 5.0%).
  • With RRA=1.5, optimal leverage = 1.63:1, and the excess certainty-equivalent rate is 0.8%.
  • With RRA=2, optimal leverage = 1.22:1, and the excess certainty-equivalent rate = 0.14%.

I think altruistic RRA is probably somewhere around 1 to 1.5, so under these assumptions, the ability to use leverage is roughly as good as a guaranteed 1-3% extra return.
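
If you want to reproduce these numbers without leverage.py, here is a rough sketch under the same assumptions (isoelastic utility, log-normal returns, no borrowing costs). The certainty-equivalent formula is the standard mean-variance approximation, which may differ slightly from what leverage.py does:

```python
mu_geom, sigma = 0.05, 0.16        # global equities: 5% geometric return, 16% stdev
mu_arith = mu_geom + sigma**2 / 2  # arithmetic mean return

def certainty_equivalent(leverage, rra):
    # Certainty-equivalent (geometric) return of a leveraged portfolio
    # under CRRA utility with log-normal returns, ignoring borrowing costs.
    return leverage * mu_arith - rra * (leverage * sigma)**2 / 2

for rra in [1, 1.5, 2]:
    opt = mu_arith / (rra * sigma**2)  # optimal leverage (Samuelson share)
    excess = certainty_equivalent(opt, rra) - certainty_equivalent(1, rra)
    print(f"RRA={rra}: optimal leverage = {opt:.2f}, excess CE = {excess:.2%}")
# Prints approximately the figures above: 2.45 / 2.70%, 1.64 / 0.78%, 1.23 / 0.13%
# (differences in the last digit are rounding).
```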

(FWIW I think you can also get better return by tilting toward the value and momentum factors, so if you're willing to do that, that makes the ability to invest flexibly look relatively more important.)

Comment by michaeldickens on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T20:57:48.442Z · score: 2 (1 votes) · EA · GW

"I wonder if there's scope for circumventing this issue by setting up a registered charity that can take donations from a DAF and then forward on to wherever the donor desires. Even an existing charity like CEA could act as a middle man like this. Is this a completely silly idea or a promising one?"

I'm not sure about this, but I don't think charities are allowed to give money to for-profits.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-23T20:52:58.867Z · score: 4 (2 votes) · EA · GW

"My thinking is that donating during drawdowns might be particularly bad"

This is true, and the standard deviation fully captures the extent to which drawdowns are bad (assuming isoelastic utility and log-normal returns). Increasing the standard deviation is bad because doing so increases the probability of both very good and very bad outcomes, and bad outcomes are more bad than good outcomes are good.

"Is it actually the Sharpe ratio that should be maximized with isoelastic utility (assuming log-normal returns, was it?)?"

Yes, if you also assume that you can freely use leverage. The portfolio with the maximum Sharpe ratio allows for the highest expected return at a given standard deviation, or the lowest standard deviation at a given expected return.
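
A toy example with made-up numbers: lever the higher-Sharpe portfolio to any target standard deviation and it delivers the higher expected return.

```python
rf = 0.01                                             # assumed risk-free rate
portfolios = {"A": (0.05, 0.10), "B": (0.07, 0.20)}   # (expected return, stdev)
target_sigma = 0.15                                   # desired risk level

for name, (mu, sigma) in portfolios.items():
    sharpe = (mu - rf) / sigma
    leverage = target_sigma / sigma                   # scale to the target stdev
    levered_return = rf + leverage * (mu - rf)
    print(f"{name}: Sharpe = {sharpe:.2f}, return at 15% stdev = {levered_return:.1%}")
# A (Sharpe 0.40) beats B (Sharpe 0.30) at the same risk: 7.0% vs. 5.5%.
```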

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-22T18:18:41.289Z · score: 4 (2 votes) · EA · GW

Thank you, I appreciate the positive feedback, especially from someone as knowledgeable as you!

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:42:27.673Z · score: 4 (2 votes) · EA · GW

It's worth remembering that, especially today, there are hundreds of thousands to millions of other highly intelligent and resourceful people trying to do the exact same thing. So you need to have a reason to believe you can do a better job than they can.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-19T17:39:00.637Z · score: 4 (2 votes) · EA · GW

I don't think that's how DAFs work? I believe the DAF provider legally owns the money and can do anything it wants with it. You can ask the provider to donate the money to a different DAF that you created, but it has the right to refuse to do that.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:37:09.524Z · score: 7 (4 votes) · EA · GW

The answer sort of depends on what you mean by moonshot, but under one reasonable definition, it's actually the opposite: investing in potential moonshots would have resulted in worse performance than an index fund. Or to put it another way, boring companies tend to outperform exciting companies.

You can divide stocks into two types: growth stocks and value stocks. Value stocks are cheaply priced relative to their fundamentals (e.g., they have a low price-to-earnings or price-to-sales ratio) because the market expects these companies to be "boring" and not show good earnings growth. Growth stocks are priced expensively because the market expects them to grow. This sounds basically like what you're talking about with "moonshot" companies. If you wanted to systematically invest in moonshots, you could maybe buy the 10% most expensive stocks, because these are the ones the market believes have the most upside potential. But if you did that historically, you would've underperformed the market by a lot—something on the order of 5 percentage points per year. The seminal paper on this is Fama & French (1992), The Cross-Section of Expected Stock Returns.

In theory, savvy investors could identify the most promising publicly-traded growth companies and outperform the market by buying them. But studies on fund managers have found that pretty much nobody can do this.

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:12:51.173Z · score: 4 (2 votes) · EA · GW

IMO the ulcer index is the best measure of volatility that matches what people intuitively care about. It essentially measures the frequency and severity of drawdowns (the linked page explains it in more detail).

I didn't discuss the ulcer index in this post because in theory, investors with isoelastic utility should care about standard deviation, not drawdowns, and I lean toward the belief that people's focus on drawdowns is somewhat irrational (although probably somewhat justified by the fact that most asset returns are left-skewed). But broadly speaking, if you use the ulcer index as your measure of risk, concentrating in a small number of assets looks even worse than if you use standard deviation, so the case for diversification is even stronger.
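
For reference, here's a minimal sketch of the ulcer index as I understand it from the linked page (root-mean-square of percentage drawdowns from the running peak):

```python
import numpy as np

def ulcer_index(prices):
    # Percentage drawdown from the running peak, squared, averaged, square-rooted.
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)
    drawdown_pct = 100 * (prices - running_peak) / running_peak
    return np.sqrt(np.mean(drawdown_pct**2))

print(ulcer_index([100, 105, 110, 99, 104, 112]))   # dip and recovery -> ~4.65
print(ulcer_index([100, 102, 104, 106, 108, 110]))  # never below its peak -> 0.0
```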

Comment by michaeldickens on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:09:01.868Z · score: 5 (3 votes) · EA · GW

Even if you can beat the market by buying a basket of houses (which I'm not sure is true), buying a single house has probably 2-3x the risk of the broad real estate market and 3-4x the risk of the global market portfolio, assuming real estate works similarly to stocks (which is probably a reasonable assumption). So it still seems like a bad idea, for the reasons discussed in the essay.

It might make sense if a bunch of individual EAs buy real estate such that the overall portfolio is well-diversified. I don't expect this to happen in practice, because EAs are geographically concentrated in a small number of cities, so if people own investment properties in the cities where they live, the overall EA real estate portfolio will be too concentrated in those cities.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-16T03:40:13.597Z · score: 2 (1 votes) · EA · GW

One concern I have with the Community Foundation in Boulder is that it's not clear how committed they are to letting donors direct money however they want. Unlike the national DAF providers (Schwab, Vanguard, Fidelity), it seems like there's a decent chance they will at some point change their mind and decide you are only allowed to give to the causes that they like. How are you thinking about that risk?

Comment by michaeldickens on Linch's Shortform · 2020-10-15T03:27:43.611Z · score: 8 (2 votes) · EA · GW

I haven't really thought about it, but it seems to me that if an empirical claim implies an implausible normative claim, that lowers my subjective probability of the empirical claim.

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-15T03:24:54.794Z · score: 2 (1 votes) · EA · GW

Cool! Does that mean you're overweighting emerging markets?

Comment by michaeldickens on [Question] Pros/Cons of Donor-Advised Fund · 2020-10-12T22:42:53.874Z · score: 3 (2 votes) · EA · GW

How did you estimate the expected return in a DAF vs. unconstrained?

Comment by michaeldickens on Thomas Kwa's Shortform · 2020-10-11T18:41:00.324Z · score: 2 (1 votes) · EA · GW

Yes

Comment by michaeldickens on Thomas Kwa's Shortform · 2020-10-11T03:39:30.602Z · score: 2 (1 votes) · EA · GW

Probably the easiest way to do this is to give to a donor-advised fund, and then instruct the fund to give to the EA Fund. Even for charities that can accept stock, my experience has been that donating through a donor-advised fund is much easier (it requires less paperwork).

Comment by michaeldickens on niplav's Shortform · 2020-10-05T21:38:29.170Z · score: 10 (4 votes) · EA · GW

If there is a non-trivial possibility that a zero discount rate is correct, then the case with a zero discount rate dominates expected value calculations. See https://scholar.harvard.edu/files/weitzman/files/why_far-distant_future.pdf
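
A quick illustration of Weitzman's argument, with made-up numbers:

```python
import numpy as np

rates = np.array([0.0, 0.02, 0.05])  # possible discount rates (illustrative)
probs = np.array([0.1, 0.45, 0.45])  # subjective probabilities (illustrative)

for t in [10, 100, 1000]:
    expected_factor = np.sum(probs * np.exp(-rates * t))
    effective_rate = -np.log(expected_factor) / t
    print(f"t = {t}: effective discount rate = {effective_rate:.4f}")
# The effective rate falls toward the lowest possible rate (here 0) as t grows,
# so the zero-discount-rate scenario dominates long-horizon expected values.
```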

Comment by michaeldickens on Denise_Melchin's Shortform · 2020-10-02T20:13:01.750Z · score: 5 (3 votes) · EA · GW

IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.

Comment by michaeldickens on On GiveWell's estimates of the cost of saving a life · 2020-10-01T17:21:05.343Z · score: 7 (5 votes) · EA · GW

GiveWell's full cost-effectiveness calculations are available here: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models

Comment by michaeldickens on Nathan Young's Shortform · 2020-09-26T03:55:18.374Z · score: 6 (4 votes) · EA · GW

"Harris is a marmite figure - in my experience people love him or hate him."

My guess is people who like Sam Harris are disproportionately likely to be potentially interested in EA.

Comment by michaeldickens on Buck's Shortform · 2020-09-24T00:12:47.637Z · score: 10 (3 votes) · EA · GW

I pretty much agree with your OP. Regarding that post in particular, I am uncertain about whether it's a good or bad post. It's bad in the sense that its author doesn't seem to have a great grasp of longtermism, and the post basically doesn't move the conversation forward at all. It's good in the sense that it's engaging with an important question, and the author clearly put some effort into it. I don't know how to balance these considerations.

Comment by michaeldickens on MichaelDickens's Shortform · 2020-09-24T00:01:24.348Z · score: 14 (5 votes) · EA · GW

"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.

The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.

But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically with effort, or possibly even sub-logarithmically. If effort is growing exponentially, we'd expect to see linear or sub-linear growth in ideas. But instead we see exponential growth in ideas.
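
To make that intuition concrete, a toy model (my numbers, not the paper's): with logarithmic returns to effort and exponentially growing effort, idea output would grow only linearly.

```python
import math

growth_rate = 0.05  # assumed exponential growth rate of research effort
for t in [0, 20, 40, 60, 80]:
    effort = math.exp(growth_rate * t)
    ideas_log_model = math.log(1 + effort)  # logarithmic returns to effort
    print(f"t = {t:2d}: effort = {effort:8.1f}, ideas (log model) = {ideas_log_model:.2f}")
# Ideas under the log model grow roughly linearly (~0.05 * t for large t),
# whereas Bloom et al. find ideas growing exponentially, just slower than effort.
```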

I don't have a great understanding of the math used in this paper, so I might be misinterpreting something.

Comment by michaeldickens on antimonyanthony's Shortform · 2020-09-19T21:22:14.352Z · score: 1 (2 votes) · EA · GW

"This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony."

It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.

Comment by michaeldickens on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-18T21:16:52.043Z · score: 10 (6 votes) · EA · GW

Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty?

I think so, because the article includes some statements like,

"How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?"

and

"[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’."

Maybe instead of "make decisions under uncertainty", I should have said "make decisions that are informed by uncertain empirical forecasts".

Comment by michaeldickens on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-18T18:20:42.429Z · score: 49 (22 votes) · EA · GW

Agreed with Mathias that the authors have a good grasp of what EA is and what causes EAs prioritize, and I appreciate how respectful the article is. Also like Mathias, I feel like I have some pretty fundamental worldview differences from the authors, so I'm not sure how well I can explain my disagreements. But I'll try my best.

The article's criticism seems to focus on the notion that EA ignores power dynamics and doesn't address the root cause of problems. This is a pretty common criticism. I find it a bit confusing, and I don't really understand what the authors consider to be root causes. For example, efforts to create cheap plant-based or cultured meat seem to address the root cause of factory farming because, if successful, they will eliminate the need to farm and kill sentient animals. AI safety work, if successful, could eliminate the root causes of all suffering and bring about an unimaginably good utopia. But the authors don't seem to agree with me that these qualify as "addressing root causes". I don't understand how they distinguish between the EA work that I perceive as addressing root causes and the things they consider to be root causes. Critics like these authors seem to want EAs to do something that they're not doing, but I don't understand what it is.

"[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression."

It seems to me that if rich people come to terms with the origins of their wealth, they might conclude that they don't "deserve" it any more than poor people in Kenya, and decide to distribute the money to them (via GiveDirectly) instead of spending it on themselves. Isn't that ultimately the point? What outcome would the authors like to come out of this self-reflection, if not using their wealth to help disadvantaged people?

EAs spend more time than any other group I know talking about how they are among the richest people in the world, and they should use their wealth to help the less fortunate. But this doesn't seem to count in the authors' eyes.


This article argues that EAs fixate too much on "doing the most good", and then appears to argue that people should focus on addressing root causes/grassroots activism/power dynamics/etc. because doing so will do the most good—or maybe I'm misinterpreting the article because I'm seeing it from an EA lens. Sometimes it seems like the authors disagree with EAs about fundamental principles like maximizing good, and other times it seems like they just disagree about what does the most good. I wasn't clear on that.

If they do agree in principle that we should do as much good as possible, then I would like to see a more rigorous justification for why the authors' favored causes do more good than EA causes. I realize they're not as amenable to cost-effectiveness analysis as GiveWell's top charities, but I would like to see at least some attempt at a justification.

For example, many EAs prioritize existential risk. There's no rigorous cost-effectiveness analysis of x-risk, but you can at least make an argument that it's more cost-effective than other things:

  1. Extinction is way worse than anything else.
  2. Extinction is not that unlikely.
  3. We can probably make significant progress on reducing extinction risk.

Bostrom basically makes this argument in Existential Risk Prevention as Global Priority.

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.


More broadly, I would have an easier time understanding articles like these if they gave more concrete examples of what they consider to be the best things to work on, and why—something more specific than "grassroots activism". For example (not saying I think the authors believe this, just that this is the general sort of thing I'd like to see):

We should support community groups that organize meetups where they promote the idea of the fundamental unfairness of global wealth inequality. We believe that once sufficiently many people worldwide are paying attention to this problem, people will develop and move toward a new system of government that will redistribute wealth and provide basic services to everyone. We aren't sure what this government structure will look like, but we're confident that it's possible because [insert argument here]. We also believe this plan has a good chance of getting broad support because [insert argument here], and that once it has broad support, it has a good chance of actually getting implemented, because [insert argument here].

Comment by michaeldickens on Pablo Stafforini’s Forecasting System · 2020-09-16T23:41:33.612Z · score: 3 (2 votes) · EA · GW

Pablo, is any of your custom Emacs code publicly available?

Comment by michaeldickens on Pablo Stafforini’s Forecasting System · 2020-09-16T23:40:02.875Z · score: 5 (3 votes) · EA · GW

"Then I go back to Emacs, position the cursor anywhere on the Metaculus section, and press a shortcut. A whole new question is created as a to-do task."

This is an aside, but you can generate org-mode entries from templates from anywhere in Emacs using org capture—you don't have to position your cursor in the correct section. This is one of my favorite features of org mode.

Comment by michaeldickens on Formalizing longtermism · 2020-09-16T17:25:03.627Z · score: 9 (2 votes) · EA · GW

Do you have any notion as to the solution to this model (for some reasonable parameter values)? I've tried to solve models like this one and haven't succeeded, although I'm not good at differential equations.

It looks to me like it's unsolvable without some nonzero exogenous extinction risk, because otherwise there will be multiple parameter choices that result in infinite utility, so you can't say which one is best. But it's not clear what rate of exogenous x-risk to use, and our distribution over possible values might still result in infinite utility in expectation.

Perhaps you could simplify the model by leaving out the concept of improving technology, and just say you can either spend on safety, spend on consumption, or invest to grow your capital. That might make the model easier to solve, and I don't think it loses much explanatory power. (It would still have the infinity problem.)

Comment by michaeldickens on Keynesian Altruism · 2020-09-14T16:36:36.214Z · score: 11 (5 votes) · EA · GW

This article seems to be making two distinct claims:

  1. The standard arguments for giving later don't hold up.
  2. "Keynesian Altruism": It's better to give when the economy is weaker.

I believe these can be true or false independently. I want to expand a bit on the first claim.

You identify a lot of relevant concerns that I agree need to be addressed, and that often get ignored. I think that even after addressing them, giving later may still look better than giving now.

Are you familiar with the Ramsey equation (e.g., see this SEP entry)? The Ramsey equation states that, in an efficient market, r = δ + ηg, where r is the risk-free rate, δ is the rate of pure time preference, η is the rate of risk aversion, and g is the consumption (GDP) growth rate. The claim in RPTP Is a Strong Reason to Consider Giving Later is that most market actors use a value of δ that's too high, which pushes up interest rates, and therefore "patient" actors should prefer to invest. (Right now, it looks like r < g. I don't know how to explain this. I did a little bit of reading on the matter, and my impression is that economists believe it shouldn't be true; it's a bit of a puzzle why it currently is, but there are some potential explanations.)

You point out that donors need to worry about taxes and expropriation. That basically means δ > 0. This is true for both altruists and non-altruists. But as long as most people have a pure time preference and altruists don't, altruists will have a lower δ than most people, and therefore will relatively favor investing. (I made an attempt to estimate the philanthropic discount rate here.)

Another thing you brought up is that most people don't invest exclusively in risk-free assets. The Ramsey equation does use the risk-free rate, but there's an extended version of the equation that allows for risk. The extended Ramsey equation (taken from here) is r = δ + ηg − η²σ²/2, where g follows a normal distribution with standard deviation σ, and r and g are perfectly correlated. When accounting for risk, the same basic theoretical argument holds: impatient actors will push up interest rates, making investing look more promising to patient actors.
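
To make this concrete, here's a small sketch that plugs illustrative numbers (mine, not from your post or the sources above) into the simple equation and the risk-extended form as reconstructed above:

```python
delta, eta = 0.01, 1.5     # pure time preference and risk aversion (assumed)
g, sigma_g = 0.02, 0.02    # mean and stdev of consumption growth (assumed)

r_simple = delta + eta * g                                # r = delta + eta*g
r_extended = delta + eta * g - (eta**2 * sigma_g**2) / 2  # risk-extended form
print(f"simple Ramsey:   r = {r_simple:.4f}")    # 0.0400
print(f"extended Ramsey: r = {r_extended:.4f}")  # 0.0396
# A patient actor with delta = 0 would discount at only eta*g = 0.03, below
# the market rate, which is why investing looks attractive to patient actors.
```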

Of course, there's a case to be made that this theoretical model doesn't hold up (e.g., current risk-free rates seem incompatible with a positive pure time preference). I haven't seriously studied economics but my impression is economists generally believe this is a good model.

Comment by michaeldickens on Thoughts on patient philanthropy · 2020-09-09T22:52:07.713Z · score: 2 (1 votes) · EA · GW

Patient philanthropists might want to wait for hundreds or even thousands of years before deploying their capital. 30 years is nothing compared to the possible future of civilization.

Comment by michaeldickens on Giving Now vs. Later for Existential Risk: An Initial Approach · 2020-09-09T20:42:33.999Z · score: 2 (1 votes) · EA · GW

I have now updated the post.

Comment by michaeldickens on Thoughts on patient philanthropy · 2020-09-08T16:38:03.232Z · score: 3 (2 votes) · EA · GW

"Risk-free interest rates are currently very low. Therefore, patient philanthropy can only work with risky assets, such as stocks."

This isn't necessarily true. If you expect risk-free rates to increase in the future, then the long-term average interest rate could still be high enough to justify investing.

If impatient actors dominate the market, then the risk-free rate will always be high enough such that patient actors prefer to invest. This is true regardless of what the risk-free rate is currently. Although I don't know how to reconcile a positive pure time preference with the fact that real risk-free rates are currently negative or extremely low.

Comment by michaeldickens on Is shareholder activism worth it? · 2020-09-05T00:02:07.766Z · score: 4 (3 votes) · EA · GW

Good question! I haven't seriously thought about this issue, but this is my first impression:

  • Shareholder activism seems more likely to be effective than divestment.
  • There probably aren't any shareholder activism mutual funds/ETFs that focus on causes that EAs tend to prefer.
  • If you're wealthy enough to conduct shareholder activism on your own, then doing so might be a better idea than investing in index funds. It seems likely that you can persuade a company to do $1 of good by giving up less than $1 in risk-adjusted return. I suspect this is true mainly because companies are generally willing to make very large dollar-value changes on the basis of relatively small activist efforts.

Comment by michaeldickens on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-04T22:00:20.471Z · score: 3 (2 votes) · EA · GW

I interpreted it as the balance of happiness minus suffering.

Comment by michaeldickens on Giving Now vs. Later for Existential Risk: An Initial Approach · 2020-09-04T18:57:01.223Z · score: 4 (2 votes) · EA · GW

Correction: I originally wrote that, under the permanent benefits model, it is optimal to spend our whole budget at once. This is not true. I arrived at this result by making incorrect assumptions in my proof. Rather than proving that utility of spending is non-concave, I should have determined the concavity of the utility of spending at time t_0 minus the utility of spending at time t_1. This function is concave. I don't yet have a full picture of how this affects optimal behavior under this model (I will update the post once I do), but I wanted to point out the error as soon as possible.

Comment by michaeldickens on Please take a survey on the quality/impact of things I've written · 2020-09-02T17:26:36.297Z · score: 2 (1 votes) · EA · GW

Off-topic, but how was this comment posted on August 29 if the post wasn't published until September 1?

Comment by michaeldickens on GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics · 2020-09-02T14:55:17.525Z · score: 2 (1 votes) · EA · GW

I don't have a particularly good understanding of population ethics and I haven't read Broome (2005) yet, so I could be off base here. But it seems to me that when GiveWell recommends AMF as a top charity, this requires claiming that AMF is in principle comparable to other charities, which requires completeness (or, at least, completeness over the set of charities being compared).

I could also argue that rejecting completeness seems borderline nonsensical, but that's more complicated to argue, and I don't really have anything original to contribute on the subject.

Comment by michaeldickens on Forum update: New features (August 2020) · 2020-08-28T20:18:30.865Z · score: 11 (6 votes) · EA · GW

It seems weird to me that posts don't appear on the front page unless they're explicitly promoted by a moderator. I would prefer if my posts automatically appeared on the front page. What is the reason for this behavior? (Unless the new tag system will automatically put posts on the front page by default? I'm confused about this.)

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-26T15:32:29.941Z · score: 6 (3 votes) · EA · GW

The passage you quoted was just an example, I don't actually think we should use exponential discounting. The thesis of the essay can still be true when using a declining hazard rate.

If you accept Toby Ord's numbers of a 1/6 x-risk this century and a 1/6 x-risk in all future centuries, then it's almost certainly more cost-effective to reduce x-risk this century. But suppose we use different numbers. For example, say 10% chance this century and 90% chance in all future centuries. Also suppose short-term x-risk reduction efforts only help this century, while longtermist institutional reform helps in all future centuries. Under these conditions, it seems likely that marginal work on longtermist institutional reform is more cost-effective. (I don't actually think these conditions are very likely to be true.)

(Aside: Any assumption of fixed <100% chance of existential catastrophe runs into the problem that now the EV of the future is infinite. As far as I know, we haven't figured out any good way to compare infinite futures. So even though it's intuitively plausible, we don't know if we can actually say that an 89% chance of extinction is preferable to a 90% chance (maybe limit-discounted utilitarianism can say so). This is not to say we shouldn't assume a <100% chance, just that if we do so, we run into some serious unsolved problems.)

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-26T14:50:45.546Z · score: 6 (3 votes) · EA · GW

In addition to what Michael A. said, a 1 in 3 chance that cause A is more effective than cause B means that even though we should generally prefer cause B, there could be high value in doing more prioritization research on A vs. B, because it's not too unlikely that we decide A > B. So "The EA community generally underrates the significance of long-term x-risk reduction" could mean there's not enough work on considering the expected value of long-term x-risk reduction.

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-26T00:38:19.215Z · score: 2 (1 votes) · EA · GW

This pretty much captures what I was thinking.

Comment by michaeldickens on The case of the missing cause prioritisation research · 2020-08-24T20:18:05.637Z · score: 4 (2 votes) · EA · GW

I meant to distinguish between long-term efforts and reducing x-risk in the relatively near future (the second case on your list), sorry that was unclear.

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T21:33:42.955Z · score: 2 (1 votes) · EA · GW

Suppose you think efforts to reduce long-term risk are more effective than reducing short-term risk, but you don't know what to do. Then it makes more sense to invest rather than spending your money on the less effective cause, because future people will probably figure out what to do, and then they can spend your investment on the more effective cause.

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T16:37:55.426Z · score: 5 (3 votes) · EA · GW

It seems to me that this line of reasoning more favors investing rather than trying to reduce short-term x-risk. If we expect long-term x-risk reduction is more cost-effective but we don't know how to do it, then the best thing to do is to invest so that future generations can use our resources to reduce long-term x-risk once they figure it out.

Comment by michaeldickens on Should We Prioritize Long-Term Existential Risk? · 2020-08-20T16:35:45.170Z · score: 3 (2 votes) · EA · GW

Thanks for the questions!

  1. I don't have strong beliefs about what could reduce long-term x-risk. Longtermist institutional reform just seemed like the best idea I could think of.
  2. As I said in the essay, the lower the level of x-risk, the more valuable it is to reduce x-risk by a fixed proportion. The only way you can claim that reducing short-term x-risk matters more is by saying that it will become too intractable to reduce x-risk below a certain level, and that we will reach that level at some point in the future (if we survive long enough). I think this claim is plausible. But simply claiming that x-risk is currently high is not sufficient to prioritize reducing current x-risk over long-term x-risk, and in fact argues in the opposite direction.
  3. I mentioned this in my answer to #2—I think it's more likely that reducing x-risk by a fixed proportion becomes more difficult as x-risk gets lower. But others (e.g., Yew-Kwang Ng and Tom Sittler) have used the assumption that reducing x-risk by a fixed proportion has constant difficulty.

Comment by michaeldickens on What (other) posts are you planning on writing? · 2020-08-19T22:30:11.255Z · score: 23 (8 votes) · EA · GW

I have about 60 EA-related ideas right now. This list includes some of the most promising ones, broken down by category. I am interested in feedback on which ideas people like the best.

Plus signs indicate how well thought-out an idea is:

  • + = idea seems interesting, but I have no idea what to say about it
  • ++ = partially formed concept, but still a bit fuzzy
  • +++ = fully-formed concept, just need to figure out the details/actually do it

Fundamental problems

  • "Pascal's Bayesian Prior Mugging": Under "longtermist-friendly" priors, if a mugger asks for $5 in exchange for an unspecified reward, you should give the $5 ++
  • If causes differ astronomically in EV, then personal fit in career choice is unimportant ++
  • EAs should focus on fundamental problems that are only relevant to altruists (e.g., infinity ethics yes, explore/exploit no) +++
  • The case for prioritizing "philosophy of priors" ++
  • How quickly do forecasting estimates converge on reality? (use Metaculus API) +++

Investing for altruists

  • Alternate version of How Much Leverage Should Altruists Use? that assumes EMH +++
  • How risk-averse should altruists be (and how does it vary by cause)? +
  • Can patient philanthropists take advantage of investors' impatience? +

Giving now vs. later

  • Reverse-engineering the philanthropic discount rate from observed market rates +++
  • Optimal behavior in extended Ramsey model that allows spending on cash transfers or x-risk reduction +++
  • If giving later > now, what does that imply for talent vs. funding constraints? +
  • Is movement-building an expenditure or an investment? +
  • Fermi estimate of the cost-effectiveness of improving the EA spending rate +++
  • Prioritization research might need to happen now, not later ++

Long-term future

  • If technological growth linearly increases x-risk but logarithmically increases well-being, then we should stop growing at some point ++
  • Estimating P(existential catastrophe) from a list of near-catastrophes +++
  • Thoughts on doomsday argument +
  • Value of the future is dominated by worlds where we are wrong about the laws of physics ++
  • If x-risk reduction is permanent and people aren't longtermist, we should give later +++

Other

  • How should we expect future EA funding to look? +
  • Can we use prediction markets to enfranchise future generations? (Predict what future people will want, and then the government has to follow the predictions) +
  • Altruistic research might have increasing marginal utility ++
  • "Suspicious convergence" is not that suspicious because people seek out actions that look good across multiple assumptions +++