Posts

Prioritization when size matters: Value of information 2022-01-07T05:16:50.324Z
Prioritization when size matters 2022-01-05T22:11:24.999Z
Prioritization when size matters: Model 2021-12-17T16:16:47.390Z
Important ideas for prioritizing ambitious funding opportunities 2021-12-03T18:31:11.570Z
EA-Aligned Impact Investing: Mind Ease Case Study 2021-11-15T15:57:20.191Z
Seeking feedback on new EA-aligned economics paper 2021-10-21T21:19:30.097Z
Event-driven mission correlated investing and the 2020 US election 2021-06-14T15:06:43.364Z

Comments

Comment by jh on Estimating the Philanthropic Discount Rate · 2022-01-02T00:06:01.031Z · EA · GW

This is a nice post that touches on many important topics. One little note for future reference: I think the logic in the section 'Extended Ramsey model with estimated discount rate' isn't quite right. To start, it looks like the inequality is missing a factor of 'b' on the left-hand side. More importantly, the result here depends crucially on the context. The one used is log utility with initial wealth equal to 1. This leads to the large, negative values for small delta. It also makes cost-effectiveness become infinitely good as delta becomes small. All of this makes it much more difficult to think intuitively about the results. I think the more appropriate context is one with large initial wealth. The larger the initial wealth (and the larger the consumption each year), the less important delta becomes, relatively. For large initial wealth, it is probably correct to focus on improving 'b' (i.e. what the community does currently) over delta. My point here is not to argue either way, but simply that the details of the model matter - it's not clear that delta has to be super important.
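
To make the wealth-dependence concrete, here is a minimal numerical sketch (Python). The value function V(delta, w) = log(delta*w)/delta is my own stand-in for a Ramsey-style setup with log utility and wealth consumed at a constant rate delta - it is not necessarily the exact model from the post, just an illustration of why initial wealth drives the sensitivity to delta:

```python
import numpy as np

# Hypothetical stand-in value function: wealth w consumed at a constant rate
# delta forever, log utility, discounted at delta => V = log(delta * w) / delta.
# This is an illustration, not the post's exact model.
def V(delta, w):
    return np.log(delta * w) / delta

for w in [1.0, 1e6]:
    for delta in [0.001, 0.01, 0.05]:
        print(f"w={w:>9,.0f}  delta={delta:<6}  V={V(delta, w):>10.0f}")

# With w = 1, V explodes to large negative values as delta -> 0 (log(delta)
# dominates), so results are extremely sensitive to delta. With large w,
# log(delta * w) stays positive and the relative importance of delta shrinks.
```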

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-12-23T16:03:09.225Z · EA · GW

I'm still not sure I understand your point(s). The customers' payments were accounted for as a negligible (negative) contribution to the net impact per customer.

To put it another way: think of the highly anxious customers as each getting $100 in benefits from the App, plus 0.02 DALYs averted (for themselves) on top of this. The additional DALYs are discounted for the possibility that they could have used another App.

Say the App fee is $100. This means that, to unlock the additional DALYs, the users as a group will pay $400 million over 8 years.

The investor puts in their $1 million to increase the chances that the customers have the option to spend the $400m. In return they expect a percentage of the $400m (after operating costs, other investors' shares, and the founders' shares). But they are also having a counterfactual effect on the chance that the customers have/use this option.
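
To put rough numbers on that story (a back-of-the-envelope sketch; treating each $100 payment as one customer is my simplification, since the fee structure isn't spelled out here):

```python
# Back-of-the-envelope for the story above. Treating each $100 fee payment
# as one customer is a simplification; the actual fee structure could differ.
fee = 100                       # App fee per customer (USD)
total_payments = 400e6          # customers' payments over 8 years (USD)
dalys_per_customer = 0.02       # counterfactual-discounted DALYs averted each

customers = total_payments / fee              # ~4 million payments over 8 years
total_dalys = customers * dalys_per_customer  # ~80,000 DALYs if all unlocked
print(f"{customers:,.0f} payments, {total_dalys:,.0f} DALYs")

# The investor's $1m doesn't buy these DALYs directly; it buys an increase in
# the probability that this option exists for the customers at all.
```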

This is basically a scaled-up version of a simple story where the investor gives a girl called Alice a loan so she can get some therapy. The investor would still hope Alice repays them with interest. But they also believe that without their help to get started she would have been less likely to get help for herself. Should they have just paid for her therapy? Well, if she is a well-off, western iPhone user who comfortably buys lattes every day, then that's surely ineffective altruism. Unless she happens to be the investor's daughter or something, so that it makes sense for other reasons.

I think the message of this post isn't that compatible with general claims like "investing is doing good, but donating is doing more good". The message of the post is that specific impact investments can pass a high effectiveness bar (i.e. $50 / DALY). If the investor thinks most of their donation opportunities are around $50/DALY, then they should see Mind Ease as a nice way to add to their impact.

If their bar is $5/DALY (i.e. they see much more effective donation opportunities) then Mind Ease will be less attractive. It might not justify the cost of evaluating it and monitoring it. But for EAs who are investment experts the costs will be lower. So this is all less an exhortation for non-investor EAs to learn about investing, and more a way for investor EAs to add to their impact.

Overall, the point of the post is a meta-level argument that we can compare donation and investment funding opportunities in this way. But the results will vary from case to case.

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-12-20T06:05:04.237Z · EA · GW

Thanks for this comment and question, Paul.

It's absolutely true that the customers' wallets are potentially worth considering. An early reviewer of our analysis made a similar point. In the end we are fairly confident this turns out not to be a key consideration. The key reason is that mental health is generally found to be a service for which people's willingness to pay is far below its actual value (to them). Especially for the likely paying-customer markets (e.g. high-income-country iPhone users), the subscription costs were judged to be trivial compared to the changes in their mental health. This is why, if I remember correctly, this consideration didn't feature more prominently in Hauke's report (on the potential impacts on the customers). Since it didn't survive there, it also didn't make it into the investment report.

I'm not quite sure I understand the point about the customer donating to the BACO instead. That could definitely be a good thing. But it would mean an average customer with anxiety choosing to donate to a highly effective charity (presumably instead of buying the App). This seems unlikely. More importantly, it doesn't seem like something the investor can influence?...

In short, since the expected customers are reasonably well-off non-EAs, concerns about their wallets or donations didn't come into play.

Comment by jh on Important ideas for prioritizing ambitious funding opportunities · 2021-12-06T17:25:03.054Z · EA · GW

Thanks Alex.

On Angel Investing, in case you haven't seen it, there is this case study. But much more to discuss.

On Technology Deployment, are there any links you can share as examples of what you have in mind?

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-11-19T10:19:02.677Z · EA · GW

Hi Derek, hope you are doing well. Thank you for sharing your views on this analysis that you completed while you were at Rethink Priorities.

The difference between your estimates and Hauke's certainly made our work more interesting.

A few points that may be of general interest:

  • For both analysts we used 3 estimates: an 'optimistic guess', a 'best guess' and a 'pessimistic guess'.
  • For users from middle-income countries we doubled the impact estimates. Without reviewing our report/notes in detail, I don't recall the rationale for the specific value of this multiplier. The basic idea is that high-income countries are better-served, more competitive markets, so apps are more likely to find users with worse counterfactuals in middle-income countries.
  • The estimates were meant to be conditional on Mind Ease achieving some degree of success. We simply assumed the impact of failure scenarios is 0. Hauke's analysis seems to have made clearer use of this aspect. Not only is Hauke's reading of the literature more optimistic, but he is more optimistic about how much more effective a successful Mind Ease will be relative to the competition.
  • Indeed the values we used for Derek's analysis, for high-income countries, were all less than 0.01. We simplified the 3 estimates, taking a weighted average across the two types of countries, into the single value of 0.01 for Derek's analysis after rounding up (I think the true number may be more like 0.006). The calculations in the post use rounded values so that they are easier for a reader to follow. Nevertheless, the results are in line with our more detailed calculations in the original report.
  • Similar to this point about rounding, we simplified the explanation of the robustness tilt we applied. It wasn't just about Derek vs Hauke: it was also along the dimensions of the business analysis (e.g. success probabilities). We simplified the framing of the robustness tilt, both here and in a 'Fermi Estimate' section of the original report, because we believed it is conceptually clearer to only talk about the one dimension.
  • What would I suggest to someone who would like to penalize the estimate more or less for all the uncertainty? Adjust the impact return.
  • How can you adjust the impact return in a consistent way? Of course, to make analyses like this useful you would want to do them in a consistent fashion. There isn't a gold standard for how to control the strength of the robustness tilts we used. But you can think of the tilt we applied (in the original report) as like being told a coin is fair (50/50) and then assuming it is biased to 80% heads (if heads is the side you don't want). This is an expression of how different our tilted probability distribution was from the distribution in the base model (the effect on the impact estimate was more severe: 1 - 0.02/(0.25/2 + 0.01/2) ≈ 85%; reproduced numerically after this list). There is a way of assessing this "degree of coin-equivalent tilt" for any tilt of any model. So if you felt another startup had the same level of uncertainty as Mind Ease, you could tilt your model of it until you get the same degree of tilt. This would give you some consistency and not make the tilts based purely on analyst intuition (though of course there is basically no way to avoid some bias). If a much better way to consistently manage these tilts were developed, we would happily use it.
  • Overall, this analysis is just one example of how one might deal with all the things that make such assessments difficult including impact uncertainty, business uncertainty, and analyst disagreement. The key point really being a need to summarize all the uncertainty in a way that is useful to busy, non-technical decision makers who aren't going to look at the underlying distributions. We look forward to seeing how techniques in this regard evolve as more and more impact assessments are done and shared publicly.
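
As a footnote to the robustness-tilt bullet above, the 85% figure can be reproduced directly (a minimal sketch; my reading is that 0.25 and 0.01 are Hauke's and Derek's per-user DALY estimates, weighted equally in the base model):

```python
# Severity of the robustness tilt on the impact estimate.
hauke, derek = 0.25, 0.01        # the two analysts' per-user impact estimates
tilted = 0.02                    # per-user estimate after the robustness tilt

base = hauke / 2 + derek / 2     # equal-weight base-model average = 0.13
severity = 1 - tilted / base     # ~0.85, i.e. an ~85% haircut
print(f"base = {base:.2f}, severity = {severity:.0%}")
```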

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-11-17T18:54:18.873Z · EA · GW

Just to add that in the analysis we only assumed Mind Ease has an impact on 'subscribers'. This means paying users in high-income countries (and active/committed users in low/middle-income countries). We came across this pricing analysis while preparing our report. It has very little to do with impact, but it does a) highlight Brendon's point that Headspace/Calm are seen as meditation apps, and b) show that anxiety reduction looks to be among the highest willingness-to-pay / highest-value customer segments into which Headspace/Calm could expand (e.g. by relabeling their meditations as useful for anxiety). The pricing analysis doesn't even mention depression (which Mind Ease now addresses following the acquisition of Uplift). Perhaps because they realize it is a more severe mental health condition.

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-11-16T21:39:34.583Z · EA · GW

Just to add, for the record, that we released most of Hauke's work because it was a meta-analysis that we hope contributes to the public good. We haven't released either Hauke's or Derek's analyses of Mind Ease's proprietary data. Though, of course, their estimates and conclusions based on those analyses are discussed at a high level in the case study.

Comment by jh on EA-Aligned Impact Investing: Mind Ease Case Study · 2021-11-16T21:15:11.982Z · EA · GW

To add two points to Brendon's comment.

The 1,000,000 active users figure is cumulative over the 8 years. So, just for example, it would be sufficient for Mind Ease to attract 125,000 users a year, each year. Still very non-trivial, but not quite as high a bar as 1,000,000 MAU.

We were happy with the 25% chance of success, primarily because of the base rates Brendon mentioned. In addition, this can include the possibility that Mind Ease isn't commercially viable for reasons unconnected to its efficacy, so the IP could be spun out into a non-profit. We didn't put much weight on this, but it does seem like a possibility. I'm mentioning it mostly because it's an interesting consideration with impact investing that could be even more important in some cases.

Comment by jh on X-Risk, Anthropics, & Peter Thiel's Investment Thesis · 2021-10-27T11:36:35.688Z · EA · GW

Thought provoking post, thanks Jackson.

You humbly note that creating an 'EA investment synthesis' is above your pay grade. I would add that synthesizing EA investment ideas into a coherent framework is a collective effort that is above any single person's pay grade. I would also love to see more people from higher pay grades, both in EA and outside the community, making serious contributions to this set of issues - for example, top finance or economics researchers or related professionals. Finally, I'd say that any EA with an altruistic strategy that relates to money (i.e. isn't purely about direct work) has a stake in these issues and could benefit from further research on some of the topics you highlighted. So there's a lot to discuss and a lot of reasons to keep the discussion going.

Comment by jh on Seeking feedback on new EA-aligned economics paper · 2021-10-23T16:34:00.398Z · EA · GW

Yes, Watson and Holmes definitely discuss other approaches, which are more like explicitly considering alternative distributions. And I agree that the approach I've described has the benefit that it can uncover potentially unknown biases and work for quite complicated models/simulations. That's why I've found it useful to apply to my portfolio optimization with altruism paper (and to some practical work), along with common-sense exploration of alternative models/distributions.

Comment by jh on Seeking feedback on new EA-aligned economics paper · 2021-10-23T14:09:55.319Z · EA · GW

Great question and thanks for looking into this section. I've now added a bit on this to the next version of the paper I'll release.

[Watson and Holmes](https://projecteuclid.org/journals/statistical-science/volume-31/issue-4/Approximate-Models-and-Robust-Decisions/10.1214/16-STS592.full) investigate this issue :) 

They propose several heuristic methods that use simple rules or visualization to rule out $\psi$ values where the robust distribution becomes 'degenerate' (that is, puts an unreasonable amount of weight on a small set of scenarios). How to improve on these heuristics seems to be an open problem.

It seems to me that what seem like different techniques, like cross-validation, are ultimately trying to solve the same problem. If so, I wonder if the machine learning community has already found better techniques for 'setting $\psi$'?
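
In case it's useful, here is a minimal sketch of the kind of degeneracy check I have in mind (Python). The exponential-tilt form and the effective-sample-size rule of thumb are my assumptions for illustration, not a method prescribed by Watson and Holmes:

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(size=10_000)   # stand-in losses sampled under the base model

def effective_sample_size(psi):
    # Tilt the base samples toward high-loss scenarios; psi = 0 is the base model.
    w = np.exp(psi * losses)
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

for psi in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"psi = {psi}: ESS = {effective_sample_size(psi):,.0f} of {losses.size:,}")

# As psi grows, the tilted distribution concentrates on a handful of extreme
# scenarios and the ESS collapses - a simple numerical signal that the robust
# distribution has become 'degenerate' in the sense above.
```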

Comment by jh on Seeking feedback on new EA-aligned economics paper · 2021-10-22T15:26:14.345Z · EA · GW

Great points. You've inspired me to look at ways to put more emphasis on these ideas in the discussion section I have yet to add to the model paper.

One of the underlying goals of these papers is to develop a stream of the finance literature that examines and extends ideas from the EA community. I believe these ideas are valid and interesting enough to attract top research talent, and that there is plenty of additional work to do to flesh them out, so having more researchers working on these topics would be valuable.

In this context I see these papers as setting out a framework for further work. I could see a paper following from specifying E(log(EA wealth)) as the utility function and then examining the implications, exactly as you've outlined above. It would surely need something more to make it worth a whole academic paper (e.g. examining alternative utility functions, examining relevant empirical data, estimating the size of the altruistic benefits gained by optimizing for this utility versus following a naive/selfish portfolio strategy). I would be excited to see papers like this get written and excited to collaborate on making that happen.

Directly on the points in your comment, I'm curious to what extent you've seen these ideas being action-guiding in practice. E.g. are you aware of smaller donors setting up DAFs and taking much more risk than they otherwise would? (Tax considerations, by the way, are another important thing I've abstracted away in my current papers.) Are you aware of people specifically taking steps to reduce their correlations with other donors?

As in my papers, I'd split the implications you discussed above into buckets of risk aversion and mission correlation. If a smaller donor's utility depends on log(EA wealth), then of course it makes sense for them to have very little risk aversion with regard to their own wealth. But then they should have the mission-correlation effect of being averse to correlations with major donors. It seems reasonable to me to think of the major-donor portfolio as approximately a globally diversified portfolio, i.e. the market (perhaps with some overweights on FB, MSFT, BRK). Just intuitively, I'd say this means their aversion to market risk should be about equal to what it would be if they were selfish - which means we're back to square one of just defaulting to a normal portfolio. That is, the (mission-correlated) risk the altruist sees in most investments will be about equal to the (selfish) market risk most investors see. So their optimal portfolios will be about the same.

Of course, mission-correlated risk aversion could have different implications from normal risk aversion if it is easier to change the covariance of your portfolio with major donors than it is to change the variance of your portfolio. But that's my point in the above paragraph - the driver of both is going to be your market risk exposure. And quickly reviewing Michael's post, I'd say all the ideas he mentions are also plausibly good ideas for mainstream investors looking to optimize their portfolios. If this is the case, then we need something more to imply altruists should deviate from following standard, even if advanced, financial advice (e.g. Hauke's example of crypto could be such a special case, or other investments that are correlated with government policy shifts, or with technological shifts that change the altruistic opportunities that are available).

Interested to hear your thoughts on this. I would be particularly excited to see more EA research on a) the expected trajectories of effectiveness over time in different cause areas, and b) the amount of diminishing returns to money in each area. On a), I'd note Founders Pledge has done some good recent work on this with their Investing to Give and Climate research. It would be great to see more. On b), I think there is tons of thinking out there already, and it would be great if someone organized this collective wisdom to establish the current consensus views (e.g. 'global health has low diminishing returns', 'AI safety research has relatively high diminishing returns right now').

Comment by jh on Seeking feedback on new EA-aligned economics paper · 2021-10-21T23:05:00.782Z · EA · GW

Thanks Madhav. I'm a big fan of using simple language most of the time. In this case all of those words are pretty normal for my target audience.

Comment by jh on Event-driven mission correlated investing and the 2020 US election · 2021-09-08T13:23:55.072Z · EA · GW

@Neel Nanda. Quick update: I've now discussed this offline with a bunch of people who are considering potential strategies of this nature. It seems to me that 'mission-correlated investing' is a better umbrella term for these strategies that work with financial-mission correlations to enhance expected value. 'Mission hedging' strategies would be the subset of mission-correlated strategies that both increase expected value and reduce the variance of outcomes.

Comment by jh on Event-driven mission correlated investing and the 2020 US election · 2021-06-18T20:16:13.643Z · EA · GW

Thanks Sjir. Interesting thought to muse on.

Just quickly riffing on the example in this post, if you have a great business idea that will only work under one politician you might bet on them. Or if you think one politician will be good for your current job, but the other could make it optimal for you to retrain and change jobs, then bet on the other. Or if one will make you want to leave the country, then bet on them to help with your moving costs.

Comment by jh on Event-driven mission correlated investing and the 2020 US election · 2021-06-15T21:53:47.442Z · EA · GW

Great point and perhaps more interesting than you might have expected.

To repeat back what I think you meant: what I've called the mission hedging strategy for this case makes the two possible outcomes 15 vs 0, while for just donating the possible outcomes are 10 vs 1. So the variance of outcomes is actually higher - it's more like anti-hedging.

First, this depends on how happy you are about Biden vs Trump for other reasons. If a Biden win is worth +100 in utility to you and Trump -100, then the mission hedging outcomes are 115 & -100, whereas for simply donating the outcomes are 110 & -99. However, if a Biden win is a -100 for you and Trump +100, then the mission hedging outcomes are -85 & 100, whereas simply donating gets outcomes of -90 & 101. So, in the latter case, the spread between outcomes is actually lower for the 'mission hedging' strategy.
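
For anyone who wants to check the arithmetic, here are the four scenarios tabulated (the ±100 figures are the hypothetical 'other reasons' utilities above):

```python
# Payoffs from the example: (Biden outcome, Trump outcome) for each strategy.
strategies = {"mission hedging": (15, 0), "just donating": (10, 1)}

for other in (+100, -100):   # utility of a Biden win for non-climate reasons
    for name, (biden, trump) in strategies.items():
        outcomes = (biden + other, trump - other)
        spread = abs(outcomes[0] - outcomes[1])
        print(f"Biden worth {other:+}: {name:<15} outcomes = {outcomes}, spread = {spread}")
```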

Next, assuming you prefer a world with Biden as president, then this is absolutely correct: this strategy is the opposite of a hedge in the sense that it increases the variance of outcomes. So perhaps a better term would be 'mission leveraging', or mission anti-hedging.

Nevertheless, I'd propose that 'mission hedging' is useful as an umbrella term - one that captures both hedging and 'anti-hedging'. The broad category of 'mission hedging' may sometimes involve actual 'hedges' (such as investing in evil, as in Hauke's post), while other times the optimal strategy looks more like 'mission leveraging'.

The literature on these ideas seems to be at an early stage and small enough that if we agreed on another name then we could run with it. But the main reasons I'd propose 'Mission hedging' as an umbrella term are:

1. Practical precedent. This is just like how some hedge funds actually hedge while others make highly leveraged, unhedged bets (and these aren't mutually exclusive) - yet the umbrella term is 'hedge' funds.

2. Academic context. Mathematically, this class of strategies arises from second-order terms related to the covariance between financial returns and the marginal utility (of giving); see the decomposition spelled out after this list. Sometimes these terms push toward variance-reducing 'hedges'. Other times they push in favour of investments that are more like leverage. The direction depends on the correlation between returns and the background world state (e.g. for this post, how does Biden/Trump change your utility aside from climate?). Either way, these terms define a category of strategies that is about social-financial correlation, not just about first-order things like increasing returns or effectiveness. So it is nice to be able to refer to them with an umbrella term like mission hedging. The original 'mission hedging' paper by Roth-Tran uses this term.

3. Existing usage within EA. Last but not least, the distinction between hedging and anti-hedging doesn't seem to be made in most existing EA usage (this doesn't mean the distinction shouldn't be made; it's just not currently made). For example, I would say that Holden's recent public discussion of mission hedging didn't involve a claim about whether transformative AI happening sooner is inherently good or bad for the world, just that Open Philanthropy would like to have more money sooner in worlds where this occurs. Whether investing in AI as a 'mission hedge' is truly a 'hedge' or an 'anti-hedge' depends on whether transformative AI happening sooner is bad or good.
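
To spell out the second-order term from point 2 (a standard covariance identity in generic notation, not a formula taken from Roth-Tran's paper): with investment return $R$, wealth available for giving $W$, and marginal utility of giving $u'(W)$,

$$\mathbb{E}\left[R\,u'(W)\right] = \mathbb{E}[R]\,\mathbb{E}\left[u'(W)\right] + \operatorname{Cov}\left(R,\,u'(W)\right).$$

The first term is the ordinary expected-return motive. The covariance term is what defines this class of strategies: when it is positive, the investment pays off in the states where giving matters most and acts like a hedge; when it is negative, the strategy is closer to the 'anti-hedging' / mission-leveraging case above.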

So, I think the point about 'anti-hedging' is really interesting, at least philosophically. And if someone has an idea for a better umbrella term than 'mission hedging', then I'd be happy to use it. But, for the reasons above, I think it may be here to stay.

Comment by jh on Event-driven mission correlated investing and the 2020 US election · 2021-06-15T10:53:32.777Z · EA · GW

Thank you jackva. Great points on this specific example.

In general, suppose we didn't think this was a special moment. Then essentially this means we think 'investing to give' also presents a good opportunity. If 'investing to give' is also 10x CCF under Trump, then indeed you would want to just wait and either give under Biden or invest to give. But if 'investing to give' is only 5x CCF, then we're in the scenario I discussed under 'More general context'. So, fair point - I have added a sentence to the main post to explicitly rule out 'investing to give' as a consideration.

I'd be most interested to see people's objections conditional on accepting a scenario where mission hedging seems like a valuable opportunity, like the one I have tried to illustrate around last year's election. Are there more fundamental intuitions for why you would not pursue such a strategy?