How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?

2019-02-03T01:11:09.991Z · score: 27 (13 votes)

Should Global Poverty Donors Give Now or Later? An In-Depth Analysis

2019-01-22T04:45:56.500Z · score: 22 (7 votes)

Why Do Small Donors Give Now, But Large Donors Give Later?

2018-10-28T01:51:56.710Z · score: 11 (5 votes)
Comment by michaeldickens on RPTP Is a Strong Reason to Consider Giving Later · 2018-10-06T22:18:39.361Z · score: 4 (3 votes) · EA · GW

I'm not clear on how RPTP fits into a general understanding of financial returns on investment. Clearly your RPTP matters, and if you have a lower RPTP than most people, that makes investing look relatively better for you. But why don't, say, financial advisors ever talk about this? Advisors largely make investment recommendations based on clients' risk tolerance, which is unrelated to RPTP.

Comment by michaeldickens on RPTP Is a Strong Reason to Consider Giving Later · 2018-10-06T17:17:50.164Z · score: 3 (2 votes) · EA · GW

I don't have a direct source on the argument that you said Elie Hassenfeld made, but I do have a quote from Scott Alexander (http://slatestarcodex.com/2013/04/05/investment-and-inefficient-charity/) who went to a live event in which Elie made this argument:

[I]n the 1960s, the most cost-effective charity was childhood vaccinations, but now so many people have donated to this cause that 80% of children are vaccinated and the remainder are unreachable for really good reasons (like they’re in violent tribal areas of Afghanistan or something) and not just because no one wants to pay for them. In the 1960s, iodizing salt might have been the highest-utility intervention, but now most of the low-iodine areas have been identified and corrected. While there is still much to be done, we have run out of interventions quite as easy and cost-effective as those. And one day, God willing, we will end malaria and maybe we will never see a charity as effective as the Against Malaria [Foundation] again.
Comment by michaeldickens on Additional plans for the new EA Forum · 2018-09-11T02:52:34.879Z · score: 5 (5 votes) · EA · GW

Another feature that could help people find old posts is to display a few random old posts on a sidebar. For example, on any of Jeff Kaufman's blog posts, five old posts display on the sidebar. I've found lots of interesting old posts on Jeff's blog via this feature.

Comment by michaeldickens on EA Forum 2.0 Initial Announcement · 2018-07-24T01:24:00.280Z · score: 0 (2 votes) · EA · GW

I think there's another downside there: we should be wary of implementing a system that doesn't have a track record. There are lots of forums that don't have voting, and reddit-style voting has a long track record as well (plus Hacker News-style, which is similar but not quite the same as reddit-style). As you start introducing extra complexity, you don't know what's going to happen. Most possible designs are bad, and most designs we come up with a priori will probably be bad, so my inclination would be to stick close to a system that has a proven track record.

That said, having multiple types of upvotes could look something like Facebook, which now has multiple types of likes, and we have at least some idea of what that would look like. So it might be a good idea.

Comment by michaeldickens on EA Forum 2.0 Initial Announcement · 2018-07-23T04:57:43.249Z · score: 7 (11 votes) · EA · GW

I'm concerned with the plans to make voting/karma more significant; I would prefer to make them less significant than the status quo rather than more. Voting allows everyone's biases to influence discussion in bad ways. For example, people's votes tend to favor:

  1. things they agree with over things they disagree with, which makes it harder to voice dissenting opinions
  2. entertaining content over important but less-entertaining content
  3. agreeable content without much substance over niche or disagreeable content with lots of substance
  4. posts that raise easy questions and give strong answers over posts that raise hard questions and give weak answers

Sorting the front page by votes, and giving high-karma users more voting power, only does more to incentivize bad habits. I think the current voting system is more suited to something like reddit, which is meant for entertainment, so it's reasonable for the most popular posts to appear first. If the idea is to have "all of EA’s top researchers posting and commenting regularly", I don't think votes should be such a strong driver of the UX.

About a year ago I essentially stopped making top-level posts on the EA Forum because the voting system bothers me too much, and the proposed change sounds even worse. Maybe I'm an outlier, but I'd prefer a system that more closely resembled a traditional forum without voting where all posts have equal status. That's probably not optimal and it has its own problems (the most obvious being that low-quality content doesn't get filtered out), but I'd prefer it to the current or proposed system.

Comment by michaeldickens on How to improve EA Funds · 2018-04-19T02:18:37.852Z · score: 0 (2 votes) · EA · GW

Almost all typical assets--bonds, stocks, commodities--are highly liquid, in the sense that if you decide to sell them, you can convert them into cash in a few minutes at most. So even a well-diversified portfolio can still be liquid. The main exceptions are real estate and private equity, but I see no reason why EA Funds would need to hold those.

Comment by michaeldickens on Where Some People Donated in 2017 · 2018-02-15T03:14:23.130Z · score: 1 (1 votes) · EA · GW

I don't know, the link to Zvi's writeup works for me. But here is the URL: https://thezvi.wordpress.com/2017/12/17/i-vouch-for-miri/

Where Some People Donated in 2017

2018-02-11T21:55:09.730Z · score: 18 (18 votes)
Comment by michaeldickens on Four Organizations EAs Should Fully Fund for 2018 · 2017-12-12T17:21:19.882Z · score: 12 (12 votes) · EA · GW

I haven't yet gotten around to writing up where I plan on donating in 2018 (I already maxed out my 2017 donations in February), but I've been thinking along the same lines. Recently I've been leaning toward donating to these smaller, riskier organizations because I see a lot of value in helping new orgs grow and learning what they can accomplish--especially because the established charities that I like best have gotten a lot of funding recently and have room to scale up before they start to hit the limits of their funding.

Comment by michaeldickens on Discussion: Adding New Funds to EA Funds · 2017-06-03T06:46:47.334Z · score: 4 (6 votes) · EA · GW

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Having all possible combinations along just these axes would require 16 funds, so in practice this won't work exactly as I've described.
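To make the combinatorics concrete, here is a quick sketch (purely illustrative; the axis labels are just the four listed above) that enumerates the 2^4 = 16 combinations:

```python
# Enumerating every combination of the four binary axes above: 2**4 = 16 funds.
from itertools import product

axes = [
    ("life-improving", "life-saving"),
    ("safe bets", "moonshots"),
    ("suffering-focused", "classical"),
    ("short-term", "far future"),
]

combos = list(product(*axes))
print(len(combos))  # 16
for combo in combos:
    print(" / ".join(combo))
```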

Comment by michaeldickens on Discussion: Adding New Funds to EA Funds · 2017-06-03T06:39:32.901Z · score: 2 (4 votes) · EA · GW

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

Comment by michaeldickens on Expected value estimates we (cautiously) took literally - Oxford Prioritisation Project · 2017-05-30T14:42:42.202Z · score: 0 (0 votes) · EA · GW

Suppose it's 10 years in the future, and we can look back at what ACE and MIRI have been doing for the past 10 years. We now know some new useful information, such as:

  • Has ACE produced research that influenced our understanding of effective charities?
  • Has MIRI published new research that moved us closer to making AI safe?
  • Has ACE moved more money to top animal charities?

But even then, we still don't know nearly as much as we'd like. We don't know if ACE really moved money, or if that money would have been donated to animal charities anyway. Maybe MIRI took funding away from other research avenues that would have been more fruitful. We still have no idea how (dis)valuable the far future will be.

Comment by michaeldickens on Expected value estimates we (cautiously) took literally - Oxford Prioritisation Project · 2017-05-29T23:34:34.906Z · score: 1 (1 votes) · EA · GW

I'm still undecided on the question of whether quantitative models can actually work better than qualitative analysis. (Indeed, how can you ever know which works better?) But very few people actually use serious quantitative models to make decisions--even if quantitative models ultimately don't work as well as well-organized qualitative analysis, they're still underrepresented--so I'm happy to see more work in this area.

Some suggestions on ways to improve the model:

Account for missing components

Quantitative models are hard, and it's impossible to construct a model that accounts for everything you care about. I think it's a good idea to consider which parts of reality you expect to matter most for the impact of a particular thing, and try to model those. Whatever your model is missing, try to figure out which parts of that matter most. You might decide that some things are too hard to model, in which case you should consider how those hard-to-model bits will likely affect the outcome and adjust your decision accordingly.

Examples of major things left out:

  • The 80K model only considers impact in terms of new donations to GWWC, based on 80K's own numbers. It would be better to account for how many people 80K moves into or away from different cause areas, using your own effectiveness estimates for those causes.
  • The ACE model only looks at the value of moving money among top charities. My own model includes money moved among top charities, plus new money moved to top charities, plus the value of new research that ACE funds.

Sensitivity analysis

The particular ordering you found (80K > MIRI > ACE > StrongMinds) depends heavily on certain input parameters. For example, for your MIRI model, "expected value of the far future" is doing tons of work. It assumes that the far future contains about 10^17 person-years; I don't see any justification given. What if it's actually 10^11? Or 10^50? This hugely changes the outcome. You should do some sensitivity analysis to see which inputs matter the most. If any one input matters too much, break it down into less sensitive inputs.
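A minimal sketch of what a one-at-a-time sensitivity check could look like, using a made-up stand-in for the MIRI model and placeholder numbers (none of these figures come from the actual spreadsheet):

```python
# Toy one-at-a-time sensitivity analysis: hold every input at its base value,
# sweep one input across plausible orders of magnitude, and see how much the
# output moves. All numbers below are illustrative placeholders.

def toy_cost_effectiveness(far_future_person_years, p_averts_catastrophe, budget):
    """Illustrative stand-in for a MIRI-style model: value per dollar."""
    return far_future_person_years * p_averts_catastrophe / budget

base = {
    "far_future_person_years": 1e17,
    "p_averts_catastrophe": 1e-7,
    "budget": 2e6,
}

sweeps = {
    "far_future_person_years": [1e11, 1e14, 1e17, 1e20, 1e50],
    "p_averts_catastrophe": [1e-9, 1e-8, 1e-7, 1e-6, 1e-5],
}

for name, values in sweeps.items():
    outputs = [toy_cost_effectiveness(**dict(base, **{name: v})) for v in values]
    spread = max(outputs) / min(outputs)
    print(f"{name}: output spans a factor of {spread:.0e} across plausible values")
```

Inputs whose sweep produces the largest spread are the ones doing the most work and the ones most worth breaking down further.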

Comment by michaeldickens on Update on Effective Altruism Funds · 2017-04-28T01:10:04.289Z · score: 1 (1 votes) · EA · GW

Alternatively, you could have global poverty and animal welfare funds that are unmanaged and just direct money to GiveWell/ACE top charities (or maybe have some light management to determine how to split funds among the top charities).

Comment by michaeldickens on Update on Effective Altruism Funds · 2017-04-24T03:11:23.452Z · score: 4 (4 votes) · EA · GW

There's no shortage of bad ventures in the Valley

Every time in the past week or so that I've seen someone talk about a bad venture, they've given the same example. That suggests that there is indeed a shortage of bad ventures--or at least, ventures bad enough to get widespread attention for how bad they are. (Most ventures are "bad" in a trivial sense because most of them fail, but many failed ideas looked like good ideas ex ante.)

Comment by michaeldickens on Update on Effective Altruism Funds · 2017-04-24T03:09:17.240Z · score: 4 (8 votes) · EA · GW

Not sure if this is the right place to say this, but on effectivealtruism.org where it links to "Donate Effectively," I think it would make more sense to link to GiveWell and ACE ahead of the EA Funds, because GiveWell and ACE are more established and time-tested ways of making good donations in global poverty and animal welfare.

(The downside is this adds complexity because now you're linking to two types of things instead of one type of thing, but I would feel much better about CEA endorsing GiveWell/ACE as the default way to give rather than its own funds, which are controlled by a single person and don't have the same requirement (or ability!) to be transparent.)

Comment by michaeldickens on Selecting investments based on covariance with the value of charities · 2017-02-04T18:09:32.116Z · score: 7 (9 votes) · EA · GW

I'm glad you're thinking about this. Investing is an important issue and I believe there's room for more discussion of the topic.

[I]t is commonly accepted by now that altruists should generally be less financially risk averse than other people. This implies that we shouldn't worry too much about diversification, but only about expected value.

False. By diversifying, you can decrease your risk at any given level of return, which also means you can increase your return at any given level of risk. (These are dual optimization problems.) You should also be concerned about correlation with other altruistic investors, and most investors put way too much money in their home country (which, for most EAs, means the US and UK).
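To see why, consider the textbook mean-variance case (standard material, not specific to this thread): for n uncorrelated assets with identical expected return and variance σ², an equal-weighted portfolio has

$$\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} R_i\right) = \frac{\sigma^2}{n},$$

with the expected return unchanged--so diversification lowers risk at a fixed level of return.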

I don't know that you are claiming this, but you sort of imply it, so to be clear: you should not believe that US stocks have higher expected returns than any other country. If anything, you should believe that the US market will perform worse than most other countries because it's substantially more expensive. Right now the US has a CAPE ratio of 26, versus 21 for non-US developed markets and 14 for emerging markets. CAPE ratio strongly predicts 10-year future market returns.
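For reference, the CAPE (Shiller) ratio is defined as

$$\text{CAPE} = \frac{\text{current market price}}{\text{10-year average of inflation-adjusted earnings}},$$

so a market at a CAPE of 26 trades at nearly twice the earnings multiple of one at 14.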

On the covariance-with-charities issue: I'm doubtful that this consideration matters enough to substantially change how you should invest. If your investments can perform 2 percentage points better by investing in emerging markets rather than developed markets (which they probably can), I would expect this to outweigh any benefits from increased covariance. I would need to see some sort of quantitative analysis to be convinced otherwise.

I'm also not convinced that we should actually want to increase covariance rather than decreasing it. By increasing covariance you increase expected value by expanding the tails, but I don't believe we should be risk-neutral at a global scale because marginal money put into helping the world has diminishing utility.
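That last point is just Jensen's inequality: if u is the (concave, diminishing-returns) utility the world gets from the resources available for doing good, then

$$\mathbb{E}[u(X)] \le u(\mathbb{E}[X]),$$

so a mean-preserving spread of X--widening the tails at a fixed expected value--lowers expected utility.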

Similar concerns apply to investing in companies that are correlated with AI development. AI companies tend to be growth stocks, which underperform the market in the long run compared to value stocks.

Comment by michaeldickens on Why donate to 80,000 Hours · 2016-12-24T20:38:08.720Z · score: 13 (15 votes) · EA · GW

I'm glad that you write this sort of thing. 80K is one of the few organizations that I see writing "why you should donate to us" articles. I believe more organizations should do this because they generally know more about their own accomplishments than anyone else. I wouldn't take an organization's arguments as seriously as a third party's because they're necessarily biased toward themselves, but they can still provide a useful service to potential donors by presenting the strongest arguments in favor of donating to them.

I have written before about why I'm not convinced that I should donate to 80K (see the comments on the linked comment thread). I have essentially the same concerns that I did then. Since you're giving more elaborate arguments than before, I can respond in more detail about why I'm still not convinced.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

Below I give a few of the concerns I have with 80K, and what could convince me to donate.

Highly indirect impact. A lot of 80K's claims to impact rely on long causal chains, so your actual effect is pretty indirect. For example, the claim that an IASPC is worth £7500 via getting people to sign the GWWC pledge relies on assuming:

  • These people would not have signed the pledge without 80K.
  • These people would not have done something similarly or more valuable otherwise.
  • The GWWC pledge is as valuable as GWWC claims it is.

I haven't seen compelling evidence that any of these is true, and they all have to be true for 80K to have the impact here that it claims to have.

Problems with counterfactuals.

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats. The only way these can both be good is if 80K is moving people toward their comparative advantages, which is a much harder claim to justify. I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

Questionable marketing tactics.

This is somewhat less of a concern, but I might as well bring it up here. 80K uses very aggressive marketing tactics (invasive browser popups, repeated asks to sign up for things, frequent emails) that I find abrasive. 80K justifies these by claiming that it increases sign-ups, and I'm sure it does, but these metrics don't account for the cost of turning people off.

By comparison, GiveWell does essentially no marketing but has still attracted more attention than any other EA organization, and it has among the best reputations of any EA org. It attracts donors by producing great content rather than by cajoling people to subscribe to its newsletter. For most orgs I don't believe this would work because most orgs just aren't capable of producing valuable content, but like GiveWell, 80K produces plenty of good content.

Perhaps 80K's current marketing tactics are a good idea on balance, but we have no way of knowing. 80K's metrics can only observe the value its marketing produces and not the value it destroys. It may be possible to get better evidence on this; I haven't really thought about it.

Past vs. future impact.

80K has made a bunch of claims about its historical impact. I'm skeptical that the impact has been as big as 80K claims, but I'm also skeptical that the impact will continue to be as big. For example, 80K claims substantial credit for about a half dozen new organizations. Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past? 80K's writeup claims that it will but doesn't give much justification. Similarly, 80K claims that a lot of benefit comes from its articles, but writing new articles has diminishing utility as you start to cover the most important ideas.


In summary, to persuade me to donate to 80K, you need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and that it has higher leverage than any other meta org. More importantly, you need to find strong evidence that 80K actually has the impact it claims to have, or better demonstrate that the existing evidence is sufficient.

Comment by michaeldickens on Should donors make commitments about future donations? · 2016-12-23T04:31:09.891Z · score: 2 (2 votes) · EA · GW

I am not donating any money this year, but I did promise GFI that I would donate $25,000 to it early next year. I discussed this with GFI and we agreed that this was about as good as donating the money immediately.

Comment by michaeldickens on 2016 AI Risk Literature Review and Charity Comparison · 2016-12-19T00:51:52.386Z · score: 2 (4 votes) · EA · GW

This article is long enough that it would be helpful to put a table of contents at the top.

Comment by michaeldickens on A Different Take on President Trump · 2016-12-10T05:07:05.183Z · score: 5 (5 votes) · EA · GW

I don't believe people should vote on posts based on whether they believe the posts produce net benefit or net harm. That's what a naive utilitarian approach would suggest, but I don't think we should take a naive utilitarian approach. Instead we should vote based on how meaningfully the post contributes, even if we believe the conclusion is wrong.

I disagree with your claim that we should censor "bad" opinions and I believe this sort of behavior damages healthy discourse in the long run. I'm not downvoting your comment because that would go against my beliefs about how people ought to vote on things. Actually I'm upvoting it because you're saying something relatively novel and it made me think about things in a way I hadn't before.

I do think that we need to push for more intellectual diversity in the EA movement, but there are much better ways to do this than entertain this sort of discussion.

I'd be interested in knowing what ways you think would be better.

Comment by michaeldickens on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2016-12-09T16:04:09.084Z · score: 4 (4 votes) · EA · GW

This is enough to make me discount its value by perhaps one-to-two orders of magnitude.

So you'd put the probability of CEV working at between 90 and 99 percent? 90% seems plausible to me if a little high; 99% seems way too high.
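Spelling out the arithmetic behind that range: a discount factor d corresponds to a probability that CEV handles the problem of

$$p = 1 - \frac{1}{d}, \qquad d = 10 \;\Rightarrow\; p = 0.9, \qquad d = 100 \;\Rightarrow\; p = 0.99.$$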

But I have to give you a lot of credit for saying "the possibility of CEV discounts how valuable this is" instead of "this doesn't matter because CEV will solve it"; many people say the latter, implicitly assuming that CEV has a near-100% probability of working.

Comment by michaeldickens on A Different Take on President Trump · 2016-12-09T15:43:37.534Z · score: 7 (7 votes) · EA · GW

I'm sorry you're getting downvoted--I'm glad that you're providing a different perspective from the usual political opinions we see on the EA Forum.

Comment by michaeldickens on A Different Take on President Trump · 2016-12-09T15:38:31.286Z · score: 7 (7 votes) · EA · GW

The concerns about US/Russian relations appear particularly important, and they're something that most people seem to overlook. It's plausible to me that a Trump administration has lower risk of causing an extinction-level event than a Clinton administration, and I've never heard a compelling argument for why other concerns matter more.

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-12-08T22:04:36.229Z · score: 2 (2 votes) · EA · GW

Nick Beckstead and I have agreed to bet $1000 at even odds on the proposition

By the end of 2021, at least one restaurant regularly serves cultured animal tissue for human consumption.

Comment by michaeldickens on Donor lotteries: demonstration and FAQ · 2016-12-08T03:41:26.063Z · score: 1 (1 votes) · EA · GW

Going short on an asset has the same variance as going long, but with opposite expected value (actually slightly lower because of borrowing costs).
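In symbols, if R is the asset's return and c the (constant) borrowing cost, the short position returns −R − c, so

$$\operatorname{Var}(-R - c) = \operatorname{Var}(R), \qquad \mathbb{E}[-R - c] = -\,\mathbb{E}[R] - c.$$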

Comment by michaeldickens on CEA is Fundraising! (Winter 2016) · 2016-12-07T15:25:31.279Z · score: 5 (5 votes) · EA · GW

1-2 posts per year seems arguably reasonable; one post per month (as CEA has been doing) is excessive.

Comment by michaeldickens on CEA is Fundraising! (Winter 2016) · 2016-12-07T05:10:38.335Z · score: 1 (17 votes) · EA · GW

I don't believe organizations should post fundraising documents to the EA Forum. As a quick heuristic, if all EA orgs did this, the forum would be flooded with posts like this one and it would pretty much kill the value of the forum.

It's already the case that a significant fraction of recent content is CEA or CEA-associated organizations talking about their own activities, which I don't particularly want to see on the EA Forum. I'm sure some other people will disagree but I wanted to contribute my opinion so you're aware that some people dislike these sorts of posts.

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-07T03:24:12.644Z · score: 1 (1 votes) · EA · GW

The time-relative interest view is a type of person-affecting view, so if PAV breaks transitivity or independence of irrelevant alternatives then so does TRIV.

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-07T03:22:21.390Z · score: 3 (3 votes) · EA · GW

I've spent 2-3 hours going over GiveWell's cost-effectiveness spreadsheet, so don't expect to understand it immediately. GiveWell has a video explaining how the 2015 spreadsheet works. I haven't much looked at the 2016 spreadsheet but it looks a lot better designed so it shouldn't take as long to understand.

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-06T16:07:20.539Z · score: 1 (1 votes) · EA · GW

Well, I discuss related issues here, but I'm not the first person to notice them. Population ethicists have raised these issues many times before. I don't have any good references on hand because I learned about these issues from classes and discussions, not from reading papers; but here are some search results to get you started.

Edit: clarification

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-06T15:53:22.289Z · score: 3 (3 votes) · EA · GW

What is the difference between the deprivationist view and the QALY-equivalent of saving a 5-year old's life?

It sounds like you're slightly misunderstanding me. GiveWell's 2015 estimate said that the value of saving a 5-year old's life was ~36 QALYs, which is a time-discounted estimate of the number of quality-adjusted years of life the 5-year old will now have. In the 2016 estimate, employees explicitly input how valuable they think it is to save a 5-year old in terms of QALYs--on the spreadsheet, look at the "Bed Nets" tab in the row "DALYs averted per death of an under-5 averted — AMF". The median value is 8.25, and estimates range from 3 to 26. The highest estimate, 26, is still lower than last year's estimate of 36, which suggests that none of the employees who filled this out adopt the deprivationist view.

And yeah, I was just following you when you said there was a 'GiveWell view'. I know in your post you explain how it's a composition of staff views.

Last year GiveWell's cost-effectiveness estimate used 36 QALYs per life saved, which implies a deprivationist view. That's not a composite of staff views, that's the result implied by GiveWell's reported cost-effectiveness numbers. It now appears that no GiveWell employees (or at least none who contributed to this cost-effectiveness analysis) actually hold a deprivationist view.

Comment by michaeldickens on A new reference site: Effective Altruism Concepts · 2016-12-06T04:33:39.050Z · score: 3 (3 votes) · EA · GW

Some of the articles seem like they emphasize weird things. The first example I noticed: the page on consuming animal products has three links to fairly specific points related to eating animals, but no links to articles that present an actual case for veg*anism, and the article itself does not contain such a case. This post is the sort of thing I'm talking about.

Comment by michaeldickens on A new reference site: Effective Altruism Concepts · 2016-12-06T04:28:20.586Z · score: 1 (1 votes) · EA · GW

Your bulleted list is not formatted correctly which makes it really hard to read; can you fix it by putting two newlines before it?

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-06T04:10:36.188Z · score: 1 (1 votes) · EA · GW

I think AMF still looks like the best charity if you (a) are highly skeptical of interventions with relatively weak evidence and (b) adopt a "common sense" view of population ethics (which looks something like the time-relative interest account). But I do think these assumptions are both pretty unreasonable, and therefore their conjunction is even more unreasonable.

If you strongly discount interventions based on strength of evidence, that defeats life extension and deworming. I don't think it makes sense to care so much about strength of evidence that you prefer malaria nets to deworming, but it's possible to consistently prefer AMF.

I really, really don't think anyone should adopt the "common sense" view of population ethics (although obviously most people do in fact adopt it), because it's self-contradictory. If you do adopt the time-relative interest view, to avoid internal contradiction, you have to do something really weird like reject independence of irrelevant alternatives[1] or reject the transitivity of moral preferences[2]. I haven't explored these possibilities, but they probably have strong implications about which charities you should donate to, and it seems likely that AMF would not look best under them.

[1] Independence of irrelevant alternatives: If you have options A and B and you prefer A to B, then it is also the case that when you have options A, B, and C, you prefer A to B.

[2] Transitivity: If you prefer A to B and you prefer B to C, then you prefer A to C.

Comment by michaeldickens on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-06T03:58:18.326Z · score: 6 (6 votes) · EA · GW

Note: You quote me as claiming that GiveWell adopts the deprivationist account. GiveWell's 2015 cost-effectiveness estimate for AMF implies a deprivationist view, but the 2016 estimate explicitly calculates the QALY-equivalent value of saving a 5-year old's life. This means there's not a single "GiveWell view" because the reported cost-effectiveness estimate takes the median of about a dozen GiveWell employees' individual estimates, but most employees appear to follow the time-relative interest account while a few adopt the deprivationist account.

Largely because of this change, GiveWell now claims that AMF is 4x as cost-effective as GiveDirectly, not 11x.

Comment by michaeldickens on Contra the Giving What We Can pledge · 2016-12-06T03:05:49.817Z · score: 5 (7 votes) · EA · GW

I disagree with this reasoning. The point of a commitment device is to, you know, commit you. If you can break a pledge whenever you want, it's not actually a pledge. If you commit yourself to something, it's because you think there's a possibility that you will change your mind in the future and you want to prevent that from happening. So the commitment serves no purpose if it doesn't actually prevent you from changing your mind.

Perhaps there's value in publicly registering "I plan on donating 10%" without explicitly committing to it, in which case it shouldn't be framed as a commitment.

Comment by michaeldickens on Contra the Giving What We Can pledge · 2016-12-05T16:04:37.484Z · score: 0 (0 votes) · EA · GW

If you take the reference class as people reading the EA Forum rather than people who've taken the GWWC pledge, Alyssa could be right. So it depends on whether the question is "should people who are reading this take the pledge" or "should the pledge exist/should we try really hard to promote it".

Comment by michaeldickens on Contra the Giving What We Can pledge · 2016-12-05T15:59:59.470Z · score: 5 (5 votes) · EA · GW

Tip: If you put a greater-than symbol ">" before a text block, it will turn into a quote. That's much easier to read than using quotation marks for long quotes.

> this is quoted text

turns into

this is quoted text

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-16T18:29:21.803Z · score: 1 (1 votes) · EA · GW

Buck Shlegeris has agreed to bet his $2800 against my $2000 on the proposition

By the end of 2021, a restaurant regularly sells an item primarily made of a cultured animal product with a menu price less than $100.

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-16T03:35:01.504Z · score: 2 (2 votes) · EA · GW

The title of the conference was Second International Conference on Cultured Meat.

Related article: https://www.clearlyveg.com/blog/2016/11/13/reflections-the-second-international-conference-cultured-meat

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-15T03:23:26.179Z · score: 0 (0 votes) · EA · GW

Video is not available, although I heard it might be made available at some point in the future.

Comment by michaeldickens on The Best of EA in 2016: Nomination Thread · 2016-11-12T05:10:06.006Z · score: 3 (5 votes) · EA · GW

This is a pretty difficult test to pass. Some things I read that did cause me to do noticeably more good include:

  • Peter Singer's All Animals Are Equal, because it played a significant role in me becoming vegetarian (and later vegan) and taking animal welfare seriously
  • GiveWell's writeup on VillageReach because it taught me that finding good charities is hard and you shouldn't rely on naive cost-effectiveness estimates
  • GiveWell's suggested questions to ask when evaluating charities (I don't know if this is still on the site)
  • Brian Tomasik's The Importance of Wild Animal Suffering because it convinced me that wild animal suffering is important
  • Brian Tomasik's cost-effectiveness analysis on factory farming interventions
  • The book The Intelligent Asset Allocator, which ostensibly has nothing to do with doing good, but helped me learn how to better manage my investments which indirectly enables me to do a lot more good
  • Alexei Andreev's Maximizing Your Donations via a Job

None of these are from 2016 so they're not eligible. As far as I can remember, the only things I've read in 2016 that caused me to do substantially more good were charities' writeups about their own activities.

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-10T15:13:07.791Z · score: 2 (2 votes) · EA · GW

This year I spoke with three charities (ACE, GFI, and REG). I narrowed down to a list of finalists using only public information, and I didn't feel the need to speak to my other finalist, MFA. The three I spoke with are unusually transparent, and I don't believe a random sample of charities would have the same level of forthrightness that these did. I asked them all similar questions, so I don't know how sensitive their responses are to the wording of the messages.

Comment by michaeldickens on The Best of EA in 2016: Nomination Thread · 2016-11-09T15:06:28.607Z · score: 1 (3 votes) · EA · GW

I've spent enough time on forums to know that you can't stop people from voting politically by asking them politely. I think a better solution is to automatically detect mass-downvoting and nullify those votes in the source code. https://github.com/tog22/eaforum/issues/47
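A minimal sketch of what such a check could look like (the data layout, names, and thresholds here are hypothetical, not taken from the linked issue):

```python
# Hypothetical sketch: flag a voter who downvotes many of one author's posts
# within a short window, so those votes can be nullified or sent for moderator
# review. Thresholds and the vote-tuple format are made up for illustration.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(hours=24)
MAX_DOWNVOTES_PER_AUTHOR = 5

def find_mass_downvotes(votes):
    """votes: iterable of (voter_id, author_id, timestamp, is_downvote)."""
    by_pair = defaultdict(list)
    for voter, author, ts, is_down in votes:
        if is_down:
            by_pair[(voter, author)].append(ts)

    flagged = []
    for (voter, author), times in by_pair.items():
        times.sort()
        start = 0
        # Sliding window: too many downvotes on one author within WINDOW.
        for end in range(len(times)):
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 > MAX_DOWNVOTES_PER_AUTHOR:
                flagged.append((voter, author))
                break
    return flagged  # votes from these (voter, author) pairs could be nullified
```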

Comment by michaeldickens on The Best of EA in 2016: Nomination Thread · 2016-11-09T14:49:16.363Z · score: 4 (4 votes) · EA · GW

I've thought about this some more and I have some idea of the kind of process I would use if I were trying to curate the best content in EA.

I don't trust myself to make intuitive judgments about which posts are best--I'm going to end up picking the ones that were the most fun to read. I believe I could mitigate this by creating an explicit checklist of the things I would want in a "best of" post, and then looking for posts that match it.

Actually the #1 thing I'd look for in a post is, did I do substantially more good as a result of reading this post? Sometimes it's obvious how something you read helps you do good and sometimes it's more vague, but you should at least be able to say why a post substantially benefited you if you're going to nominate it as a "best of" post.

Comment by michaeldickens on The Best of EA in 2016: Nomination Thread · 2016-11-09T03:25:43.572Z · score: 4 (4 votes) · EA · GW

This is the sort of post I was talking about in my other comment--fun to read and easy to agree with, and therefore popular, but not particularly important.

Comment by michaeldickens on The Best of EA in 2016: Nomination Thread · 2016-11-08T03:42:49.003Z · score: 7 (9 votes) · EA · GW

I believe that when people describe content as "best", what they usually mean is "most fun to read", which is probably not what you want. People naturally like things better when they're fun to read, or when they "feel" insightful. People enjoy reading motivational blogs, even though they're basically useless; people do not enjoy reading statistics textbooks, even though they're extremely useful. I don't believe I personally can do a good job of separating posts/articles that are important to read from ones that I merely enjoyed reading.

On the other hand, I cannot think of a better strategy for curating good content than asking people to submit the posts they like best. Maybe something like peer review would work better, where you get a small group of people who consciously optimize for finding valuable articles, not necessarily interesting ones?

Comment by michaeldickens on Dedicated Donors May Not Want to Sign the Giving What We Can Pledge · 2016-11-07T15:36:19.541Z · score: 0 (0 votes) · EA · GW

I can see a possibility that I would donate less after making a 10% pledge than I would with no pledge, because the 10% would anchor my donations downward. I would prefer to make no commitment than to commit 10%. Hopefully future me is self-aware enough to avoid this sort of anchoring, but it's a pretty strong bias that happens even when you know it's happening.

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-06T16:34:05.613Z · score: 1 (1 votes) · EA · GW

For visibility: Bruce Friedrich from GFI replied here.

Comment by michaeldickens on Where I Am Donating in 2016 · 2016-11-06T16:32:56.806Z · score: 0 (0 votes) · EA · GW

If we develop cost-competitive clean meat within the next 5 years, it will probably take another 5-10 years before fast food chains start serving it (and it may take longer in the US because the USDA may have to approve it first, which could take a long time). So I don't think there's a high probability that fast food chains will adopt clean meat by 2021, although this has little to do with my beliefs about when it will achieve cost-competitiveness. Even if it were cost-competitive as of right now, I still wouldn't expect to see clean meat in fast food chains within 5 years.

Betting on fast food chains adds more dependencies to the bet--instead of just betting on when clean meat will be cost-competitive, we're betting on how quickly it will achieve widespread acceptance and how quickly production will scale up to a national level. I would prefer to make a simpler bet that's purely about cost-competitiveness.

Where I Am Donating in 2016

2016-11-01T04:10:02.389Z · score: 16 (22 votes)

Dedicated Donors May Not Want to Sign the Giving What We Can Pledge

2016-10-30T03:26:44.215Z · score: 14 (18 votes)

Altruistic Organizations Should Consider Counterfactuals When Hiring

2016-09-11T04:19:39.164Z · score: 1 (7 votes)

Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering

2016-08-26T02:08:53.190Z · score: 21 (29 votes)

Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply)

2016-06-10T21:35:50.236Z · score: 8 (8 votes)

A Complete Quantitative Model for Cause Selection

2016-05-18T02:17:28.769Z · score: 19 (23 votes)

Quantifying the Far Future Effects of Interventions

2016-05-18T02:15:07.240Z · score: 8 (8 votes)

GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics

2016-05-17T01:51:15.218Z · score: 26 (28 votes)

On Priors

2016-04-26T22:35:14.359Z · score: 9 (9 votes)

How Should a Large Donor Prioritize Cause Areas?

2016-04-25T20:46:38.304Z · score: 13 (13 votes)

Expected Value Estimates You Can (Maybe) Take Literally

2016-04-06T15:11:59.359Z · score: 19 (23 votes)

Are GiveWell Top Charities Too Speculative?

2015-12-21T04:05:07.675Z · score: 15 (19 votes)

More on REG's Room for More Funding

2015-11-16T17:31:40.493Z · score: 9 (11 votes)

Cause Selection Blogging Carnival Conclusion

2015-10-05T20:16:43.945Z · score: 7 (7 votes)

Charities I Would Like to See

2015-09-20T15:22:43.083Z · score: -5 (25 votes)

My Cause Selection: Michael Dickens

2015-09-15T23:29:40.701Z · score: 28 (28 votes)

On Values Spreading

2015-09-11T03:57:55.148Z · score: 6 (6 votes)

Some Writings on Cause Selection

2015-09-08T21:56:01.033Z · score: 4 (4 votes)

EA Blogging Carnival: My Cause Selection

2015-08-16T01:07:22.005Z · score: 11 (11 votes)

Why Effective Altruists Should Use a Robo-Advisor

2015-08-04T03:37:13.789Z · score: 9 (9 votes)

Stanford EA History and Lessons Learned

2015-07-02T03:36:56.688Z · score: 25 (25 votes)

How We Run Discussions at Stanford EA

2015-04-14T16:36:05.363Z · score: 13 (13 votes)

Meetup : Stanford THINK

2014-10-23T02:10:42.641Z · score: 1 (1 votes)