Posts

Towards effective entrepreneurship: what makes a startup high-impact? 2017-11-26T17:01:59.212Z · score: 5 (5 votes)
Towards effective entrepreneurship: Good Technology Project post-mortem 2017-11-26T16:56:08.288Z · score: 24 (20 votes)
EA should invest more in exploration 2017-02-05T17:11:34.506Z · score: 23 (25 votes)

Comments

Comment by michael_pj on We're Rethink Priorities. AMA. · 2019-12-16T16:23:11.704Z · score: 11 (8 votes) · EA · GW

Upvoted for being honest about status-related desires. Good to keep an eye on them but I think they can be useful motivators when they're pointing in the right direction!

Comment by michael_pj on Comparative advantage in the talent market · 2018-04-12T20:46:53.343Z · score: 1 (1 votes) · EA · GW

This is a good point, although comparative advantage across time is harder to estimate than at present. So "act according to present-time comparative advantage" might be a passable approximation in most cases.

We also need to consider the interim period when thinking about trades across time. If C takes the ops job, then in the period between C taking the job and E joining the movement, we get better ops coverage. It's not immediately obvious to me how this plays out; it might need a little bit of modelling.

Comment by michael_pj on Towards effective entrepreneurship: Good Technology Project post-mortem · 2018-04-12T20:42:00.939Z · score: 0 (0 votes) · EA · GW

I've DM'd you.

Comment by michael_pj on Could I have some more systemic change, please, sir? · 2018-01-24T00:08:42.301Z · score: 0 (0 votes) · EA · GW

Sure - I don't think "systemic change" is a well-defined category. The relevant distinction is "easy to analyze" vs "hard to analyze". But in the post you've basically just stipulated that your example is easy to analyze, and I think that's doing most of the work.

So I don't think we should conclude that "systemic changes look much more effective" - as you say, we should look at them case by case.

Comment by michael_pj on Could I have some more systemic change, please, sir? · 2018-01-23T21:03:33.028Z · score: 0 (0 votes) · EA · GW

There's a sliding scale of what people consider "systemic reform". Often people mean things like "replace capitalism". I probably wouldn't even have classed drug policy reform or tax reform as "systemic reform", but it's a vague category. Of course the simpler ones will be easier to analyze.

Comment by michael_pj on Could I have some more systemic change, please, sir? · 2018-01-23T21:01:28.363Z · score: 0 (0 votes) · EA · GW

Frame it as a matter of degree if you like: I think we're drastically more clueless about systemic reform than we are about atomic interventions.

Comment by michael_pj on Could I have some more systemic change, please, sir? · 2018-01-23T18:57:02.956Z · score: 2 (2 votes) · EA · GW

You seem to be assuming that the "bad case" for systemic reform is that it's, say, 1/500th of the benefit of the GD average effort. But I don't think that's the bad case for most systemic reforms: the bad case is that they're actively harmful.

For me, at least, the core of my problem with "systemic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think the ceiling cost estimate is a nice way of framing the comparison, but I agree with Milan that the hard bit is working out the expected effect.

Comment by michael_pj on How to get a new cause into EA · 2018-01-18T21:53:58.890Z · score: 1 (1 votes) · EA · GW

"Mental Health and Happiness Research". Coin your own meaningless acronym if you don't like it :)

Comment by michael_pj on How to get a new cause into EA · 2018-01-13T12:32:59.495Z · score: 4 (4 votes) · EA · GW

I think you're right that having "an organization" talking about X is necessary for X to reach "full legitimacy", but it's worth pointing out that many pioneers in new areas within EA just started their own orgs (ACE, MIRI etc.) rather than trying to persuade others to support them.

Having even a nominal "project" allows you to collaborate more easily with others and starts to build credibility that isn't just linked to you. I think perhaps you should just start MH&HR.

Comment by michael_pj on Towards effective entrepreneurship: Good Technology Project post-mortem · 2017-12-01T22:36:45.093Z · score: 0 (0 votes) · EA · GW

I think the easiest way to understand it is by analogy to carbon offsets, which are a kind of limited certificate of impact (CoI) that already exists.

Carbon offsets are generally certified by an organization to say that they actually correspond to what happened, and that not too many are being issued. I don't think there's a fundamental problem with allowing un-audited certificates in the market, but they'd probably be worth a lot less!

I think the middle man making money is exactly what you want to happen. The argument is much the same as for investment and speculation in the for-profit world: people who're willing to take on risk by investing in uncertain things can profit from it, thus increasing the supply of money to people doing risky things.

Here's a concrete example: suppose I want to start a new bednet-distributing charity. I think I can get them out for half the price of AMF. Currently, there is little incentive for me to do this, and I have to go and persuade grantmakers to give me money.

In the CoI world, I can go to a normal investor, and point out that AMF sells (audited) bednet CoIs for $X, and that I can produce them at half the cost, allowing us to undercut AMF. I get investment and off we go. So things behave just like they do in the for-profit world (which you may or may not think is good).

What you do need is for people to want to buy these and "consume" them. I think that's the really hard problem, getting people to treat them like "real" certificates.

Happy to talk about this more - PM me if you'd like to have a chat.

Comment by michael_pj on Towards effective entrepreneurship: what makes a startup high-impact? · 2017-11-27T20:05:17.998Z · score: 1 (1 votes) · EA · GW

Let me illustrate my argument. Suppose there are two opportunities, A and B. Each of them contributes some value at each time step after it's been taken.

In the base timeline, A is never taken, and B is taken at time 2.

Now, it is time 1 and you have the option of taking A or B. Which should you pick?

In one sense, both are equally neglected, but in fact taking A is much better, because B will be taken very soon, whereas A will not.

The argument is that new technology is more likely to be like B, and any remaining opportunities in old technology are more likely to be like A (simply because if they were easy to do, we would have expected someone to do them already).

So even if most breakthroughs occur at the cutting edge, so long as we expect other people to make them soon, and they are not so big that we really want even a small speedup, it can be better to find things that are more "persistently" neglected. (I used to use "persistent neglectedness" and "temporary neglectedness" for these concepts, but I thought it was confusing.)
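
(To make this fully concrete, here's a minimal sketch in Python of the counterfactual calculation; the one-unit-per-step payoff and the ten-step horizon are made up purely for illustration.)

```python
# Toy counterfactual model: two opportunities, A and B. Each yields 1 unit
# of value per time step once taken. In the base timeline, A is never taken
# and B is taken at time 2 by someone else. All numbers are illustrative.

HORIZON = 10

def value(taken_at, horizon=HORIZON):
    """Total value from an opportunity taken at time `taken_at` (None = never)."""
    return 0 if taken_at is None else horizon - taken_at

base = value(None) + value(2)    # A never taken, B taken at time 2

take_a = value(1) + value(2)     # we take A now; B still gets taken at time 2
take_b = value(None) + value(1)  # we take B now; A is still never taken

print("counterfactual impact of taking A:", take_a - base)  # 9
print("counterfactual impact of taking B:", take_b - base)  # 1
```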

Comment by michael_pj on Towards effective entrepreneurship: what makes a startup high-impact? · 2017-11-27T19:58:02.231Z · score: 1 (1 votes) · EA · GW

Yes - I should have clarified that this is deliberately not addressing the "earning to give through entrepreneurship" route. It's worth mentioning because it's quite important: I think for a lot of people it's going to be the best route.

Aside: if I think earning to give is so great, why have I been spending so much time talking about direct work? Because I think we need to do more exploration.

Comment by michael_pj on Towards effective entrepreneurship: what makes a startup high-impact? · 2017-11-27T19:54:56.528Z · score: 1 (1 votes) · EA · GW

I think it's worth trying to have a toy model of this, even if it's mostly big boxes full of question marks. Going down to the gears level can be very helpful.

For example, it can help you answer questions like "how much good does doing X for one person have to do for this to be worth it?", or "how many people do we need to reach for this to be worth it?". You might also realise that all your expected impact comes from a certain class of thing, and then try and do more of that or measure it more carefully.
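
(As an illustration of how simple such a toy model can start out, here's a minimal break-even sketch; every number in it is hypothetical.)

```python
# Toy break-even model, all numbers hypothetical: how many people must a
# project reach for it to beat simply donating its budget to a benchmark?

budget = 50_000.0                  # hypothetical project cost, $
benchmark_value_per_dollar = 0.01  # value units per $ at a benchmark charity
value_per_person_reached = 2.0     # hypothetical value units per person reached

break_even = budget * benchmark_value_per_dollar / value_per_person_reached
print(f"Project is worth it only if it reaches at least {break_even:.0f} people.")
# -> 250 people
```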

Which externalities to include is a tough question! In most examples I think there are a few that are "obviously" the most important, but that's just pumping my intuition and probably missing some things. I think often this is a case of building out your "informal model" of the project: presumably you think it will be good, but why? What is it about the project that could be good (or bad)? If you can answer those questions you have at least a starting point.

One final thing: when I say "negative externality" I mean something that's actively bad. It seems unlikely that people using your platform for ineffective projects is actively bad - more likely it's neutral (since we think they're not very effective). What might be bad is e.g. reputational damage from being associated with such things.

Comment by michael_pj on Towards effective entrepreneurship: Good Technology Project post-mortem · 2017-11-27T19:43:29.382Z · score: 1 (1 votes) · EA · GW

I'm pretty interested in blockchain-based tools as platforms for improved institutions. For example, I'd love to see a well-thought-out implementation of certificates of impact.

I think that there is an important distinction between problems and solutions, so I'm optimistic that it could be possible to make a useful breakdown of problems without having to (or being able to) say much about solutions. However, I'm largely speculating.

Comment by michael_pj on Towards effective entrepreneurship: Good Technology Project post-mortem · 2017-11-27T19:38:53.114Z · score: 2 (2 votes) · EA · GW

It's true that often entrepreneurs aren't cause-neutral. They may be working in an area because it interests them, or because they care about it.

But often they're more constrained by what they can do. Typically that's where they start from, and then they look for things that they can fix in that area.

The second problem is, I think, more fundamental. We can try and convince people to be more cause-neutral (although the fewer population-level changes we need to make, the better!), but it's just hard, per-individual work to move expertise and knowledge.

Comment by michael_pj on Causal Networks Model I: Introduction & User Guide · 2017-11-18T12:13:35.448Z · score: 1 (1 votes) · EA · GW

I am excited about this! I have some technical questions, but I'll save them until I've read part II.

Comment by michael_pj on In defence of epistemic modesty · 2017-10-30T22:36:58.050Z · score: 2 (2 votes) · EA · GW

Great post!

I think the question of "how do we make epistemic progress" needs a bit more attention.

Continuing the analogy with the EMH (which I like), I think the usual answer is that there are some $20 bills on the floor, and that some individuals are either specially placed to see that, or have a higher risk tolerance and so are willing to take the risk that it's a prank.

This suggests similar cases in epistemics: some people really are better situated (the experts), but perhaps we should also consider the class of "epistemic risk takers", who hold riskier beliefs in the attempt to get "ahead of the curve". However, as with startup founders, we should take such people with more than a pinch of salt. We may want to "fund" the ecosystem as a whole, because on average the one successful epistemic entrepreneur pays for the rest, but any individual is still likely to be worse than the consensus.

So that suggests that we should encourage people to think riskily, but usually discount their "risky" beliefs when making practical decisions until they have proven themselves. And this actually seems to reflect our behaviour: people are much more epistemically modest when the money is on the table. Being more explicit about which beliefs are "speculative" seems like it would be an improvement, though.

Finally, we also have to ask how people become experts. Again, in the economic analogy, people end up well situated to start businesses often through luck, sometimes through canny positioning, and sometimes through irrational pursuit of an idea. Similarly, to become an expert in X one has to invest a lot of time and effort, but we may want people to speculate in this domain too, and become experts in unlikely things so that we can get good credences on topics that may, with low probability, turn out to be important.

(Meta: I was confused about whether to comment on LW2 or here. Cross-posting comments seems silly... or is it?)

Comment by michael_pj on In defence of epistemic modesty · 2017-10-30T22:20:07.801Z · score: 2 (2 votes) · EA · GW

Concur that the distinction between "credence by lights" and "credence all things considered" seems very helpful, possibly deserving of its own post.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T01:36:51.054Z · score: 23 (23 votes) · EA · GW

Easy money: https://userstyles.org/styles/150270/effective-altruism-form-anti-kibitzer

I'd tell you to keep it or donate it, but I want to encourage the norm that such offers represent a real cost, so I hereby commit to use this money entirely on hedonistic pleasures.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T18:45:21.491Z · score: 6 (6 votes) · EA · GW

Whether a discussion proceeds as collaborative or combative depends on how the participants interpret what the other parties say. This is all heavily contextual, but as with many things involving conversational implicature, you can often spend some effort to clarify your implicature.

The internet is notoriously bad for conveying the unconscious signals that we usually use to pick up on implicature, and I think this is one of the reasons that internet discussions often turn hostile and combative.

So it's worth putting in more signals of your intent into the text itself, since that's all you have.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T17:55:26.798Z · score: 6 (6 votes) · EA · GW

"I'm assuming people think discussion of inclusion issues is a terrible idea. Assuming that is what I've been downvoted for, that makes me feel disappointed in the online EA community and increases my belief this is a problem."

This seems like a lot to infer from some downvotes.

FWIW I didn't downvote your comment but it annoyed me. It was this:

"I think it weird, given there's so much mainstream discussion of inclusion, that it hasn't seemed to penetrate into EA."

I feel like I've seen quite a lot of discussion of diversity in EA, and I don't think it's been overly unsophisticated. This comment therefore feels frustrating, like the "why doesn't EA talk about systemic change?" comments. I would guess this is a common feeling, given the positive response to http://effective-altruism.com/ea/1g3/why_how_to_make_progress_on_diversity_inclusion/c7n . That might explain the downvotes. On the other hand, this

"My subjective view is that this topic is under-discussed relative to how much I feel it should be discussed."

feels much more positive to me. Okay, Michael Plant thinks we need to have a lot of discussion about this for some reason. Fair enough.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T17:41:16.635Z · score: 2 (2 votes) · EA · GW

I do think we should expect more overt signalling of collaborativeness online, because so many other cues are missing.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:53:04.835Z · score: 7 (7 votes) · EA · GW

Maybe voters on the EA forum should be blinded to the author of a post until they've voted!

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:50:57.544Z · score: 5 (5 votes) · EA · GW

As I said, I'm totally in favour of collaborative discussions, i.e. this stuff

"they don't raise their voices, go ad hominem, tear apart one aspect of an argument to dismiss the rest, or downvote comments that signal an identity that theirs is constructed in opposition to"

(except possibly raised voices), but I wanted to argue that sometimes things that look like combative discussion aren't. Imagine:

A: [states an idea]

B: I think that's a pretty bad argument because [reason]. [Alternative idea] seems much better.

A: No, you didn't understand what I'm saying, I said [clarification].

This could be a snippet of a tense combative argument, or just a vigorous collaborative brainstorming session. A might feel unfairly dismissed by B, or might not even notice it. If we were trying to combat combativeness by calling out people who abruptly shoot down other people's ideas, then we might prevent people from doing this particular style of rapid brainstorming.

(Sorry, this stuff is hard to talk about because it's very contextual. I should probably have picked a better example :))

What I'm trying to say is that we just need to be a little bit careful how we shoot for our goals.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:41:15.931Z · score: 4 (4 votes) · EA · GW

Sorry, that was me being unclear! The situation I'm envisaging is:

  • We want more X.
  • We can't detect X directly, so we'll pick some marker for X that looks like X (that's what I was getting at with "formal", "relating to the form of"), and then aim for that.
  • Oops: our markers don't capture X, or even exclude some important bits of X.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T23:23:49.236Z · score: 14 (14 votes) · EA · GW

I'd like to move towards an inclusive community that doesn't damage the valuable aspects of EA. I think this post mostly did a good job of suggesting things in that vein (I was heartened to see "don't stop being weird" as an item), but I'd like to push on the point a bit more.

For example, I'm hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and "you are not your ideas" is common knowledge. If we ban this, then we make some parts of our discourse worse. Overly zealous pursuit of formalized markers can destroy a lot of value.

Of course, the solution is "don't do that", but the most obvious approach to "have more X" is "pick some formal markers of X and select for them". Doing better is harder, perhaps something like "have workshops/talks on good disagreement", "praise people who're known for being excellent at this" etc.

I agree with others that there are too many suggestions in this post. They're also a bit of a grab bag. I can see a few categories:

  • Miscellaneous criticisms, many of which seem plausible, but aren't obviously any more important for diversity than for their other benefits (collaborative discussions, humility, less hero-worship, better interpersonal interactions etc.).
  • Larger-scale shifts of uncertain effect (head vs heart, jargon, caution over "free speech", etc.). A lot of these are unclear to me, and I think we'd want to take a clear-headed look at the costs and benefits.
  • More specific diversity-boosting measures (female speakers, try to counteract bias, mentor people etc.). These seem clearest to me, and hopefully we can look and see what's worked well in other places vs the costs.

I think the miscellaneous improvements could (and should!) stand on their own; the larger-scale shifts are perhaps best discussed individually; and what a diversity critique is uniquely placed to bring is more of the third kind of thing.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T22:59:01.387Z · score: 8 (8 votes) · EA · GW

I think this is a big deal, unfortunately. I try to talk about EA very carefully when talking to people who're "A first", but people can sense any implicit criticism a mile off. It's really hard to avoid some variant of "So you think I've been wasting my time, then?"

Strangely, "E first" people may be easier to reach because they're less likely to be already invested in something.

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T22:56:51.377Z · score: 3 (3 votes) · EA · GW

My gut reaction is that most of the people who have stuck around are "E first", but I think there's probably a higher base rate of those amongst early adopters, so hard to say.

It seems like we could gather some data on this, though. It's a vague question, but I suspect most people would be able to answer some variant of "Were you E first or A first? E/A/Other". Then we could see if that had any relationship to tenure in the community, or anything else. Perhaps an item for the next Effective Altruism survey?

Comment by michael_pj on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T22:48:31.224Z · score: 1 (1 votes) · EA · GW

Thank you so much for writing this.

Comment by michael_pj on Lessons from a full-time community builder. Part 1 of 4. Impact assessment · 2017-10-14T21:44:53.067Z · score: 1 (1 votes) · EA · GW

This is great stuff! Really appreciate the effort you put into measuring things.

Comment by Michael_PJ on [deleted post] 2017-10-14T21:41:40.118Z

Thanks for this, detailed post-mortems like this are very valuable!

Some thoughts:

  1. I considered getting involved in the project, but was somewhat put off by the messaging. Somehow it came across as a "learning exercise for students" rather than "attempt to do actually new research". Not sure exactly why that was (the grant size may have been a part, see below), and I now regret not getting more involved.

  2. You describe the grant amount of £10,000 as "substantial". This is surprising to me, since my reaction to the grant size was that it was too small to bother with. I think this corroborates your thoughts about grant size: any size of grant would have had most of the beneficial effects that you saw, but a much larger grant would have been needed to make it seem really "serious".

  3. I think that the project goal was too ambitious. Global prioritization is much harder than more restricted prioritization, but also vaguer and more abstract. Usually when we're learning to deal with vague and abstract problems we start out by becoming very adept with simple, concrete versions to build skills and intuitions before moving up the abstraction hierarchy (easier, better feedback, more motivating, etc.). If I wanted to train up some prioritization researchers I would probably start by getting them to just do lots of small, concrete prioritization tasks.

  4. As Michael Plant says below, I think the project was in a bit of an awkward middle ground. The costs of participation (in terms of work and "top-of-mind" time) were perhaps a bit too high for either students or otherwise-busy community members (like myself), and the perceived benefits (in terms of expected quality of research produced) were perhaps too low for the professionals. (To elaborate on why engaging felt like it would be substantial work for me: in order to provide good commentary on one of your posts, I would have had to: read the post; probably read some prior posts; think hard about it; possibly do some research myself; condense that into a thoughtful reply. That could easily take up an evening of my time, for not a huge perceived reward.) I think your suggestion of running such a project as a week-long retreat is a good idea - it would get a committed block of time from people, and prevents inefficiencies due to repeated time spent "re-loading" the background information.

  5. Agree that quantitative modelling is great and under-utilised. I think a course which was more or less How To Measure Anything applied to EA with modern techniques and technologies would be a fantastic starter for prioritization research, and give people generally useful skills too.

  6. I would have preferred less, higher-quality output from the project. My reaction to the first few blog posts was that they were fine but not terribly interesting, which meant I largely didn't read much of the rest of the content until the models started appearing, which I did find interesting.

  7. Even if you think the project was net-negative, I hope this doesn't put you off starting new things. Exploration is very valuable, even if the median case is a failure.

Comment by michael_pj on Effective Altruism Grants project update · 2017-09-30T11:15:47.386Z · score: 8 (4 votes) · EA · GW

Interesting! Is there a plan to evaluate the grant projects after they reach some kind of "completion" point?

Comment by michael_pj on The Turing Test · 2017-09-17T19:17:52.945Z · score: 1 (1 votes) · EA · GW

Is there any way to make it available without using iTunes?

Comment by michael_pj on How should we assess very uncertain and non-testable stuff? · 2017-08-17T17:56:12.229Z · score: 2 (2 votes) · EA · GW

I have a lot of thoughts on cause search, but possibly at a more granular level. One of the big challenges when you switch from an assessing to a generating perspective is finding the right problems to work on, and it's not easy at all.

Comment by michael_pj on Why I think the Foundational Research Institute should rethink its approach · 2017-07-22T00:03:09.631Z · score: 2 (1 votes) · EA · GW

If they're isomorphic, then they really are the same for mathematical purposes. Possibly if you view STV as having a metaphysical component then you incur some dependence on philosophy of mathematics to say what a mathematical structure is, whether isomorphic structures are distinct, etc.

Comment by michael_pj on Why I think the Foundational Research Institute should rethink its approach · 2017-07-21T23:58:20.609Z · score: 6 (6 votes) · EA · GW

Interesting that you mention the "waterfall"/"bag of popcorn" argument against computationalism in the same article as citing Scott Aaronson, since he actually gives some arguments against it (see section 6 of https://arxiv.org/abs/1108.1791). In particular, he suggests that we can argue that a process P isn't contributing any computation when having a P-oracle doesn't let you solve the problem faster.

I don't think this fully lays to rest the question of what things are performing computations, but I think we can distinguish them in some ways, which makes me hopeful that there's an underlying distinction.

There's always going to be a huge epistemic problem, of course. Homomorphic encryption shows that there will always be computations that we can't distinguish from noise (I just wrote a blog post about this - curse Scott and his beating me to the punch by years). But I think we can reasonably expect such things to be rare in nature.

Comment by michael_pj on The marketing gap and a plea for moral inclusivity · 2017-07-12T09:43:49.803Z · score: 1 (1 votes) · EA · GW

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org that explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

Comment by Michael_PJ on [deleted post] 2017-04-25T22:51:38.048Z

This looks pretty similar to a model I wrote with Nick Dunkley way back in 2012 (part 1, part 2). I still stand by that as a reasonable stab at the problem, so I also think your model is pretty reasonable :)

Charity population:

You're assuming a fixed pool of charities, which makes sense given the evidence gathering strategy you've used (see below). But I think it's better to model charities as an unbounded population following the given distribution, from which we can sample.

That's because we do expect new opportunities to arise. And if we believe that the distribution is heavy-tailed, a large amount of our expected value may come from the possibility of eventually finding something way out in the tails. In your model we only ever get N opportunities to get a really exceptional charity - after that we are just reducing our uncertainty. I think we want to model the fact that we can keep looking for things out in the tails, even if they maybe don't exist yet.

I do think that a lognormal is a sensible distribution for charity effectiveness. The real distribution may be broader, but that just makes your estimate more conservative, which is probably fine. I just did the boring thing and used the empirical distribution of the DCP intervention cost-effectiveness estimates (note: interventions, not charities).

Evidence gathering strategy:

You're assuming that the evaluator does a lot of evaluating: they evaluate every charity in the pool in every round. In some sense I suppose this is true, in that charities which are not explicitly "investigated" by an evaluator can be considered to have failed the first test by not being notable enough to even be considered. However, I still think this is somewhat unrealistic and is going to drive diminishing returns very quickly, since we're really just waiting for the errors for the various charities to settle down so that the best charity becomes apparent.

I modelled this as the evaluator sequentially evaluating a single charity at a time, chosen at random (with replacement). This is also unrealistic, because in fact an evaluator won't waste their time with things that are obviously bad, but even with this fairly conservative strategy things turned out pretty well.

I think it's interesting to think about what happens when we model the pool more explicitly, and consider strategies like investigating the top recommendation further to reduce error.
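
(To illustrate the kind of modelling I mean, here's a minimal Monte Carlo sketch of the sequential-evaluation setup described above; the pool size, noise level, and evaluation budget are all invented.)

```python
import math
import random

# Sketch of the sequential evaluation model: a lognormal pool of charities,
# an evaluator who repeatedly picks one at random (with replacement) and
# gets a noisy estimate of its effectiveness. All parameters are invented.

random.seed(0)
N_CHARITIES = 200     # pool size
N_EVALUATIONS = 1000  # total evaluation budget
NOISE_SD = 0.5        # sd of the log-space evaluation error

true_eff = [random.lognormvariate(0, 1) for _ in range(N_CHARITIES)]
observations = [[] for _ in range(N_CHARITIES)]

for _ in range(N_EVALUATIONS):
    i = random.randrange(N_CHARITIES)  # evaluate one charity, chosen at random
    observations[i].append(math.log(true_eff[i]) + random.gauss(0, NOISE_SD))

def estimate(obs):
    """Point estimate: geometric mean of the noisy observations."""
    return math.exp(sum(obs) / len(obs)) if obs else 0.0

top_pick = max(range(N_CHARITIES), key=lambda i: estimate(observations[i]))
print("true effectiveness of the top pick:", true_eff[top_pick])
print("true best charity in the pool:    ", max(true_eff))
```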

Increasing scale with money moved:

Charity evaluators have the wonderful feature that their effectiveness scales more or less linearly with the amount of money they move (assuming that the money all goes to their top pick). This is a pretty great property, so worth mentioning.

The big caveat there is room for more funding, or saturation of opportunities. I'm not sure how best to model this. We could model charities rather as "deposits" of effectiveness that are of a fixed size when discovered and can be exhausted. I don't know how that would change things, but I'd be interested to see! In particular, I suspect it may be important how funding capacity co-varies with effectiveness. If we find a charity with a cost-effectiveness that's 1000x higher than our current best, but it can only absorb a single dollar, then that's not so great.

Comment by michael_pj on Effective altruism is self-recommending · 2017-04-24T22:53:05.521Z · score: 2 (2 votes) · EA · GW

I found the analogy with confidence games thought-provoking, but it could have been a bit shorter.

Comment by michael_pj on Effective altruism is self-recommending · 2017-04-24T22:51:54.736Z · score: 7 (7 votes) · EA · GW

The point I was trying to make is that while GiveWell may not have acted "satisfactorily", they are still well ahead of many of us. I hadn't "inferred" that GiveWell had audited themselves thoroughly - it hadn't even occurred to me to ask, which is a sign of just how bad my own epistemics are. And I don't think I'm unusual in that respect. So GiveWell gets a lot of credit from me for doing "quite well" at their epistemics, even if they could do better (and it's good to hold them to a high standard!).

I think that making the final decision on where to donate yourself often offers only an illusion of control. If you're getting all your information from one source you might as well just be giving them your money. But it does at least keep more things out in the open, which is good.

Re-reading your post, I think I may have been misinterpreting you - am I right in thinking that you mainly object to the marketing of the EA Funds as the "default choice", rather than to their existence for people who want that kind of instrument? I agree that the marketing is perhaps over-selling at the moment.

Comment by michael_pj on Effective altruism is self-recommending · 2017-04-24T22:42:40.271Z · score: 2 (2 votes) · EA · GW

Yes, in case it wasn't clear, I think I agree with many of your concrete suggestions, but I think the current situation is not too bad.

Comment by michael_pj on Effective altruism is self-recommending · 2017-04-23T23:33:33.568Z · score: 23 (23 votes) · EA · GW

1) I think you're missing one important way in which GiveWell and OpenPhil have demonstrated their credibility, which is by showing us many of the outputs of their decision-making processes and letting us judge their quality.

Having evidence that GiveWell's recommendations had a track record of high impact would give us an absolute recommendation: if you follow their advice, you can expect to do this well. Having evidence that they are good at making decisions (by whatever standard you subscribe to) gives you a relative recommendation: if you follow their advice, you can expect to do better than you would do yourself.

In this sense, GiveWell's confidence is not "loaned", it has been earned by continuing to provide evidence of (what the community thinks is) good decision making.

Of course, how well this works depends on how well we can recognize good decision-making. Only judging recommendations by whether they seem sensible to the community renders us vulnerable to groupthink, and untethers us from evidence. Good retrospectives on past recommendations would help us judge whether the decisions that are being made really are good, as well as being indicative of good tendencies within these organizations. So I think it would be great to do more of those (and, indeed, having the resources to run such retrospectives could be one of the advantages of having slightly more "centralised" institutions).

2) I do think that the pre-eminence of GiveWell and OpenPhil in the EA research space is a little unfortunate. Diversity of opinion is good, and in an ideal world I'd like to see several large institutions critiquing and evaluating each others' work. This is one of the reasons I was sad that GWWC stopped doing charity evaluation research. Even if they think that GiveWell simply does it better, having an independent set of opinions is quite valuable.

3) I don't quite see what now makes EA "self-recommending". Previously we said "give your money to these charities", now we say "give your money to this fund, and we'll give it to these charities". I don't see a significant difference there: in both cases we're claiming greater expertise than the donors, and asking them to defer to our judgement. It's just that one of them is more systematized.

What would be worrying is if we were advertising a fund as "the most effective way to donate" and then channeling all the money to EA orgs. That looks like a scam. But the EA Community fund is clearly separate from the others. If you donate to the Global Development fund, your money will be spent on Global Development.

4) It's good to keep us on our toes about how we sell things. It's always tempting to oversell, particularly with the recent increasing focus on outreach. But I think we can and should do better than that, so thanks for bringing this stuff up!

Comment by michael_pj on Concrete project lists · 2017-03-27T04:37:46.267Z · score: 3 (3 votes) · EA · GW

Fair!

I think I'm thinking of funding even earlier than the point at which 80k got money, though. 80k had presumably had very many hours of volunteer labour before it got to that point - we might want to fund things earlier than that.

Comment by michael_pj on Concrete project lists · 2017-03-26T18:35:05.810Z · score: 2 (2 votes) · EA · GW

One important thing to remember is that important projects may not look very credible initially. Any early-stage EA funding body needs to ask itself "would we fund an early-stage 80k?".

Comment by michael_pj on Introducing CEA's Guiding Principles · 2017-03-08T16:19:51.167Z · score: 5 (7 votes) · EA · GW

This is pretty much exactly what I was hoping for! Thank you!

Comment by michael_pj on Use "care" with care. · 2017-02-09T12:52:23.615Z · score: 2 (2 votes) · EA · GW

Thanks for this - I'm pretty sure I'm guilty of doing this carelessly, and I agree that it's actually not great.

Comment by michael_pj on Anonymous EA comments · 2017-02-08T00:06:17.160Z · score: 3 (3 votes) · EA · GW

There's a lot of EA outside the Bay! The Oxford/London cluster in particular is quite nice (although I live there, so I'm biased).

Comment by michael_pj on Anonymous EA comments · 2017-02-07T23:58:51.703Z · score: 0 (0 votes) · EA · GW

I agree that we're in danger of having picked all the low-hanging fruit. But I think there's room to fix this.

Comment by michael_pj on EA should invest more in exploration · 2017-02-06T23:37:16.066Z · score: 0 (0 votes) · EA · GW

I think this is a case where we're unlikely to be able to offer anything beyond what the academic community is going to do. I think the best way to improve exploration around schistosomiasis prevention would probably be to just fund some more PhD students!

Comment by michael_pj on EA should invest more in exploration · 2017-02-06T23:35:08.809Z · score: 0 (0 votes) · EA · GW

1) I nearly added a section about whether exploration is funding- or talent-constrained! In short, I'm not sure, and I suspect it's different in different places. It sounds like OPP is probably talent-constrained, but other orgs may differ. In particular, if we wanted to try some of my other suggestions for improving exploration, like building institutions to start new orgs, then that's potentially quite funding-intensive.

2) I'm not sure whether multi-armed bandits actually model our situation, since it's not obvious how to incorporate situations where you can change the efficiencies of your actions. What does "improving exploration capacity" look like in a multi-armed bandit? There may also be complications because we don't even know the size of the option set.
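
(For reference, here's a minimal epsilon-greedy sketch of the standard setup, with invented payoffs; note that nothing in it lets an action change the arms' efficiencies or add new arms, which is exactly the mismatch in question.)

```python
import random

# Minimal epsilon-greedy multi-armed bandit, for reference only.
# The arm payoffs are invented; the set of arms is fixed and no action
# can improve an arm's efficiency - the two features questioned above.

random.seed(0)
ARM_MEANS = [0.3, 0.5, 0.8]  # hypothetical expected payoffs per arm
EPSILON = 0.1                # exploration probability

counts = [0] * len(ARM_MEANS)
totals = [0.0] * len(ARM_MEANS)

for _ in range(10_000):
    if random.random() < EPSILON or 0 in counts:
        arm = random.randrange(len(ARM_MEANS))  # explore: random arm
    else:
        arm = max(range(len(ARM_MEANS)),
                  key=lambda a: totals[a] / counts[a])  # exploit best estimate
    counts[arm] += 1
    totals[arm] += random.gauss(ARM_MEANS[arm], 0.1)

print("pulls per arm:", counts)  # most pulls should end up on the 0.8 arm
```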