Comment by Michael_Wiebe on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-17T19:55:41.980Z · score: 2 (2 votes) · EA · GW

How is this different from EA Grants?

Comment by Michael_Wiebe on Why we look at the limiting factor instead of the problem scale · 2019-02-18T02:24:50.321Z · score: 3 (2 votes) · EA · GW

Good post!

First, tractability is not really currently used in this way. Right now, lots of claims are being made along the lines of “cause X should be focused on more due to it having a huge problem size” with no further reference to tractability.

Charitably, this is an "other things equal" claim. But I agree: it does seem like people have simply forgotten about tractability.

Comment by Michael_Wiebe on The Need for and Viability of an Effective Altruism Academy · 2019-02-16T19:30:16.367Z · score: 4 (3 votes) · EA · GW
The school could then be sustained by a Lambda school-style deal (Income Share Agreement): If you get employed outside EA, you owe 10% of your salary for 2 years once you make over $50,000. If you work for a think tank or inside EA, the cost is waived.

I can't imagine anyone signing this contract.

Comment by Michael_Wiebe on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T23:45:32.960Z · score: 8 (5 votes) · EA · GW
we don't really know how worried to be [about instability risk from AI]. These risks really haven't been researched much, and we shouldn't really take it for granted that AI will be destabilizing. It could be or it couldn't be. We just basically have not done enough research to feel very confident one way or the other.

This makes me worry about tractability. The problem of instability has been known for at least five years now, and we haven't made any progress?

Comment by Michael_Wiebe on Hit Based Giving for Global Development · 2019-02-07T01:05:34.784Z · score: 3 (3 votes) · EA · GW

How should we think about the big players in the field (the World Bank, IMF, DFID, etc.)? Are they doing hits-based giving?

Comment by Michael_Wiebe on Tactical models to improve institutional decision-making · 2019-01-11T20:32:07.680Z · score: 2 (2 votes) · EA · GW

This post gives a nice framework, but it should be half as long.

Also, I wonder how much can be learned from an abstract understanding here. Consider economists studying firms: they can learn some general principles, but they're not in a position to go run a business ("if you're so smart, why aren't you rich?"). Similarly, my prior is that studying institutional decision-making is not going to produce actionable knowledge that can be used in the real world. That would require learning about the specific problems facing (say) DFID.

Comment by Michael_Wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-11-01T21:52:46.255Z · score: 0 (2 votes) · EA · GW

My understanding of Myers-Briggs is that 'thinking' and 'feeling' are opposite ends of a single dimension, at least on average, in the sense that being more thinking-oriented means being less feeling-oriented. The E vs. A framing is different, and it seems you could have people who score high on both. Is there any personality research on this?

Comment by Michael_Wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-11-01T21:22:41.780Z · score: 2 (4 votes) · EA · GW

In all likelihood men just hide their emotions better than women

I think citing this article weakens your overall argument. The study has n=30 and is likely more of the same low-quality, non-preregistered social psychology research that is driving the replication crisis. Your argument is strong enough on its own (just think of examples of men being snarky, insulting others, or engaging in pissing contests) without needing to cite a flimsy study. Otherwise, people will start questioning whether your other citations are trustworthy.

Comment by Michael_Wiebe on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-31T22:12:03.536Z · score: 2 (2 votes) · EA · GW

Is it true that men score higher than women on 'thinking' versus 'feeling'? If so, the EA community (being male-dominated) might be structured in ways that appeal to 'thinkers' and deter 'feelers'. To reduce the gender gap in EA, we would have to make the community more appealing to 'feelers' (if women are indeed disproportionately 'feelers').

Comment by Michael_Wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-23T16:37:06.165Z · score: 0 (0 votes) · EA · GW

For example, the system of animal agriculture and animal product consumption is pretty complex, but ACE have done a great job

But they didn't use complex systems theory, did they? They just used the regular EA framework of impact/tractability/neglectedness.

Comment by Michael_Wiebe on Effective Altruism Paradigm vs Systems Change Paradigm (post 2/3) · 2017-10-23T16:28:38.408Z · score: 0 (0 votes) · EA · GW

For what it's worth, I currently think the solution requires modelling the Earth as a complex system, clarifying top-level metrics to optimise the system for, and a probability weighted theory of change for the system as a whole.

I'd be interested in seeing this. Do you have anything written up?

Comment by Michael_Wiebe on Why to Optimize Earth? (post 1/3) · 2017-10-23T16:21:18.308Z · score: 0 (0 votes) · EA · GW

'Spillover' is a common term in economics, and I'm using it interchangeably with externalities/'how causes affect other causes'.

'Spill-over' suggests that impact can be neatly attributed to one cause or another, but in the context of complex systems (i.e. the world we live in), impact is often more accurately understood as resulting from many factors, including the interplay of a messy web of causes pursued over many decades.

Spillovers can be simple or complex; nothing in the definition says they have to be "neatly attributed". But you're right that long-term flow-through effects can be massive. They're also incredibly difficult to estimate. If you can use complexity theory to improve our ability to estimate them, then more power to you.

Comment by Michael_Wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-23T16:07:37.758Z · score: 2 (2 votes) · EA · GW

And if your disagreement is with the scale/tractability/neglectedness framework, then argue against that directly.

Comment by Michael_Wiebe on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-22T18:20:47.200Z · score: 2 (4 votes) · EA · GW

In general, a cause needs to score high on each of impact, tractability, and neglectedness to be worthwhile. Getting two out of three is no better than zero out of three. You've listed causes with high impact, but they're generally not tractable. For example, changing the political system is highly intractable.
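
To spell out the "two out of three" point, here is a rough sketch of the usual multiplicative framing (my own illustration, in the spirit of the standard 80,000 Hours-style decomposition, not a formula from this thread):

$$\frac{\text{good done}}{\text{extra dollar}} \;\approx\; \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{scale}} \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}$$

If any one factor is close to zero, the whole product is close to zero, no matter how large the other two are.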

Overall, I think EA has already incorporated the key insights from systems change, and there's no need to treat it as something separate.

Comment by Michael_Wiebe on Effective Altruism Paradigm vs Systems Change Paradigm (post 2/3) · 2017-10-22T17:48:25.512Z · score: 4 (4 votes) · EA · GW

I think the marginal vs. total distinction is confused. Maximizing personal impact, while taking into account externalities (as EAs do), will be equivalent to maximizing collective impact.

An Effective Altruist, by focusing on impact at the margin, may ask questions such as: What impact will my next $100 donation make in this charity vs that charity?

It seems you're trying to set up a distinction between EA focusing on small issues and systems change focusing on big issues. But this is a strawman. Even if an individual makes a $100 donation, the cause they're donating to can still target a systemic issue. In any case, there are now EAs making enormous donations: "What if you were in a position to give away billions of dollars to improve the world? What would you do with it?"

This approach invites sustained collective tolerance of deep uncertainty, in order to make space for new cultural norms to emerge. Linear, black-and-white thinking risks compromising this creative process before desirable novel realities have fully formed in a self-sustaining way.

This is pretty mystical.