Posts

How effective were vote trading schemes in the 2016 U.S. presidential election? 2020-03-02T23:15:39.321Z · score: 11 (3 votes)
Do impact certificates help if you're not sure your work is effective? 2020-02-12T14:13:25.689Z · score: 21 (10 votes)
What analysis has been done of space colonization as a cause area? 2019-10-09T20:33:27.473Z · score: 12 (8 votes)
What actions would obviously decrease x-risk? 2019-10-06T21:00:24.025Z · score: 22 (12 votes)
How effective is household recycling? 2019-08-29T06:13:46.296Z · score: 7 (5 votes)
What is the current best estimate of the cumulative elasticity of chicken? 2019-05-03T03:27:57.603Z · score: 22 (10 votes)
Confused about AI research as a means of addressing AI risk 2019-02-21T00:07:36.390Z · score: 31 (10 votes)
[Offer, Paid] Help me estimate the social impact of the startup I work for. 2019-01-03T05:16:48.710Z · score: 7 (3 votes)

Comments

Comment by reallyeli on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-14T08:34:14.469Z · score: 4 (4 votes) · EA · GW

I don't think the focus here should be only on suffering. Sometimes, I seek out art/media that depicts human flourishing, out of a desire to increase my altruistic motivation by reminding myself just what it is that we're working to protect + create.

Obviously a ton of art/media contains "people being happy," but when I'm looking for this, I look specifically for depictions of people who are very different from each other and from me, depictions that show these people as unique and weird and not at all how you'd expect them to be. Good examples are the TV show High Maintenance and the documentary In Jackson Heights. It's a certain aesthetic that increases my altruistic motivation because it reminds me, by showing me more of it than I normally see, of what a vast expanse human experience really is.

(For animals, it's more socially acceptable to just watch them intently for long periods of time.)

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-05-27T02:49:29.697Z · score: 1 (1 votes) · EA · GW

I suppose an example would be that increasing economic growth in a country doesn't matter if the country later gets blown up or something.

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-17T17:42:05.081Z · score: 2 (2 votes) · EA · GW
Like how would I know if the world was more absorber-y or more sensitive to small changes?

I'm not sure; that's a pretty interesting question.

Here's a tentative idea: using the evolution of brains, we can conclude that whatever sensitivity the world has to small changes, it can't show up *too* quickly. You could imagine a totally chaotic world, where the whole state at time t+(1 second) is radically different depending on minute variations in the state at time t. Building models of such a world that were useful on one-second timescales would be impossible. But brains are devices for modelling the world that are useful on one-second timescales. Brains evolved, so they must have conferred some evolutionary advantage, and useful short-term prediction is presumably part of that advantage. Hence we don't live in this totally chaotic world; the world must be less chaotic than that.
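
To make the "totally chaotic world" picture concrete, here's a toy sketch (my own illustrative example; nothing about the real world hinges on it) using the logistic map, a textbook chaotic system in which a one-in-a-million difference in the starting state swamps any prediction within a few dozen steps:

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map with r = 4 is a standard example of a system where
# tiny differences in the initial state blow up almost immediately.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x_a, x_b = 0.200000, 0.200001  # two starting states differing by one part in a million
for step in range(1, 31):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 5 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.6f}")
```

A world like that would defeat one-second-ahead prediction; the question is how much of the actual world behaves this way on the timescales brains (and altruists) care about.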

It seems like this argument gets less strong the longer your timescales are, as our brains perhaps faced less evolutionary pressure to be good at prediction on timescales of like 1 year, and still less to be good at prediction on timescales of 100 years. But I'm not sure; I'd like to think about this more.

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-17T17:29:28.667Z · score: 2 (2 votes) · EA · GW

Hey, glad this was helpful! : )

To apply this to conception events - imagine we changed conception events so that girls were much more likely to be conceived than boys (say because in the near-term that had some good effects eg. say women tended to be happier at the time). My intuition here is that there could be long-term effects of indeterminate sign (eg. from increased/decreased population growth) which might dominate the near-term effects. Does that match your intuition?

Yes, that matches my intuition. This action creates a sweeping change in a really complex system; I would be surprised if there were no unexpected effects.

But I don't see why we should believe all actions are like this. I'm raising the "long-term effects don't persist" objection, arguing that it seems true of *some* actions.

Comment by reallyeli on What would a pre-mortem for the long-termist project look like? · 2020-04-13T03:33:35.713Z · score: 1 (1 votes) · EA · GW

Makes sense!

Comment by reallyeli on What would a pre-mortem for the long-termist project look like? · 2020-04-12T14:39:36.439Z · score: 4 (3 votes) · EA · GW
I'd maybe give a 10% probability to long-termism just being wrong.

What could you observe that would cause you to think that longtermism is wrong? (I ask out of interest; I think it's a subtle question.)

Comment by reallyeli on What are some historical examples of people and organizations who've influenced people to do more good? · 2020-04-12T14:28:32.070Z · score: 3 (3 votes) · EA · GW

Florence Nightingale? Martin Luther King Jr.? Leaders of social movements? It seems to me that a lot of "standard examples of good people" are like this; did you have something else in mind?

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-12T14:24:10.456Z · score: 2 (2 votes) · EA · GW

Sweet links, thanks!

Comment by reallyeli on If you value future people, why do you consider near term effects? · 2020-04-10T02:47:34.391Z · score: 8 (3 votes) · EA · GW

(Focusing on a subtopic of yours, rather than engaging with the entire argument.)

All actions we take have huge effects on the future. One way of seeing this is by considering identity-altering actions. Imagine that I pass my friend on the street and I stop to chat. She and I will now be on a different trajectory than we would have been otherwise. We will interact with different people, at a different time, in a different place, or in a different way than if we hadn’t paused. This will eventually change the circumstances of a conception event such that a different person will now be born because we paused to speak on the street.

I'm not so sure "all actions we take have huge effects on the future." It seems like a pretty interesting empirical question. I don't find this analogy supremely convincing; it seems that life contains both "absorbers" and "amplifiers" of randomness, and I'm not sure which are more common.

In your example, I stop to chat with my friend vs. not doing so. But then I just go to my job, where I'm not meeting any new people. Maybe I always just slack off until my 9:30am meeting, so it doesn't matter whether I arrive at 9am or at 9:10am after stopping to chat. I just read the Internet for ten more minutes. It looks like there's an "absorber" here.

Re: conception events — I've noticed that discussion of this topic tends to use conception as a stock example of an amplifier. (I'm thinking of Tyler Cowen's Stubborn Attachments.) Notably, it's an empirical fact that conception works that way (e.g. with many sperm, all with different genomes, competing to fertilize the same egg). If conception did not work that way, would we lower our belief in "all actions we take have huge effects on the future"? What sort of evidence would cause us to lower our beliefs in that?

Now, when the person who is conceived takes actions, I will be causally responsible for those actions and their effects. I am also causally responsible for all the effects flowing from those effects.

Sure, but what about the counterfactual? How much does it matter to the wider world what this person's traits are like? You want JFK to be patient and levelheaded, so he can handle the Cuban Missile Crisis. JFK's traits seem to matter. But most people aren't JFK.

You might also have "absorbers," in the form of selection effects, operating even in the JFK case. If we've set up a great political system such that the only people who can become President are patient and levelheaded, it matters not at all whether JFK in particular has those traits.

Looking at history with my layman's eyes, it seems like JFK was groomed to be president by virtue of his birth, so it did actually matter what he was like. At the extreme of this, kings seem pretty high-variance. So affecting the conception of a king matters. But now what we're doing looks more like ordinary cause prioritization.

Comment by reallyeli on What is the average EA salary? · 2020-04-09T18:03:13.513Z · score: 1 (1 votes) · EA · GW

I don't know — sounds like you might have stronger views on this than me! : )

Comment by reallyeli on What is the average EA salary? · 2020-04-05T06:18:11.528Z · score: 5 (4 votes) · EA · GW

This is gonna vary a lot because there's not a "typical EA organization" — salary is determined in large part by what the market rate for a position is, so I'd expect e.g. a software engineer at an EA organization to be paid about the same as a software engineer at any organization.

Is there a more specific version of your question to ask? Why do you want to know / what's the context?

Comment by reallyeli on Effective Altruism and Free Riding · 2020-04-02T16:10:02.497Z · score: 7 (5 votes) · EA · GW

Gotcha. So your main concern is not that EA defecting will make us miss out on good stuff that we could have gotten via the climate change movement deciding to help us on our goals, but rather that it might be bad if EA-type thinking became very popular?

Comment by reallyeli on Effective Altruism and Free Riding · 2020-04-01T23:37:36.305Z · score: 22 (10 votes) · EA · GW

I don't buy your example on 80k's advice re: climate change. You want to cooperate in prisoner's dilemmas if you think that doing so will cause the agent you are cooperating with to cooperate more with you in the future. So there needs to be a) another coherent agent, which b) notices your actions, c) takes actions in response to yours, and d) might plausibly cooperate with you in the future. In the climate change case, what is the agent you'd be cooperating with, and does it meet these criteria?

Is it the climate change movement? It doesn't seem to me that "the climate change movement" is enough of a coherent agent to do things like decide "let's help EA with their goals."

Or is it individual people who care about climate change? Are they able to help you with your goals? What is it you want from them?

Comment by reallyeli on Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics · 2020-03-20T16:35:58.274Z · score: 1 (1 votes) · EA · GW

I'm interested in the $10 million per minute number. What is the model? Is that for the whole world?

Quick check: U.S. GNP for one year is on the order of $2 x 10^13 (source: https://www.google.com/search?q=us+gnp), $10 million = $10^7, and there are about 5 x 10^5 minutes in a year, so $10 million per minute works out to roughly $5 x 10^12 per year, a sizable fraction of the entire US economy.

Comment by reallyeli on How effective were vote trading schemes in the 2016 U.S. presidential election? · 2020-03-04T04:17:57.509Z · score: 1 (1 votes) · EA · GW

Sweet, better than I could have hoped for!

Any sense of what organizations/people are working on it this year? I wasn't able to find an email address for Steve Hull so I posted an issue — https://github.com/sdhull/strategic_voting/issues/20 — no response yet.

I'll also contact Ben.

Comment by reallyeli on How effective were vote trading schemes in the 2016 U.S. presidential election? · 2020-03-03T04:55:24.051Z · score: 1 (1 votes) · EA · GW

Thanks. I realized it should have been a Question but too late — was there a way for me to upgrade it myself after posting?

Comment by reallyeli on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T14:19:03.977Z · score: 2 (2 votes) · EA · GW

Thanks for the pointer to "independence of irrelevant alternatives."

I'm curious to know how you think about "some normative weight." I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?

Comment by reallyeli on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-22T04:15:31.664Z · score: 4 (4 votes) · EA · GW

Link to discussion on Facebook: https://www.facebook.com/groups/eahangout/permalink/2845485492205023/

Comment by reallyeli on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T17:00:22.215Z · score: 9 (6 votes) · EA · GW

I think this math is interesting, and I appreciate the good pedagogy here. But I don't think this type of reasoning is relevant to my effective altruism (defined as "figuring out how to do the most good"). In particular, I disagree that this is an "argument for utilitarianism" in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.

(I really do mean "me" and "my" in that sentence; other people may find that this argument can indeed convince them of this, and that's a fact about them I have no quarrel with. I'm posting this because I just want to put a signpost saying "some people in EA believe this," in case others feel the same way.)

Following Richard Ngo's post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don't think that human moral preferences can be made free of contradiction. Although I don't like contradictions and I don't want to have them, I also don't like things like the repugnant conclusion, and I'm not sure why the distaste towards contradictions should be the one that always triumphs.

Since VNM-rationality is based on transitive preferences, and I disagree that human preferences can or "should" be transitive, I interpret things like this as without normative weight.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-21T16:29:49.447Z · score: 2 (2 votes) · EA · GW

What is meant by "not my problem"? My understanding is that it means "what I care about is no better off if I worry about this thing than if I don't." Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means that my utility is the same whether or not I worry about the value of Google stock.

It sounds like you're saying that, if I'm working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is "not my problem" in this sense. Here obviously I care about things other than $$.

This doesn't seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.

Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me *which is definitionally good*.

So this is an argument that if everyone collectively agrees to change their incentives, we'd get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I'm sure I agree with that, I just haven't thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T23:51:46.456Z · score: 1 (1 votes) · EA · GW

I'm saying we need to specify more than, "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or ... ?

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T02:48:09.832Z · score: 1 (1 votes) · EA · GW

Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?

There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again with what you knew now, you would still work in AI safety.

The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that's why Buck's number seems crazy high to Gordon.

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-13T22:40:48.527Z · score: 13 (5 votes) · EA · GW

I agree with this intuition. I suspect the question that needs to be asked is "14% chance of what?"

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:34:08.242Z · score: 3 (2 votes) · EA · GW

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, so I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective, but they trade their impact certificates to other folks who do. I don't know these other folks.

Based on the theory that it's important to know who you're trusting, this is bad.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:29:45.121Z · score: 3 (2 votes) · EA · GW

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T17:00:03.593Z · score: 1 (1 votes) · EA · GW

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T00:28:18.286Z · score: 1 (1 votes) · EA · GW

I agree with this. I wasn't trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than 'means' and 'ends' here.)

I think you've clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That's the situation I'm concerned about.

For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).

Comment by reallyeli on Short-Term AI Alignment as a Priority Cause · 2020-02-12T16:55:39.871Z · score: 8 (5 votes) · EA · GW

I think this is an interesting topic. However, I downvoted because if you're going to claim something is the "greatest priority cause," which is quite a claim, I would at least want to see an analysis of how it fares against other causes on scale, tractability, and neglectedness.

(Basically I agree with MichaelStJules's comment, except I think the analysis need not be quantitative.)

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-12T15:59:08.749Z · score: 1 (1 votes) · EA · GW

Hmm, your first paragraph is indeed a different perspective than the one I had. Thanks! I remain unconvinced though.

Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since presumably I work there and have access to more information about it than they do.)

Some reasons I can imagine:

- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I'd want to be clear that I'm deferring.

- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I'd want a robust market, probably better than PredictIt.

Comment by reallyeli on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-18T16:21:25.190Z · score: 3 (3 votes) · EA · GW

I had the same reaction (checking in my head that a 10% chance still merited action).

However, I really think we ought to be able to discuss guesses about what's true merely on the level of what's true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we're unable to do so, that will make the difficult task of finding truth even more difficult.

Comment by reallyeli on In praise of unhistoric heroism · 2020-01-08T15:38:19.845Z · score: 4 (4 votes) · EA · GW

Ha, no I am an unrelated Eli.

Comment by reallyeli on In praise of unhistoric heroism · 2020-01-07T20:41:42.449Z · score: 5 (5 votes) · EA · GW

I like this point

But you are way more likely to end up being Dorothea.

because it emphasizes that the reason to have this mindset is a fact about the world. Sometimes, when I encounter statements like this, it can be easy for them to "bounce off" because I object "oh, of course it's adaptive to think that way... but that doesn't mean it's actually true." It was hard for this post to "bounce off" me because of the force of this point.

Comment by reallyeli on EA Hotel Fundraiser 5: Out of runway! · 2019-10-28T21:37:11.806Z · score: 18 (13 votes) · EA · GW

(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidential, rational and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)

Comment by reallyeli on EA Hotel Fundraiser 5: Out of runway! · 2019-10-28T18:41:40.154Z · score: 55 (24 votes) · EA · GW

I donated $1000 since it seems to me that something like the EA Hotel really ought to exist, and it would be really sad if it went under.

I'm posting this here so that, if you're debating donating, you have the additional data point of knowing that others are doing so.

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-12T16:01:42.937Z · score: 4 (3 votes) · EA · GW

FWIW, I don't find it at all surprising when people's moral preferences contradict themselves (in terms of likely implications, as you say). I myself have many contradictory moral preferences.

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-11T21:36:28.185Z · score: 1 (1 votes) · EA · GW

Awesome, I should have checked Kelsey Piper first. Thank you!

Comment by reallyeli on Off-Earth Governance · 2019-10-11T21:35:40.362Z · score: 1 (1 votes) · EA · GW

A fictional treatment of these issues you might be interested in is the novel 2312 by Kim Stanley Robinson (https://en.wikipedia.org/wiki/2312_(novel)). In it, spacefarers are genetically distinct from Earth-dwelling humans, and each planet is its own political entity.

To me, determining what will happen in the future seems less and less possible the farther we go out, to the point where I think there are no arguments that would give me a high degree of confidence in a statement about the far future like the one you put up here. For any story that supports a particular outcome, it seems there is an equally compelling story that argues against it. :)

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-11T21:24:50.955Z · score: 4 (3 votes) · EA · GW

Thanks for the perspective on dissenting views!

Comment by reallyeli on Andreas Mogensen's "Maximal Cluelessness" · 2019-10-06T21:29:04.254Z · score: 3 (2 votes) · EA · GW

By accumulating resources for the future, we give increased power to whichever future decision-makers we bequeath these resources to. (Whether those decision-makers are us in 20 years, or our descendants in 200 years.)

In a clueless world, why do we think that increasing their power is good? What if those future decision makers make a bad decision, and the increased resources we've given them mean the impact is worse?

In other words, if we are clueless today, why will we be less clueless in the future? One might hope cluelessness decreases monotonically over time as we learn more, but the potential damage from a large mistake grows along with the resources we hand over.

Comment by reallyeli on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-26T03:40:18.072Z · score: 1 (1 votes) · EA · GW

I found this very helpful.

Comment by reallyeli on Arguments for moral indefinability · 2019-08-30T19:24:47.013Z · score: 4 (3 votes) · EA · GW

This post really turned on a lightbulb for me, and I have thought about it consistently in the months since I read it.

Comment by reallyeli on How effective is household recycling? · 2019-08-29T18:19:01.891Z · score: 1 (1 votes) · EA · GW

Thank you!

Comment by reallyeli on How do you, personally, experience "EA motivation"? · 2019-08-29T05:17:48.578Z · score: 5 (3 votes) · EA · GW

I'm very happy to see this being discussed, and have enjoyed reading others' answers.

Upon reflection, I seem to have a few different motivations: this was a surprise to me, as I expected to find a single overarching one.

a) Imagining another person's experience, leading to imagining what it is like to experience some particular suffering that I can see they are experiencing. Imagining "what it is like" involves focusing on details of the experience and rejecting generalities (not "I have cancer" but "I am trying to reach down in the shower in the morning but can't and the water is too hot"). Soon my train of thought goes to a more objective or detached place, and I think about how there is no real difference between me and the other person, and that except for blind circumstance there is no reason they should suffer when I do not.

There is an erasure of self involved. I imagine the core of my consciousness, the experiencing self, inhabiting the other person's body and mind. From this one example I generalize; of course I should treat another person's suffering the same as my own, because in the final analysis there is no difference between me and other people. That's the altruism; desire for effectiveness is secondary and instrumental, not terminal.

b) Zooming out and imagining the whole of the world leads to imagining all the evil in the world. (Where "evil" is a broad term including suffering due to carelessness, due to misaligned incentives, due to lack of coordination, due to accident, etc.) It's overwhelming; there's a sense of perverse wonder. "The works of Moloch are as many and burn as cruelly as the white-hot stars." This leads to a powerful feeling of being "fed-up" with the bad things. The desire for them to stop is like a very strong version of the desire to clean up an untidy room. It's abstract and not connected to any one person's suffering. This tends to be a stronger motivating force than a); if a) is empathy, this is anger.

Eliezer's fiction is particularly good at conjuring this mind-state for me: for example, the "Make it stop" scene in http://yudkowsky.net/other/fiction/the-sword-of-good .

This mind-state seems more inherently connected to effectiveness than a), though effectiveness is still instrumental and not terminal. I want us to be making a strong/effective stand against the various bad things; when we're not doing that, I am frustrated. I am less willing to tolerate "weakness"/ineffectiveness because I conceptualize us as in a struggle with high stakes.

Comment by reallyeli on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-08-14T22:01:10.330Z · score: 3 (3 votes) · EA · GW

This strikes me as incredibly good advice.

Comment by reallyeli on Key points from The Dead Hand, David E. Hoffman · 2019-08-10T00:56:16.741Z · score: 7 (6 votes) · EA · GW

Just wanted to say I thought this post was great and really appreciate you writing it! I have a hard-to-feed hunger to know what the real situation with nuclear weapons is like, and this is one of the only things to touch it in the past few years. Any other resources you'd recommend?

I'm surprised and heartened to hear some evidence against the "Petrov singlehandedly saved the world" narrative. Is there somewhere I can learn about the other nuclear 'close calls' described in the book? (should I just read the book?)

Comment by reallyeli on Four practices where EAs ought to course-correct · 2019-07-31T21:24:36.857Z · score: 0 (2 votes) · EA · GW

Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn't constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.

I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you're saying sounds more intuitively appealing. However, I do not believe (nor disbelieve) this.

Comment by reallyeli on Four practices where EAs ought to course-correct · 2019-07-31T04:41:11.204Z · score: 5 (5 votes) · EA · GW

Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.

For example, "True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them." My intuition says the opposite of this. I don't think it's at all clear whether increasing the capability of the U.S. military is a good or a bad thing.

I agree that object-level progress is to be preferred over meta-level progress on methodology.

Comment by reallyeli on Effective Altruism is an Ideology, not (just) a Question · 2019-07-06T15:40:13.921Z · score: 4 (4 votes) · EA · GW

I gave this post a strong upvote. It articulated something which I feel but have not articulated myself. Thank you for the clarity of writing which is on display here.

That said, I have some reservations which I would be interested in your thoughts on. When we argue about whether something is an ideology or not, we are assuming that the word "ideology" is applied to some things and not others, and that whether or not it is applied tells us useful things about the things it is applied to.

I am convinced that on the spectrum of movements, we should put effective altruism closer to libertarianism and feminism than the article you're responding to would indicate. But what is on the other end of this spectrum? Is there a movement/"ism" you can point to that you'd say we should put on the other side of where we've put EA -- **less** ideological than it?

Comment by reallyeli on Effective Altruism is an Ideology, not (just) a Question · 2019-07-06T15:22:18.195Z · score: 1 (1 votes) · EA · GW
I wish I could triple-upvote this post.

You can! :P Click and hold for "strong upvote."

Comment by reallyeli on Doing good while clueless · 2019-06-02T22:53:07.307Z · score: 1 (1 votes) · EA · GW
Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors.

If we operate under the "ethical precautionary principle" you laid out in the previous post (always behave as if there was another crucial consideration yet to discover), how do we do this? We might think that some intervention will increase the wisdom of future actors, based on our best analysis of the situation. But we fear a lurking crucial consideration that will someday pounce and reveal that actually the intervention did nothing, or did the opposite.

In other words, don't we need to be *somewhat* clueful already in order to bootstrap our way into more cluefulness?