William_MacAskill · 2011-11-25T05:00:04.000Z · score: 0 (0 votes) · comments (2)
That's true, thanks for your comment. I didn't say this exactly, but some of the policies proposed above are suggested in what I think is the same spirit. E.g., adding the submajority delay rule or age quotas to these upper houses would plausibly make them more longtermist. If you have other specific ideas about ways of reforming legislative houses to make them more longtermist, I would be quite interested to hear them.
jpaddison on Update on CEA's EA Grants Program
You could also try reforming a legislative house to focus on future generations. The House of Lords (UK) and the Senate (Canada) are already meant to give legislation a more long-term 'sober second thought', and there's widespread discontent about the current function of both. They could be ripe for reform.
jc_mourrat on Assumptions about the far future and cause priority
Ok, I understand your point better now, and I find that it makes sense. To summarize: I believe the art of good planning toward a distant goal is to find a series of intermediate targets that we can focus on, one after the other. I was worried that your argument could be used against any such strategy. But in fact your point is that, as it stands, health interventions have not been selected by a "planner" who was actually thinking about the long-term goals, so it is unlikely that the selected interventions are the best we can find. That sounds reasonable to me. I would really like to see more research into what optimizing for long-term growth could look like (and what kind of "intermediate targets" it would select). (There is some of this in Christiano's post, but in my opinion there is clearly room for more in-depth analysis.)
edoarad on Some Modes of Thinking about EA
Even though I agree that presenting EA as Utilitarianism is alienating and misleading, I think it is a useful mode of thinking about EA in some contexts. Many practices in EA are rooted in Utilitarianism, and many of the people in EA (about half of the respondents to the survey, if I recall correctly) consider themselves utilitarian. So, while Effective Utilitarianism is not the same as EA, I think outsiders' confusion is sometimes justified.
mario on The Logic of not eating meat
Thanks for sharing this.
One argument I come across is that, apart from being symbolic, individual action does not help #1, 2, and 3. Elaboration, or links to articles that try to quantify the "real" individual effect, could probably make the case more persuasive.
If the "real" impact is small then one could argue that the slight inconvenience in following a vegan benifit is comparable to the small benifit that individual action results in.
Are there any specific tools or methods you used to start an EA group in the Philippines? I'm also interested in spreading the EA movement in emerging markets and would value any insight you have on this.
gworley3 on Some Modes of Thinking about EA
One I was very glad not to see in this list was "EA as Utilitarianism". Although utilitarian ethics are popular among EAs, I think we leave out many people who would "do good better" but from a different meta-ethical perspective. One of the greatest challenges I've seen in my own conversations about EA is with those who reject the ideas because they associate them with Singer-style moral arguments and living a life of subsistence until not one person is in poverty. This sadly seems to turn them off of ways they might think about better allocating resources, for example, because they think their only options are either to do what they feel good about or to be a Singer-esque maximizer. Obviously this is not the case; there's a lot of room for gradation and different perspectives. But it does create a situation where people see themselves in an adversarial relationship to EA, and so, having gotten the idea that one part of EA was the whole thing, they reject all of its ideas rather than just the subset they actually disagree with.
larks on Assumptions about the far future and cause priority
This is a really interesting post, thanks for writing it up.
I think I have two main models for thinking about these sorts of issues:
As a result, my version of Clara thinks of AI Safety work as reducing risk in the worlds that happen to matter the most. It's also possible that these are the worlds where we can have the most influence, if you thought that strong negative feedback mechanisms strongly limited action in the Straight Line world.
Note that I was originally going to describe these as the inside and outside views, but I actually think that both have decent outside-view justifications.
lauro-langosco on AI policy careers in the EU
thanks, fixed :)