Posts

Why making asteroid deflection tech might be bad 2020-05-20T23:01:02.444Z · score: 20 (11 votes)
Applying speciesism to wild-animal suffering 2020-05-17T23:11:32.335Z · score: 14 (9 votes)
The need for convergence on an ethical theory 2016-09-19T06:40:45.619Z · score: 0 (4 votes)
Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? 2016-09-14T12:36:55.861Z · score: 8 (8 votes)
(Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election 2016-06-24T09:35:51.537Z · score: -1 (5 votes)
The morality of having a meat-eating pet 2016-06-03T10:33:36.351Z · score: -2 (9 votes)
The great calculator 2016-03-26T03:48:26.405Z · score: 4 (8 votes)
Causality in altruism 2016-03-04T13:33:41.408Z · score: 4 (6 votes)
Opportunity to increase your giving impact through AMF 2016-02-24T09:48:30.937Z · score: 4 (4 votes)
Effective Altruism and ethical science 2016-01-26T04:36:02.411Z · score: -3 (7 votes)
Doing Good Better - Book review and comments 2015-12-26T01:52:10.869Z · score: 2 (2 votes)
Movement building - An online course 2015-10-16T06:10:14.025Z · score: 4 (4 votes)
How does fighting diarrhoea stack up to malaria in effectiveness? 2015-10-09T00:28:00.767Z · score: 2 (2 votes)
Low hanging fruit and 'quick wins' 2015-09-27T06:24:42.319Z · score: 4 (4 votes)

Comments

Comment by michaeldello on Why making asteroid deflection tech might be bad · 2020-05-31T08:38:11.584Z · score: 1 (1 votes) · EA · GW

When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields, whether as workers, researchers or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. You'd be right that the broader public thinks about it much less; however, the researchers and the policy makers are the ones you'd need to convince for something like this, in my view.

Comment by michaeldello on Why making asteroid deflection tech might be bad · 2020-05-22T12:44:23.932Z · score: 2 (2 votes) · EA · GW

We came pretty close to carrying out an asteroid redirect mission too (ARM); it was only cancelled in the last few years. It targeted a small asteroid (~a few metres across), but this could certainly happen sooner than most people suspect.

Comment by michaeldello on Why making asteroid deflection tech might be bad · 2020-05-22T12:42:20.050Z · score: 2 (2 votes) · EA · GW

Neat, I'll have to get in touch, thanks.

Comment by michaeldello on How should longtermists think about eating meat? · 2020-05-18T04:03:45.050Z · score: 3 (2 votes) · EA · GW

I guess that would indeed make them long-term problems, but my reading has been that they are catastrophic risks rather than existential risks: relative to other X-risks, they don't seem very likely to eliminate all of humanity.

Comment by michaeldello on How should longtermists think about eating meat? · 2020-05-17T22:55:15.184Z · score: 3 (5 votes) · EA · GW

My impression is that people do overestimate the cost of not eating meat, or of veganism, by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. It might need more fleshing out, but here it is.

So suppose you are trying to quantify what you think the sacrifice of being vegan is, relative either to being vegetarian or to an average diet. If I were asked the minimum amount of money I would have to have received to be vegan rather than non-vegan for the last 5 years, assuming zero ethical impact of any kind, it would probably be $500 (with hindsight; cue the standard list of possible biases). This doesn't seem very high to me. My experience has been that most people who have become vegan say they vastly overestimated the sacrifice involved.
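Put as arithmetic (a toy sketch: the $500 and 5-year figures are from the paragraph above; the 3-meals-a-day figure is an assumption), the implied "price of veganism" is small per meal:

```python
# Toy arithmetic for the "price of veganism" thought experiment.
# The $500-over-5-years figure comes from the comment; 3 meals/day
# is an assumed round number.
total_compensation = 500      # dollars demanded to go vegan for the period
years = 5
meals_per_year = 3 * 365

per_year = total_compensation / years   # dollars per year
per_meal = per_year / meals_per_year    # dollars per meal
print(per_year, round(per_meal, 2))     # 100.0 0.09
```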

If one thought there were diminishing returns on the sacrifice of being vegan over vegetarian, perhaps the calculus is better for being vegetarian over non-vegan, or for being vegan 99% of the time, making an exception only when eating at your grandparents' house, say. I see too many people say, 'Well, I can't be vegan because I don't want to upset my grandpa when he makes his traditional X dish.' Well, OK, so be vegan in every other aspect then. And as a personal anecdote, when my nonna found out she couldn't make her traditional Italian dishes for me anymore, she got over it very quickly and found vegan versions of all of them [off-topic, apologies!].

I also suspect that the reason people are comfortable thinking about longtermism and sacrifice like this for non-humans but not for humans is that they may think humans are still significantly more important. I think this is the case when you count flow-on effects, but not intrinsically (e.g. 1 unit of suffering for a human vs a non-human).

I think the intrinsic worth ratio for most non-human animals is close to 1 to 1. I think the evidence suggests their capacity for suffering is fairly close to ours, and some animals might arguably have an even higher capacity for suffering than us (I should say I'm a strictly wellbeing/suffering-based utilitarian in this).

I think the burden of proof should be on someone to show why humans deserve significantly greater intrinsic moral worth. We all evolved from a common ancestor, and while there might be a sliding scale of moral worth from us down to insects, it seems strange for there to be such a sharp drop-off after humans, even within mammals. Given our constantly expanding circle of moral consideration throughout history, I would strongly err on the side of caution when applying this to my ethics.

Comment by michaeldello on How should longtermists think about eating meat? · 2020-05-17T22:35:58.078Z · score: 14 (7 votes) · EA · GW

Self-plugging as I've written about animal suffering and longtermism in this essay:

http://www.michaeldello.com/terraforming-wild-animal-suffering-far-future/

To summarise some key points: much of the reason I think promoting veganism in the short term is worthwhile in the long term is values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long-term implications.

People are already talking about introducing plants, insects and animals to Mars as a means of terraforming it. This would enormously increase the amount of wild-animal suffering. Even if we never leave our solar system, terraforming just one body, let alone several, could nearly double the amount of wild-animal suffering. There's also the possibility of bringing factory farms to Mars. I'm studying a PhD in space science and still get shut down when I try to say, 'Hey, let's maybe think about not bringing insects to Mars.' This is far from being a practical concern (maybe 100-1,000 years off), but it's never too early to start shifting social norms.

I'd call this mid term rather than long term, but the impacts of animal agriculture on climate change, zoonotic disease spread and antibiotic resistance are significant.

I'd like to echo Peter's point as well. We don't ask these questions about a lot of other actions that would be unethical in the short term; there seems to be a bias in EA circles towards asking this kind of question about non-human animal exploitation. I'm arguing for consistency, not saying we can't argue that a short-term good could have long-term consequences that make it net bad.

Comment by michaeldello on Save the Date for EA Global Boston and San Francisco · 2017-03-14T11:27:54.883Z · score: 0 (0 votes) · EA · GW

Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?

Comment by michaeldello on Ethical Reaction Time: What it is and why it matters · 2017-03-14T11:25:59.392Z · score: 2 (2 votes) · EA · GW

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

Comment by michaeldello on The asymmetry and the far future · 2017-03-14T11:18:38.467Z · score: 0 (2 votes) · EA · GW

Thanks for this, John. I agree that even under some form of classical utilitarianism, the future might still plausibly be net negative in value. As far as I can tell, Bostrom and co don't consider this possibility when they argue for the value of existential risk research, which I think is a mistake. They mostly talk about the expected number of human lives in the future if we don't succumb to X-risk, assuming those lives are all (or mostly) positive.

Comment by michaeldello on The asymmetry and the far future · 2017-03-14T11:15:39.179Z · score: 2 (2 votes) · EA · GW

Just to add to this, in my anecdotal experience, it seems like the most common argument amongst EAs for not focusing on X-risk or the far future is risk aversion.

Comment by michaeldello on Vote Pairing is a Cost-Effective Political Intervention · 2017-02-27T09:18:59.999Z · score: 3 (3 votes) · EA · GW

I have one concern about this which might reduce estimates of its impact. Perhaps I'm not really understanding it, and perhaps you can allay my concerns.

First, the claim that this is a good thing to do assumes you can be quite certain about which candidate/party is going to make the world a better place, which is pretty hard.

But even if we grant that we did pick the best candidate, there doesn't seem to be anything stopping the other side from doing the same thing. I wonder if reinforcing the norm of vote swapping just leads to a zero-sum game where supporters of candidate A vote-swap as much as supporters of candidate B. So on the margin, engaging in vote swapping seems obviously good, but at a system level, promoting it seems less obviously good.

Does this make any sense?

Comment by michaeldello on The Map of Impact Risks and Asteroid Defense · 2017-02-19T23:05:34.135Z · score: 0 (0 votes) · EA · GW

Thanks for writing this. One point that you missed is that it is possible that, once we develop the technology to easily move the orbit of asteroids, the asteroids themselves may be used as weapons. Put another way, if we can move an asteroid out of an Earth-intersecting orbit, we can move it into one, and perhaps even in a way that targets a specific country or city. Arguably, this would be more likely to occur than a natural asteroid impact.

I read a good paper on this but unfortunately I don't have access to my drive currently and can't recall the name.

Comment by michaeldello on If you want to disagree with effective altruism, you need to disagree one of these three claims · 2016-09-27T23:04:09.684Z · score: 4 (4 votes) · EA · GW

I'd like to steelman a slightly more nuanced criticism of Effective Altruism. It's one that, as Effective Altruists, we might tend to dismiss (as do I), but non-EAs see it as a valid criticism, and that matters.

Despite efforts, many still see Effective Altruism as missing the underlying causes of major problems, like poverty. Because EA has tended to focus on what many call 'working within the system', a lot of people assume that is what EA explicitly promotes. If I thought a movement were saying something like 'you can solve all the world's problems by donating enough', I might have reservations too. Critics worry that EA does not give enough credence to the value of building community and social ties.

Of course, articles like this (https://80000hours.org/2015/07/effective-altruists-love-systemic-change/) have been written, but it seems this is still being overlooked. I'm not arguing we should necessarily spend more time trying to convince people that EAs love systemic change, but it's important to recognise that many people have, what sounds to them, like totally rational criticisms.

Take this criticism (https://probonoaustralia.com.au/news/2015/07/why-peter-singer-is-wrong-about-effective-altruism/ - which I responded to here: https://probonoaustralia.com.au/news/2016/09/effective-altruism-changing-think-charity/). Even after addressing the author's concerns about EA focusing entirely on donating, he still contacted me with concerns that EA is going to miss the unintended consequences of reducing community ties. I disagree with the claim, but this makes sense given his understanding of EA.

Comment by michaeldello on Students for High Impact Charity: Review and $10K Grant · 2016-09-27T22:41:30.420Z · score: 4 (4 votes) · EA · GW

Thanks for this Peter, you've increased my confidence that supporting SHIC was a good thing to do.

A note regarding other social movements targeting high schools (more a point for Tee, whom I'll let know I've mentioned this): I'm unsure how prevalent the United Nations Youth Association is in other countries, but in Australia it has a strong following. It has two types of member: facilitators (post-high-school) and delegates (high-school students). The facilitators run workshops about social justice and UN-related issues, and model UN debates.

The model is largely self-sustaining, and students always look forward to the next weekend conference, which is full of fun activities.

At this point I don't have an idea for how such a model might be applied to SHIC, but it could be worth keeping in mind for the future.

An alternative might be to approach UNYA to get a SHIC workshop into their curriculum. I don't know how open they would be to this, but I'm willing to try through my contacts with UNYA in Adelaide.

Comment by michaeldello on The need for convergence on an ethical theory · 2016-09-25T22:45:56.638Z · score: 3 (2 votes) · EA · GW

This is a good point, Dony; perhaps avoiding the worst possible outcomes is better than seeking the best possible outcomes. I think the Foundational Research Institute has written something to this effect from a far-future suffering/wellbeing perspective, but the same might hold for promoting/discouraging ethical theories.

Any thoughts on the worst possible ethical theory?

Comment by michaeldello on Review of EA Global 2016 Marketing · 2016-09-21T10:48:25.958Z · score: 0 (0 votes) · EA · GW

Thanks for this Kerry. I'm surprised that cold email didn't work, as I've had a lot of success using cold contact of various organisations in Australia to encourage people outside of EA to attend EA events. Would you mind expanding a little on what exactly you did here, e.g. what kinds of organisations you contacted?

Depending on the event, I've had a lot of success with university clubs (e.g. philosophy clubs, groups for specific charities like Red Cross or Oxfam, general anti-poverty clubs, animal rights/welfare clubs) and the non-profit sector generally. EA Sydney also had a lot of success promoting an 80K event partly by cold contacting university faculty heads asking them to share the workshop with their students (though I note Peter Slattery would be much better to chat to about the relative success of different promotional methods for this last one).

Could you please expand on what you mean by "Identify one “superhero” EA"? What is the purpose of this?

Comment by michaeldello on The need for convergence on an ethical theory · 2016-09-21T09:32:13.219Z · score: 2 (1 votes) · EA · GW

People have made some good points that have shifted my views slightly. The focus shouldn't be on seeking convergence at any cost, but simply on achieving the best outcome. Converging on a bad ethical theory would be bad (although I'm slightly strawmanning myself here).

However, I still think that something should be done about the fact that we have so many ethical theories and have been unable to agree on one since the dawn of ethics. I can't imagine that this is a good thing, for some of the reasons I've described above.

How can we get everyone to agree on the best ethical theory?

Comment by michaeldello on The need for convergence on an ethical theory · 2016-09-21T09:28:06.440Z · score: 1 (1 votes) · EA · GW

Thanks for sharing the moral parliament set-up Rick. It looks good, but looks incredibly similar to MacAskill's Expected Moral Value methodology!

I disagree a little with you, though. I think some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital, etc.). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which action maximises pleasure and minimises pain). The answer may not be immediately clear, especially in tricky scenarios, and perhaps we can't be 100% certain about which action is best, but that doesn't mean there isn't an answer.

Regarding your last point about the downsides of taking utilitarianism to its conclusion, I think that (in theory at least) utilitarianism should take these into account. If applying utilitarianism harms your personal relationships and mental growth and ends up in a bad outcome, you're just not applying utilitarianism correctly.

Sometimes the best way to be a utilitarian is to pretend not to be one, and there are plenty of examples of this in everyday life (e.g. not donating 100% of your income, because you might burn out, or might set an example no one wants to follow, etc.).

Comment by michaeldello on The need for convergence on an ethical theory · 2016-09-19T22:41:39.420Z · score: 0 (0 votes) · EA · GW

Thanks Michael, some good points. I had forgotten about EMV, which is certainly applicable here. The trick would be convincing people to think in that way!

Your third point is well taken - I would hope that we converge on the best moral theory. Converging on the worst would be pretty bad.

Comment by michaeldello on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-17T23:46:30.504Z · score: 2 (2 votes) · EA · GW

I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn't win (and probably even if it does) I'll share it here.

I think it's a very real and troubling concern. Bostrom seems to assume that if we populated the galaxy with minds (digital or biological), that would be a good thing, but even if we only consider humans, I'm not sure that's totally obvious. When you throw wild animals and digital systems into the mix, things get scary.

Comment by michaeldello on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-14T23:55:52.462Z · score: 1 (1 votes) · EA · GW

Thanks, there are some good points here.

I still have this feeling, though, that some people support some causes over others simply for the reason that 'my personal impact probably won't make a difference', which seems hard to justify to me.

Comment by michaeldello on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-14T23:53:28.926Z · score: 3 (3 votes) · EA · GW

Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is.

Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

Comment by michaeldello on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-08-26T05:21:56.246Z · score: 4 (4 votes) · EA · GW

Thanks for writing this. One small critique:

"For example, Brian Tomasik has suggested paying farmers to use humane insecticides. Calculations suggest that this could prevent 250,000 painful deaths per dollar."

I'm cautious about the sign of this. Given that insects are expected to have net-negative lives anyway, perhaps speeding up their deaths is actually preferable, unless we think an insect's death from pesticide is more painful than a natural death plus the suffering over the remainder of its life.

But overall, I would support the recommendation that OPP supports WAS research.

Comment by michaeldello on The Meat Eater Problem: Developing an EA Response · 2016-08-14T12:38:09.981Z · score: 0 (0 votes) · EA · GW

It looks like you're subscribing to a person-affecting philosophy, whereby you say potential future humans aren't worthy of moral consideration because they're not being deprived, but bringing them into existence would be bad because they would (could) suffer.

I think this is arbitrarily asymmetrical, and not really compatible with a total utilitarian framework. I would suggest reading the relevant chapter in Nick Beckstead's thesis 'On the overwhelming importance of shaping the far future', where I think he does a pretty good job at showing just this.

Comment by michaeldello on Earning to Give v. Pursuing your Passion/Direct Work · 2016-08-01T03:15:56.090Z · score: 2 (2 votes) · EA · GW

I did earning to give for 18 months in a job that I thought I would really enjoy but after 12 months realised I didn't. I'm now doing a PhD.

I think personal fit is pretty important, but at the end of the day it's still just one more consideration, not the be-all and end-all. It's a valid point that you will perform better in a role you enjoy, and thus advance further and have more impact, but if you're really trying to maximise impact there are limits to that (e.g. Hurford's example about surfing, unless surfing-to-give can be a thing).

So you should probably pick a job that you enjoy, but it's unlikely that the career where you will have the greatest marginal impact is also the career that you most enjoy. If it is, you're very lucky indeed. Otherwise, I would suggest finding some kind of balance.

Comment by michaeldello on Month-long EA movement building experiment: Effective Altruism: Grow · 2016-07-09T13:06:20.402Z · score: 2 (2 votes) · EA · GW

I noticed there doesn't seem to be an option to nominate less than 5 people. Not sure if this is a feature but I wanted to just nominate a few people and was unable to.

Comment by michaeldello on Are GiveWell Top Charities Too Speculative? · 2016-07-07T00:42:30.901Z · score: 0 (0 votes) · EA · GW

I think the value of higher-quality and more information about wild-animal suffering would still be net positive, meaning that funding research into WAS could be highly valuable. I say 'could' only because something else might still be more valuable. But if, in expected value terms, it seems like the best thing to do, the uncertainties shouldn't put us off much, if at all.

Comment by michaeldello on (Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election · 2016-07-06T02:13:59.278Z · score: 0 (0 votes) · EA · GW

Happy to hear what they are Alex.

The final article had a title change and it was made clear numerous times that it was a personal analysis, not necessarily representing the views of Effective Altruism. In fact, we worked off the premise of voting to maximise wellbeing, not to further EA.

I posted it here and shared it with EAs because they are used to thinking about ways to maximise wellbeing, and I've never seen an analysis that looks at multiple parties and policies to try to select the 'best' party (many have agreed this doesn't seem to have been done before). I figured including 'draft' in the title would make it clear that this is by no means a final piece, but perhaps I could have been clearer. I think making no attempt at all to select the best party is also problematic.

Here is the final piece if you are interested, although the election is over now.

http://www.michaeldello.com/?p=839

Comment by michaeldello on End-Relational Theory of Meta-ethics: A Dialogue · 2016-07-06T01:30:27.814Z · score: 0 (0 votes) · EA · GW

Regardless of whether or not moral realism is true, I feel we should act as though it is (and I would argue many Effective Altruists already do, to some extent). Consider a doctor who proclaims that they just don't value people being healthy and doesn't see why they should. All the other doctors would rightly call them crazy and ignore them, because the medical system assumes we value health. In the same way, the field of ethics came about (I would argue) to try to find the most right thing to do. If an ethicist comes out and says that the most right thing to do is to kill whomever you like without justification (ignoring flow-on effects, of course), we should be able to say they are just crazy: one, because wellbeing is what we have decided to value, and two, because wellbeing is associated with positive brain states, and why value something if it has no link to conscious experience? What would the world be like if we accepted that these people just have different values and 'who are we to say they are wrong'?

Imagine the Worst Possible World of Sam Harris, full of near-infinite suffering for a near-infinite number of mind states. This is bad, if the word bad means anything at all. If you think this is not bad, then we probably mean different things by 'bad'. Any step to move away from this is therefore good. There are right and wrong ways to move from the Worst Possible World to the Best Possible World, and to an extent we can determine what these are.

I haven't fully formed this idea yet, but I'm writing a submission to Essays in Philosophy about this with Robert Farquharson. An older version of our take on this is here: http://www.michaeldello.com/?p=741

Comment by michaeldello on (Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election · 2016-06-28T10:15:00.654Z · score: 0 (0 votes) · EA · GW

Thanks for everyone's feedback. The article has now been published and is a living document (we will edit daily based on feedback) until the election.

http://www.michaeldello.com/?p=839

Comment by michaeldello on (Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election · 2016-06-28T08:38:18.202Z · score: 1 (1 votes) · EA · GW

Hey Kieran, a few more sections have been added since I did this post, including animal welfare. Check out the Google Document for the latest version.

Comment by michaeldello on (Draft & looking for feedback/review) How to vote like an EA in the Australian Federal election · 2016-06-24T09:40:17.197Z · score: 0 (0 votes) · EA · GW

Please note that this is not a final recommendation, and is not intended to be read as such. Please don't share this beyond EA circles yet unless there is someone who might be particularly suited to helping to make this more rigorous and/or useful.

Comment by michaeldello on The morality of having a meat-eating pet · 2016-06-05T08:35:06.625Z · score: 0 (0 votes) · EA · GW

Very true David, but then the same could be said of being vegan to a lesser extent.

This article was targeted more towards the vegan community in general, not just EAs (though I cross posted it here because I thought it might be useful). Most non-EAs wouldn't think about donations that way, and probably wouldn't donate the $20,000 if they didn't get a pet.

Comment by michaeldello on The morality of having a meat-eating pet · 2016-06-05T08:33:30.907Z · score: 1 (1 votes) · EA · GW

If you don't get your pets from a 'no-kill shelter', that might not be the case. In that situation, if you don't get the pet, they might just be put down.

Comment by michaeldello on The morality of having a meat-eating pet · 2016-06-05T08:32:01.082Z · score: 0 (0 votes) · EA · GW

Very true - I wasn't sure what the difference would be between non-by-product and by-product consumption. I suspect it's somewhere between what I stated and no effect, so this estimate could be an upper bound.

Comment by michaeldello on The morality of having a meat-eating pet · 2016-06-05T08:29:22.132Z · score: 0 (0 votes) · EA · GW

It would be interesting to see a study on this, it certainly seems plausible - a survey asking for the number of family pets throughout childhood and their current dietary choices might be illuminating.

In any case, I would still argue that this should be done with a non-meat-eating pet over a meat-eating one.

Comment by michaeldello on Global poverty could be more cost-effective than animal advocacy (even for non-speciesists) · 2016-06-04T07:34:44.594Z · score: 2 (2 votes) · EA · GW

"The biggest takeaway here is that animal charity research is a really good cause."

I agree. If we're highly certain we've found the best poverty interventions (or close to them), and the best animal interventions might be ~250x as effective, that argues for increased animal charity research. But Peter is definitely right that the higher robustness of existing human interventions (ignoring flow-on effects like the poor meat-eater problem) is a potentially valid reason to pick poverty interventions now over animal interventions now.
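The tension between a large multiplier and low robustness can be made concrete with a toy expected-value calculation (all numbers here are illustrative assumptions, not real cost-effectiveness estimates):

```python
# Toy expected-value comparison under uncertainty.
# All numbers are illustrative assumptions, not real estimates.
poverty_value = 1.0        # benchmark: value per dollar of the best poverty intervention
animal_multiplier = 250.0  # claimed relative effectiveness, if the estimate holds
p_estimate_holds = 0.01    # subjective credence that the animal estimate is roughly right

# Naive EV: even at 1% credence the animal intervention comes out ahead,
# which is why robustness arguments need more than a low probability alone.
animal_ev = p_estimate_holds * animal_multiplier * poverty_value
print(animal_ev)  # 2.5
```

This is of course only the naive calculation; the robustness argument is precisely that such speculative multipliers shouldn't be taken at face value.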

Comment by michaeldello on The morality of having a meat-eating pet · 2016-06-03T22:57:43.260Z · score: 0 (0 votes) · EA · GW

Sure, I think any way of reducing the population/proportion of meat eating pets would be, on the whole, a good thing.

I'd also predict a positive correlation between affluence and having a pet, which might mean that societies coming out of poverty result in more animal consumption than the 'poor meat-eater problem' alone suggests.

Comment by michaeldello on Advice Wanted on Expanding an EA Project · 2016-05-01T10:02:47.737Z · score: 0 (0 votes) · EA · GW

I wanted to take part in the essay competition and categorise the space related risks and solutions to food (related to my PhD in space science) though unfortunately didn't have time. Will this competition be recurring? If not, it's something I'd like to write about anyway.

Comment by michaeldello on Looking for Wikipedia article writers (topics include many of interest to effective altruists) · 2016-04-26T12:21:27.374Z · score: 0 (0 votes) · EA · GW

I'm interested in working on the animal welfare section. I'm intending to do my own research on this in the near future anyway. In particular I'm interested in trying to find evidence and arguments for the effectiveness of different approaches to animal activism.

Comment by michaeldello on Looking for Wikipedia article writers (topics include many of interest to effective altruists) · 2016-04-26T12:18:45.414Z · score: 0 (0 votes) · EA · GW

The ACE article got removed? Do you have any idea why? I only skimmed it, but it looked like a reasonable article.

Comment by michaeldello on New climate change report from Giving What We Can · 2016-04-23T07:00:04.282Z · score: 2 (2 votes) · EA · GW

I haven't read the articles yet, though I did study climate change as part of my undergraduate and externally, so I'll have a crack at answering your technical question (Q3).

The point of mitigation is to reduce greenhouse gas emissions (including carbon dioxide and methane) or to capture and store them (there are a number of ways to do this: underground storage, growing trees, etc.). An individual CO2 molecule actually has a fairly short residence time in the atmosphere, but the carbon it adds cycles between the atmosphere and the ocean for up to centuries. Methane is also a big problem: it has a larger impact on warming (around 20 times greater) but a shorter residence time (around 8 years in the atmosphere). For this reason, some people propose that mitigation, at least in the short term, should focus on reducing methane, as it has a larger short-term effect. If we hypothetically cut methane emissions to zero (not going to happen, as there are still natural sources, but for argument's sake), the warming impact of methane would disappear quite quickly, compared to that of carbon dioxide.

To answer the rest of your question: because carbon is transferred from the atmosphere to the oceans, soil and organic matter over time, there is some rate at which greenhouse gases (GHGs) can be put into the atmosphere without causing warming, as the Earth as a whole can absorb them (assuming no man-made carbon capture). This is the equilibrium rate. We have well and truly exceeded it through man-made emissions, which is why the planet is warming. If we could reduce emissions to the equilibrium rate and below, the planet would, eventually, start to cool. And even if we only expect to get below equilibrium in 100 years, any reduction in emissions now means the atmosphere will have warmed less by the time we get there.
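The methane-vs-CO2 trade-off above can be illustrated with a toy model. This is my own sketch, not from the original comment: it assumes simple exponential decay of a one-off emissions pulse, using the rough figures quoted above (20x potency, ~8-year methane lifetime) and an illustrative effective CO2 lifetime of a century, not precise IPCC values.

```python
import math

METHANE_LIFETIME = 8.0    # years in the atmosphere (figure used above)
CO2_LIFETIME = 100.0      # illustrative effective lifetime, order of a century
METHANE_POTENCY = 20.0    # warming impact relative to CO2, per the comment


def remaining_forcing(potency: float, lifetime: float, years: float) -> float:
    """Relative warming effect left from a unit emissions pulse after `years`,
    assuming simple exponential decay."""
    return potency * math.exp(-years / lifetime)


for years in (0, 10, 30, 100):
    ch4 = remaining_forcing(METHANE_POTENCY, METHANE_LIFETIME, years)
    co2 = remaining_forcing(1.0, CO2_LIFETIME, years)
    print(f"after {years:3d} yrs: methane {ch4:6.2f}, CO2 {co2:5.2f}")
```

Under these toy assumptions, methane starts out far more potent but its effect falls below CO2's within a few decades, which is exactly why cutting methane pays off quickly while cutting CO2 matters for the long run.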

Comment by michaeldello on The Poor Meat Investor Problem · 2016-04-22T11:35:13.437Z · score: 2 (2 votes) · EA · GW

Nice discussion, this is something I've thought about before but haven't put to paper.

As for the effectiveness of using animals to lift people out of poverty vs other methods, I'm not in a position to comment. I can see why the well-being of animals wouldn't be considered in the economic equation (though I disagree with that), for the very line of reasoning you've proposed about certain subsets of humanity not being considered in years past.

Even as a non-speciesist, from a utilitarian standpoint, I could still see the 'possibility' of animals-as-investment being a good option, in that humans tend to have more flow-on effects than animals. Increase the well-being of a human and bring them out of poverty, and they might go on to develop their nation's economy, reduce population growth (through the relationship between child mortality and population growth), and develop new technology. Increase the well-being of an animal and not much happens beyond that. Having said that, there are also negative flow-on effects of reducing poverty, such as the poor meat-eater problem and, I suspect, increased environmental damage.

Even if we accept that, though, taking the long view on animal welfare means we ought to search for viable and effective alternatives to using animals to bring people out of poverty. If we eliminated animal use today, many billions of future animals wouldn't be exploited, in a way somewhat reminiscent of the X-risk argument.

It comes back to the practical nature of the question, though. Presumably animals are used to reduce poverty because the approach works. Some important work could be in proving that this is not the case (if it isn't), proving that something else is better (if it is), or finding/developing a more effective solution that doesn't involve animal abuse. The fact of the matter is, people who don't mind animal exploitation, or place very low weight on it, will do whatever is most effective for the humans involved. If that happens not to involve animals, all the better.

As an afterthought, it's similar to the problem of global warming. In an ideal world, few people actually want fossil fuels to be the leading source of energy; they just happen to be the most cost-effective and easiest option in the short term. Find a better solution and the market will switch (assuming no mass misleading of the populace), almost no question.

Comment by michaeldello on Guidelines on depicting poverty · 2016-04-09T09:52:02.403Z · score: 1 (1 votes) · EA · GW

I'm not sure that simplification and gamification should be intrinsically bad things. Situationally, both can be used to do a lot of good. The Gaming for Good events, run by Bachir Boumaaza (Athene), could be described as 'gamification', yet they raised nearly $15 million US for Save the Children. Setting aside the fact that Save the Children isn't a GiveWell-recommended charity (let's imagine for the sake of argument that it is, or that the charity was AMF), would that outweigh any negative impacts of gamification?

Comment by michaeldello on The great calculator · 2016-03-30T10:51:17.471Z · score: 0 (0 votes) · EA · GW

Great comment, you've convinced me. Thanks for the link as well, it looks interesting.

Comment by michaeldello on The great calculator · 2016-03-29T09:47:15.668Z · score: 0 (0 votes) · EA · GW

Thanks for the feedback everyone. Lots of recurring themes, so I'll address them partly here.

The main point is this: the end market is not Effective Altruists. I don't think adding too much complexity for the sake of accuracy, at least on the front end, is likely to result in any meaningful reduction in animal suffering. The point is not to be deceitful or to bias people, but simply to maximise the reduction in animal suffering.

As someone said at the EA Global 2015 conference in Melbourne, "Sometimes the best way to be a utilitarian is to pretend to not be a utilitarian", which I loosely take to mean that we should sometimes drop our perceived moral or analytical rigour in order to actually do more good.

Perhaps there could be two versions; one which is completely rigorous, contains elements of x-risk (as some people have suggested) and is targeted at existing EAs, while the other is targeted at the broader public.

On a related note, I'm yet to do the calculation, but I'm of the mind that current estimates of animal welfare charities' impact are actually underestimates, as they don't factor in the long-run benefits of reducing the proportion of humanity that relies on subjecting animals to suffering. The earlier we bring about a society that doesn't inflict suffering on animals, the fewer future animals will suffer. I find this tends not even to be mentioned when people compare animal welfare orgs to x-risk orgs.

But I'm very open to continuing this discussion. As I've said, these are early days for what was an idea I wanted to get into the public space.

Comment by michaeldello on The great calculator · 2016-03-29T09:46:40.779Z · score: 0 (0 votes) · EA · GW

Thanks for your comments, see my other responses, particularly around the question of rigour vs. impact.

Comment by michaeldello on The great calculator · 2016-03-29T09:46:22.506Z · score: 0 (0 votes) · EA · GW

Thanks for your comments, see my other responses, particularly around the question of rigour vs. impact.

Comment by michaeldello on The great calculator · 2016-03-29T09:44:40.197Z · score: 0 (0 votes) · EA · GW

Good idea; I've had a similar discussion with someone else about this. It could work well for a separate calculator targeted at EAs, but since this one is aimed at people who aren't EA-aligned, I don't think it fits here, for the following reasons:

  1. It would complicate the calculator: not to the point of being too complicated to be accurate, but to the point where it no longer resonates or feels meaningful to the vast non-EA population, who don't find such rigour very compelling.

  2. I think it's easier to first get people thinking about their morals and effectiveness in terms of present-day animals (human and non-human) than to go straight for x-risk.

There could even be a follow-up calculator after this simple one (e.g. "Feeling compelled? Click here for an even more shocking calculator").

Comment by michaeldello on The great calculator · 2016-03-29T09:35:01.961Z · score: 0 (0 votes) · EA · GW

I completely agree, but see my other comments about the relationship of accuracy/rigour vs impact.