Cosmic EA: How Cost Effective Is Informing ET? 2017-12-31T08:37:10.422Z


Comment by TruePath on Integrity for consequentialists · 2021-04-29T07:32:17.175Z · EA · GW

Re your first point: yup, they won't try to recruit others to that belief, but so what? That's already a bullet any utilitarian has to bite, thanks to examples like the aliens who will torture the world if anyone believes utilitarianism is true or tries to act as if it is. There is absolutely nothing self-defeating here.

Indeed, if we define utilitarianism as simply the belief that one's preference relation on possible worlds is dictated by the total utility in them, then it follows by definition that the best acts an agent can take are just the ones which maximize utility. So maybe the better way to phrase this is: why care what the agent who pledges to utilitarianism in some way, and wants to recruit others, might need to do or how they might need to act? That's a distraction from the simple question of what in fact maximizes utility. If that means convincing everyone not to be utilitarians, then so be it.


And yes, re the rest of your points, I guess I just don't see why it matters what would be good to do if other agents responded in some way you argue would be reasonable. Indeed, what makes consequentialism consequentialism is that you aren't acting based on what would happen if you imagine interacting with idealized agents, as a Kantian-esque theory might consider, but on what actually happens when you actually act.

I agree the caps were aggressive and I apologize for that, and I agree I haven't produced evidence that, in fact, how people respond to supposed signals of integrity tends to match what they see as evidence you follow the standard norms. That's just something people need to consult their own experience about and ask themselves whether, in their experience, that tends to be true. Ultimately, I think it's just not true that a priori analysis of what should make people see you as trustworthy (or have any other social reaction) is a good guide to what they will actually do.

But I guess that is just going to return to point 1 and our different conceptions of what utilitarianism requires.

Comment by TruePath on A mental health resource for EA community · 2021-04-29T07:07:13.205Z · EA · GW

Yes, and reading this again now I think I was way too harsh. I should have been more positive about what was obviously an earnest concern and desire to help, even if I don't think it's going to work out. A better response would have been to suggest other ideas to help, though the main one I can think of is reforming how medical practice works so mental suffering isn't treated as less important than physical debilitation (docs will agree to risky procedures to avoid physical loss of function but won't with mental illness, likely because the family doesn't see the suffering from the inside but does see the loss in a death, so they are liable to sue/complain if things go bad).

Comment by TruePath on The Importance of Truth-Oriented Discussions in EA · 2021-04-29T07:00:04.853Z · EA · GW

I apparently wasn't clear enough: I absolutely agree with and support things like icebreakers. But we shouldn't expect them to increase female representation, nor judge their effectiveness by how much they do. Absolutely do it, and do it for everyone who will benefit, but just don't be surprised if, even when we do it everywhere, it doesn't do much to affect the gender balance in EA.

I think if we just do it because it makes people more comfortable, without the gender overlay, not only will it be more effective and more widely adopted but we'll avoid the very real risk of creep ("we are doing this to draw in more women but we haven't seen a change, so we need to adopt more extreme approaches"). Let's leave gender out of it when we can, and in this case we absolutely can, because being welcoming helps lots of people regardless of gender.

Comment by TruePath on The Importance of Truth-Oriented Discussions in EA · 2021-04-29T06:53:51.271Z · EA · GW

No, I didn't mean to suggest that. But I did mean to suggest that it's not at all obvious that this kind of Schelling-style amplification of preferences is something it would be good to counteract. The archetypal example of Schelling-style clustering is a net utility win, even if a small one.

Comment by TruePath on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-29T06:46:07.955Z · EA · GW

I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia massively benefit from warming. Once they see that warming and take advantage of the newly opened land, they will view any attempt to artificially lower temperatures as an attack to be met with force, and they have enough fossil fuels to maintain the warm temperatures even if everyone else stops carbon emissions (which they can easily scuttle anyway).

IMO this concern is more persuasive than the risks of trying geoengineering.

But I disagree that geoengineering isn't going to happen soon. All the same reasons we aren't doing anything about global warming now are reasons we'll flip on a dime once we start seeing real harms.

Comment by TruePath on Defusing the mitigation obstruction argument against geoengineering and carbon dioxide removal · 2021-04-29T06:39:55.802Z · EA · GW

I ultimately agree with you but I think you miss the best argument for the other side. I think it goes like this:

  1. Humans are particularly bad at coordinating to reduce harms that are distant in time or that are small risks of large harms. In other words: out of sight, out of mind. We are much better at solving problems from which we experience at least some current harm, and we prefer to push harms off into the future or accept them as a low-probability event.

The argument for this point is buttressed by the very fact that we aren't doing anything about warming right now.

  2. Geoengineering takes the continuous harms from increasing temperatures and renders them discontinuous, increasing the risk of sudden major negative effects.

The argument here is that geoengineering lets us eliminate all negative effects as long as it remains in operation, but if the geoengineering mechanism ever fails we experience all the built-up warming at once. Maybe we get hit by a big solar flare and can't launch our sunshade or shoot our sulfur into the stratosphere.

Comment by TruePath on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T17:38:00.233Z · EA · GW

The parent post already responded to a number of these points but let me give a detailed reply.

First, the evidence you cite doesn't actually contradict the point being made. Just because women rate EA as somewhat less welcoming doesn't mean that this is the reason they return at a lower rate. Indeed, the alternate hypothesis that says it's the same reason women are less likely to be attracted to EA in the first place seems quite plausible.

As far as the quotes go, we can ignore the people simply agreeing that something should be done to increase diversity and talk about the specific reactions. I'll defer the one about reporting a sexist remark till the end and focus on the complaints about the environment. These don't seem to be complaints suggesting any particular animus or bad treatment of women or other underprivileged groups, merely people expressing a distaste for the kind of interactions they associate with largely male groups. However, other people do like that kind of interaction, so, like the question of what to serve for dinner or whether alcohol should be served, you can't please everyone. While it's true that in our society there is a correlation between male gender and a preference for a combative, interrupting, challenging style of interaction, there are plenty of women who also prefer this interaction style (and in my own experience at academic conferences gay men are just as likely as straight men to behave this way). Indeed, the argument that it's anti-woman to interact in a way that involves interrupting etc., when some women do prefer this style, is the very kind of harmful gender essentialism that we should be fighting against.

Of course, I think everyone agrees that we should do what we can to make EA more welcoming *when that doesn't impose a greater cost than benefit.* Ideally, there would be parts of EA that appeal to people who like every kind of interaction style, but there are costs in terms of community cohesion, resources, etc.

The parent was arguing, persuasively IMO, that imposing many of the suggested reforms would impose substantial costs elsewhere, not that they might not improve diversity or offer benefits to some people. I don't see you making a persuasive case that the costs cited aren't very real or that the benefits outweigh them.

This finally brings us to the complaint about where to report a sexist comment. While I think no one disagrees that we should condemn sexist comments, creating an official reporting structure with disciplinary powers is just begging to get caught up in the moderator's dilemma and create strife and argument inside the community. Better to leave that to informal mechanisms.

Comment by TruePath on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:12:36.929Z · EA · GW

Also, your concern about some kind of disaster caused by wireheading addiction and resulting deaths and damage is pretty absurd.

Yes, people are more likely to do drugs when they are more available, but even if the government can't keep the devices that enable wireheading from legal purchase, it will still require greater effort to put together your wireheading setup than it currently does to drive to the right part of the nearest city (discoverable via Google) and buy some heroin. Even if wireheading did become very easy to access, it's still not true that most people who have been given the option to shoot up heroin do so, and the biggest factor which deters them is the perceived danger or harm. If wireheading is more addictive/harmful, that perception will discourage use.

Moreover, for wireheading to pose a greater danger than just going to buy heroin, it would have to give greater control over brain stimulation (i.e. create more pleasure, etc.), and the greater our control over brain stimulation, the greater the chance we can exercise it in a way that doesn't create damage.

Indeed, any non-chemical means of brain stimulation is almost certain to be crazily safe, because once monitoring equipment detects a problem you can simply shut off the intervention, without the concern of long-half-life drugs remaining in the system and continuing the effect.

Comment by TruePath on Wireheading as a Possible Contributor to Civilizational Decline · 2018-11-13T01:04:29.735Z · EA · GW

You make a lot of claims here that seem unsupported and based on nothing but vague analogy with existing primitive means of altering our brain chemistry. For instance, a key claim that most of your consequences seem to depend on is this: "It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels 'too good', one will be able only to perform 'trainspotting', that is mindless staring at objects."

Why should this be true at all? The reason heroin abusers aren't very productive (and, IMO, heroin isn't the most pleasurable existing drug) is the effect opiates have as depressants, making users nod off, etc. The more control we achieve over brain stimulation, the less likely wireheading will have the kinds of side effects which limit functioning. Now, one might make a more subtle argument that the ability of even a directly stimulated brain to feel pleasure will be limited, and thus that if we directly stimulate too much pleasure we will no longer have the appropriate rewards to incentivize work; but it seems equally plausible that we will be able to separate pleasure from motivation/effort and actually enhance our inclination to work while instilling great pleasure.

Comment by TruePath on Where can I donate to support insect welfare? · 2017-12-31T08:23:24.000Z · EA · GW

I'm disappointed that the link about which invertebrates feel pain doesn't go into more detail on the potential distinction between merely learning from damage signals and the actual qualitative experience of pain. It is relatively easy to build a simple robot, or write a software program, that demonstrates reinforcement learning in the face of some kind of damage, but we generally don't believe such programs truly have a qualitative experience of pain. Moreover, the fact that some stimuli are both unpleasant yet rewarding (e.g. encourage repetition) indicates these notions come apart.
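To make the robot point concrete, here's a minimal sketch (entirely my own illustration, not from the linked piece) of a program that "learns from damage signals" by simple reinforcement. Nothing about it invites the thought that it feels anything:

```python
# A trivial two-action agent that learns to avoid a "damage" signal
# via value updates -- damage-avoidance learning with no plausible
# claim to a qualitative experience of pain.
import random

random.seed(0)
q = {"touch": 0.0, "avoid": 0.0}  # estimated value of each action
alpha = 0.5                        # learning rate

def reward(action):
    # Touching the hazard yields a "damage" signal (negative reward).
    return -1.0 if action == "touch" else 0.0

for _ in range(100):
    # Epsilon-greedy: mostly pick the best-valued action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(q))
    else:
        action = max(q, key=q.get)
    q[action] += alpha * (reward(action) - q[action])

print(q["avoid"] > q["touch"])  # True: it has "learned" to avoid damage
```

A dozen lines of bookkeeping reproduce the behavioral signature (avoiding damaging stimuli after experience) that is often cited as evidence of pain, which is exactly why that signature alone underdetermines the question.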

Comment by TruePath on Where can I donate to support insect welfare? · 2017-12-31T08:17:13.422Z · EA · GW

While this isn't an answer, I suspect that if you are interested in insect welfare you first need a philosophical/scientific program to get a grip on what that entails.

First, unlike other kinds of animal suffering, it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference to the total population. Thus, unlike with large animals, where one can find common ground between various consequentialist moral views, it seems quite likely that whether a particular intervention is good or actually harmful for insects will often turn on subtle questions about one's moral views, e.g., average utility or total, does the welfare of possible future beings count, is the life of your average insect a net plus or minus.
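The average-vs-total divergence is easy to see with made-up numbers (all figures below are invented purely for illustration):

```python
# Illustrative only: how total and average utilitarianism can disagree
# about an intervention that changes population size.

def total_utility(population, per_capita):
    return population * per_capita

# Status quo: a billion insects at per-capita welfare 0.10.
# Hypothetical intervention: doubles the population but drops
# per-capita welfare to 0.07.
before_total = total_utility(1e9, 0.10)
after_total = total_utility(2e9, 0.07)
before_avg, after_avg = 0.10, 0.07

print(after_total > before_total)  # True: the total view favors it
print(after_avg > before_avg)      # False: the average view opposes it
```

The same intervention comes out good on one standard consequentialist view and bad on another, which is the sense in which insect interventions are unusually sensitive to one's background moral theory.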

As such simply donating to insect welfare risks doing (what you feel is) a great moral harm unless you've carefully considered these aspects of your moral view and chosen interventions that align with them.

Secondly, merely figuring out what makes insects better off is hard. While our intuitions can go wrong, it's not too unreasonable to think that we can infer other mammals' and even vertebrates' level of pain/pleasure based on analogies to our own experiences (a dog yelping is probably in pain). However, when it comes to something as different as an insect, it's unclear if it's even safe to assume an insect's neural response to damage feels unpleasant at all. After all, surely at some low enough level of complexity we don't believe a lifeform's response to damage manifests as a qualitative experience of suffering (even though the tissues in my body can react to damage, and even change behavior to avoid further damage, without interaction with my brain, we don't think my liver can experience pain on its own). At the very least, figuring out what kinds of events might induce pain/pleasure responses in an insect would require some philosophical analysis of what is known about insect neurobiology.

Finally, it is quite likely that the indirect effects of any intervention on the wider insect ecosystem, rather than any direct effect, will have the largest impact. As such, it would be a mistake to engage in any interventions without first doing some in-depth research into the downstream effects.

The point of all this is that, with respect to insects, we need to support academic study and consideration before actually engaging in any interventions.

Comment by TruePath on High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next · 2017-08-29T06:59:00.311Z · EA · GW

I'm a huge supporter of drug policy reform and try to advocate for it as much as I can in my personal life. Originally, I was going to post here suggesting we need a better breakdown of particular issues which are particularly ripe for policy reform (say, reforming how drug weights are calculated) and of the relative effectiveness of various interventions (lobbying, ads, lectures, etc.).

However, on reflection I think there might be good reasons not to get involved in this project.

Probably the biggest problem for both EA and drug policy reform is the perception that the people involved are just a bunch of weirdos (we're emotionally stunted nerds and they are a bunch of stoners). This perception reduces donations to EA causes (you don't get the same status boost if it's weird) and stops people from listening to the arguments of people in DPR.

If EA is seen as a big supporter of DPR efforts, this risks making the situation worse for both groups. I can just imagine an average LessWrong contributor being interviewed on TV about why he supports DPR, and when the reporter asks how this affects him personally he starts enthusiastically explaining his use of nootropics, and the public dismisses the whole thing as just weird druggies trying to make it legal to get high. This doesn't mean those of us who believe in EA can't quietly donate to DPR organizations, but it probably does prevent us from doing what EA does best: determining the particular interventions that work best at a fine-grained level and doing them.

This makes me skeptical that this is a particularly good place to intervene. If we are going to work on policy change at all, we should pick an area where we can push for very particular effective issues without the risk of backlash (to both us and DPR organizations).

Comment by TruePath on Does Effective Altruism Lead to the Altruistic Repugnant Conclusion? · 2017-07-29T05:41:55.666Z · EA · GW

This is, IMO, a pretty unpersuasive argument, at least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large gains. I don't even find this particularly difficult to swallow. Indeed, I can explain away our feeling that somehow this shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round sufficiently small hurts down to zero.

Also, I would suggest that many of the examples that seem problematic are deliberately rigged so that the overt description (a world with many people with a small amount of positive utility) presents the situation one way while the flavor text is phrased so as to trigger our empathetic/what-it's-like response as if the world didn't satisfy the overt description. For instance, if we remove the flavor about it being a very highly overpopulated world and simply say "consider a universe with many, many beings, each with a small amount of utility", then finding that superior no longer seems particularly troubling; it just states the principle allowing addition of utilities in the abstract. However, sneak in the flavor text that the world is very overcrowded, and the temptation is to imagine a world which is ACTIVELY UNPLEASANT to be in, i.e., one in which people have negative utility.
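Stripped of flavor text, the aggregation principle at issue is just a sum (the numbers below are arbitrary, chosen only to exhibit the structure):

```python
# Arbitrary numbers, just to state the aggregation principle nakedly:
# many tiny positive utilities can sum past a few large ones.
crowded_world = [0.01] * 1_000_000  # many lives barely worth living
small_world = [100.0] * 50          # a few very good lives

print(sum(crowded_world) > sum(small_world))  # True
# Note every entry in crowded_world is positive -- nothing here is
# "actively unpleasant", despite what the flavor text invites us to imagine.
```

Presented this way, preferring the first world is just the abstract principle; the sense of paradox only returns when "overcrowded" smuggles in imagined negative utilities.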

More generally, I find these kinds of considerations far more compelling as evidence that I have very poor intuitions for comparing the relative goodness/badness of some kinds of situations, and that I had better eschew any attempt to rely on those intuitions and dive into the math. In particular, the worst response I can imagine is to say: huh, wow, I guess I'm really bad at deciding which situations are better or worse in many circumstances; indeed, one can find cases where A seems better than B, B better than C, and C better than A considered pairwise; guess I'll throw out this helpful formalism and just use my intuition directly to evaluate which states of affairs are preferable.

Comment by TruePath on [deleted post] 2017-07-23T12:06:33.788Z

I'm wondering if it is technically possible to stop pyroclastic flows from volcanoes (particularly ones near population centers, like Vesuvius) by building barriers, and if so, whether it's an efficient use of resources. Not quite world-changing, but it is still a low-probability, high-impact issue, and there are US cities near volcanoes.

I'm sure someone has thought of this before and done some analysis.

Comment by TruePath on The Philanthropist’s Paradox · 2017-06-26T07:33:29.797Z · EA · GW

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is willing to actually endorse choices which eliminate the headache of an existing person at the cost of bringing into the world an infant who will be tortured extensively for all time (but whom no one currently existing will see and be made sad by).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true, there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Ship of Theseus-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line, because for them there are huge moral implications to where we draw it.

Moreover, both these views face serious puzzles about when an individual exists. Is it when they actually generate qualia (if not, you risk saying that the fact that they will exist in the future means they exist now)? How do we even know when that happens?

Comment by TruePath on A mental health resource for EA community · 2017-05-11T07:59:50.086Z · EA · GW

First I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to "get help" (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I do really believe that it causes harm when we pass on truisms like that and think of ourselves as helping, either by making those suffering feel like failures/hopeless/misunderstood ("just get help and it's all good") or by causing us to believe we've done our part. Maybe this is just irrational bias, I don't know.


While I like the motivation, I worry that this article does more to make us feel better that "something is being done" than it does anything for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms that any Google search would reveal actually saps our limited moral energy/consideration for those with mental illness (oh good, we've done our part).

Now, I'm sure the poster would defend this piece by saying: maybe most EA people with these afflictions won't get any new information from this, but some might, and it's good to inform them. Yes, if informing them were cost-free, it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog around the world, it seems we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the population in general, while I suspect they are informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.

I don't mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio, we should do it. So maybe a post identifying media that are particularly likely to reach afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.

Comment by TruePath on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-11T07:37:31.670Z · EA · GW

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that of all possible charitable donations trachoma is the most effective, merely that it offers the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both about blindness, so no hard comparisons across kinds of suffering/disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like "we prevent 1 year of blindness for every 3 surgeries done, which on average cost..." perfectly compelling, the audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to stop worrying about getting the numbers perfectly right and focus on the emotional impact of choosing to help a blind person in the US get a dog rather than helping many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes provided by a dog, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.
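The orders-of-magnitude point can be made with round hypothetical figures (these are NOT Singer's actual numbers, just placeholders to show that the conclusion survives generous error bars):

```python
# Hypothetical figures chosen only to illustrate the structure of the
# comparison; the real numbers differ but the gap is similarly large.
guide_dog_cost = 40_000  # assumed cost to train one guide dog (USD)
surgery_cost = 100       # assumed cost of one trachoma surgery (USD)

surgeries_per_dog = guide_dog_cost // surgery_cost
print(surgeries_per_dog)  # 400
```

Even if either placeholder were off by a factor of several, one donation would still fund hundreds of surgeries per dog, which is why quibbling over the exact per-surgery figure doesn't disturb the big picture.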

This isn't to say that your point here isn't important regarding people inside EA making estimates, or GiveWell analyses, or the like. I'm just pointing out that it's important to distinguish the kind of thing being done at a TED talk like this from what is being done by GiveWell. So long as, when people leave the TED talk, their research leaves the big picture in place (dogs >>>> trachoma surgery), it's a victory.

Comment by TruePath on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T12:46:14.435Z · EA · GW

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc.. (perhaps deceptively) and then use those to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do X" where X is a good thing.

2') Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do good".

3) Is it a good idea to support (or not object to) others who profess to be amassing wealth/power/etc. to do good?

Once broken down this way, it is clear that while 1 is obviously true, 2 and 3 aren't. Lacking the ability to perfectly bind one's future self means there is always the risk that you will instead use your influence/power for bad ends. 2' raises further concerns as to whether what you believe to be good ends really are good ends. This risk is compounded in 3 by the possibility that the people are simply lying about their good ends.

Once we are precise in this way, it is clear that it isn't the in-principle approval of amassing power to do good that is at fault but rather the trustworthiness/accuracy of those who undertake such schemes.

Having said this some degree of amassing power/influence as a precursor to doing good is probably required. The risks simply must be weighed against the benefits.

Comment by TruePath on Saving expected lives at $10 apiece? · 2017-01-12T12:00:29.932Z · EA · GW

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means to measure effectiveness.

WTF is with the downvote? I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than a suggestion/observation?

Comment by TruePath on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T11:43:28.605Z · EA · GW

The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.

No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements, and we make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no, or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where the donors are best equipped to make a careful judgement, e.g., eschewing "Give Now" impulse donations and fundraising parties with liquor and peer pressure, and insist on reminding us each time another donation is about to be deducted from our account, they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue, there is no question which is more important, and if that requires implying they are highly effective, so be it.

I mean, it's important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that went into fundraising and overhead. It's unlikely the increased effectiveness that resulted would make up for the huge losses from forcing people to face the unpleasant fact that even the best charities can only send a fraction of their donation to the intended beneficiaries.

What EA charities should do, however, is pursue a market-segmentation strategy: avoid any falsehoods (as well as annoying behavior likely to result in substantial criticism) when putting a good face on their situation/effectiveness, and make sure detailed, truthful, and complete data and analysis are available for those who put in the work to look for them.

Everyone is better off this way. No one is lied to. The charities get more money and can do more with it. The people who decide to give for impulsive or other less-than-rational reasons can feel good about themselves rather than feeling guilty they didn't put more time into their charitable decisions. The people who care about choosing the most effective, evidence-backed charitable efforts can access that data and feel good about themselves for looking past the surface. Finally, by having the same institution chase both the smart and the dumb money, the system works to funnel the dumb money toward smart outcomes (charities which lose all their smart money will tend to wither, or at least change practices).

Comment by TruePath on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T11:15:59.955Z · EA · GW

It seems to me that a great deal of this supposed "problem" is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally, or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.

It's a nice ideal that we should all nod and say 'yes, that's a valid criticism' when our baby is attacked, but it's not going to happen. There is nothing we can do about this aspect, so let's instead simply focus on avoiding the kind of unjustified claims that generated the trouble.

Of course, it is entirely possible that some level of deception is necessary to run a successful charity. I'm sure a degree of at least moral coercion is, e.g., asking people for money in circumstances where it would look bad not to give. However, I'm confident this can be done in the same way traditional companies deceive, i.e., by merely creating positive associations and downplaying negative ones rather than by outright lying.

Comment by TruePath on Saving expected lives at $10 apiece? · 2016-12-17T02:03:21.328Z · EA · GW

Lives saved is a very, very weird and mostly useless metric. At the very least, try to give an estimate in QALYs (quality-adjusted life years), since very few people actually value saving a life per se (e.g., stopping someone who is about to die of cancer from dying a few minutes earlier).

Given that many non-deaths from food scarcity are probably pretty damn unpleasant, this would likely be a more compelling figure.

Comment by TruePath on Three Heuristics for Finding Cause X · 2016-12-02T18:21:05.577Z · EA · GW

This doesn't actually provide anything like a framework to evaluate Cause X candidates. Indeed, I would argue it doesn't even provide a decent guide to finding plausible Cause X candidates.

Only the first methodology (expanding the moral sphere) identifies a type of moral claim that we have historically looked back on and found compelling. The second and third methods just list typical ways people in the EA community claim to have found Cause X. Moreover, there is good reason to think that successfully finding something that qualifies as Cause X will require coming up with something that isn't an obvious candidate.

Comment by TruePath on Integrity for consequentialists · 2016-12-02T16:19:21.680Z · EA · GW

I think this post is confused on a number of levels.

First, as far as ideal behavior is concerned, integrity isn't a relevant concept. The ideal utilitarian agent will simply always behave in the manner that maximizes expected future utility, factoring in the effect that breaking one's word or other actions will have on the perceptions (and thus future actions) of other people.

Now, the post rightly notes that as limited human agents we aren't truly able to engage in this kind of analysis. Both because of our computational limitations and because of our inability to deceive perfectly, it is beneficial to adopt heuristics about not lying, not stabbing people in the back, etc. (which we may judge to be worth abandoning in exceptional situations).

However, the post gives us no reason to believe its particular interpretation of integrity, "being straightforward," is the best such heuristic. It merely asserts the author's belief that this somehow works out to be the best.

This brings us to the second major point. Even though the post acknowledges that the very reason for considering integrity is that "I find the ideal of integrity very viscerally compelling, significantly moreso than other abstract beliefs or principles that I often act on," it proceeds to act as if it were considering what kind of integrity-like notion would be appropriate to design into (or socially construct in) some alternative society of purely rational agents.

Obviously, the way we should act depends hugely on the way in which others will interpret our actions and respond to them. In the actual world, WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST. It doesn't matter if some alternate notion of integrity might have been better to have: if we don't show integrity in the traditional manner, we will be punished.

In particular, "being straightforward" will often needlessly imperil people's estimation of our integrity. For example, consider the usual kinds of assurances we give to friends and family that we "will be there for them no matter what" and that "we wouldn't ever abandon them." In truth pretty much everyone, if presented with sufficient data showing their friend or family member to be a horrific serial killer with every intention of continuing to torture and kill people, would turn them in even in the face of protestations of innocence. Does that mean that instead of saying "I'll be there for you whatever happens" we should say "I'll be there for you as long as the balance of probability doesn't suggest that supporting you will cost more than 5 QALYs" (quality adjusted life years)?

No, because being straightforward in that sense causes most people to judge us as weird and abnormal, and thereby trust us less. Even though everyone understands at some level that these kinds of assurances are only true ceteris paribus, actually being straightforward about that fact is unusual enough that it causes other people to suspect that they don't understand our emotions and motivations, and thus to trust us less.

In short: yes, the obvious point that we should adopt some kind of heuristic of keeping our word and otherwise modeling integrity is true. However, the suggestion that this one nice, simple heuristic is the best one is completely unjustified.