Posts

OPP's "Last Dollar" 2017-01-29T20:24:46.105Z · score: 5 (7 votes)
Quick Thoughts on New Career Profiles on 80,000 Hours 2015-07-24T02:51:34.472Z · score: 4 (4 votes)
Tentative Thoughts on the SENS Foundation 2015-01-06T02:39:01.021Z · score: 6 (8 votes)

Comments

Comment by fluttershy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T23:19:06.148Z · score: 1 (1 votes) · EA · GW

Yeah, this sort of thing is basically always in danger of becoming politics all the way down. One good heuristic is to keep the goals you hope to satisfy by engaging in mind--if you want to figure out whether to accept an article's central claim, is the answer to your question decisive with respect to your decision? If you're trying to sway people, are you being careful to make sure it's plausibly deniable that you're doing anything other than truthseeking? If you're engaging because you think it's impactful to do so, are you treating your engagement as a tool rather than an end?

Comment by fluttershy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T21:32:08.420Z · score: 9 (8 votes) · EA · GW

As a guy who used to be female (I was AMAB), Kelly's post rings true to me. Fully endorsed. It would be particularly interesting to hear about AFAB transmen's experiences with respect to this.

The change in how you're treated is much more noticeable when making progress in the direction of becoming more guyish; not sure if this is because this change tends to happen quickly (testosterone is powerful + quick) or because of the offsetting stigma re: people making transition progress towards being female. I could also see this stigma making up some of the positive effect that AMAB people feel on detransitioning, though it's mostly possible to disentangle the effect of the misogyny from that of the transmisogyny if you have good social sense.

In anticipation of being harassed (based on past experience with this community), I'll leave it at that. I'm not going to respond to any BS or bother with politics.

Comment by fluttershy on The value of money going to different groups · 2017-05-17T06:08:59.539Z · score: 0 (0 votes) · EA · GW

I like the article. The first table makes it viscerally clear that the VOI for better estimating eta (or for finding a better model for utility as a function of consumption on the margins) could be high, if you're relatively more interested in global poverty-focused EA than in other causes within EA.
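
To make concrete why eta matters so much here, a minimal sketch, assuming the standard isoelastic utility model u(c) = c^(1-eta)/(1-eta) that analyses like this one typically use; the consumption figures below are invented for illustration, not taken from the article:

```python
# Under isoelastic utility, u(c) = c**(1 - eta) / (1 - eta), marginal utility is
# u'(c) = c**(-eta), so a dollar to someone at consumption c_poor is worth
# (c_rich / c_poor)**eta times as much as a dollar to someone at c_rich.
def marginal_value_ratio(c_rich: float, c_poor: float, eta: float) -> float:
    return (c_rich / c_poor) ** eta

c_rich, c_poor = 30_000, 500  # hypothetical annual consumption levels, in dollars

for eta in (1.0, 1.5, 2.0):
    print(f"eta = {eta}: {marginal_value_ratio(c_rich, c_poor, eta):,.0f}x")
# eta = 1.0: 60x; eta = 1.5: 465x; eta = 2.0: 3,600x
```

Even a half-point shift in eta changes the implied multiplier by roughly an order of magnitude, which is why better estimates of it (or a better functional form) seem so valuable for poverty-focused donors.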

I'm not aware of any better figures you could have used for GWWC/TLYCS/REG's leverage, and I'm not sure if many of us take estimates of leverage for meta-organizations literally, even relative to how literally we take normal EA cost-effectiveness estimates. I agree that combining the leverage estimates with the consumption multipliers in order to estimate impact would be the correct thing to do if you managed to get accurate estimates of both that weren't dependent or interdependent on each other, though!

To the extent that GWWC/TLYCS/REG assign the donations they receive a certain leverage because of the donations they "caused"/influenced, everyone whose donations were "caused"/influenced by those organizations (at least by their accounting) should count their own donations as having proportionally less than 1.0x leverage. (Alternatively, the meta-organizations could claim less leverage, and thereby allow the donors they claim to have influenced to claim a greater fraction of the impact of their own donations.) This prevents double-counting of impact, and gives us a more accurate estimate of how much good donations to various organizations cause, which in turn lets us figure out how we can do the most good.
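
To make the double-counting concrete, here's a toy example with entirely made-up numbers (not actual figures for GWWC, TLYCS, or REG):

```python
# Toy example of the double-counting problem; all numbers are hypothetical.
money_moved = 60_000.0   # donations a meta-org says it "caused"/influenced

# If the meta-org counts all of it as its own impact, and the influenced donors
# also count their own donations at full (1.0x) leverage, the claimed impact is
# double the money that actually moved.
claimed_by_meta_org = money_moved
claimed_by_donors = money_moved
print(claimed_by_meta_org + claimed_by_donors)  # 120000.0, vs. 60000.0 actually moved

# Consistency requires the shares to sum to 1.0: either the donors count their
# donations at less than 1.0x leverage, or the meta-org claims a smaller share.
meta_org_share = 0.4   # whatever split the two sides settle on
donor_share = 1.0 - meta_org_share
assert (meta_org_share + donor_share) * money_moved == money_moved
```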

Comment by fluttershy on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-14T01:26:15.387Z · score: 1 (1 votes) · EA · GW

I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though:

I might be a bit of an outlier

This factor may push in the opposite direction from what you'd expect, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to be more popular than it would be if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) had stuck around to give you their opinion on whether you should use that example. So, keep doing what you're doing! I like your approach.

Comment by fluttershy on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-14T00:53:51.761Z · score: 2 (2 votes) · EA · GW

The objection about it being ableist to promote funding for trachoma surgeries rather than guide dogs doesn't have to do with how many QALYs we'd save from providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we're showing to disabled people. I'm not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences for the reason that it's ableist. (It's also possible that using the example as it's typically used causes negative consequences by affecting how intellectually rigorous EA is, but that's another topic). A few different points that might be used to support this argument would be:

  • On average, people get a lot of value out of having self-esteem; often, having more self-esteem on the margins enables them to do value-producing things they wouldn't have done otherwise (flow-through effects!). Sometimes, it just makes them a bit happier (probably a much smaller effect in utilitarian terms).
  • Roughly, raising or lowering a group's collective esteem affects the self-esteem of some of the group's members.
  • Refraining from lowering a group's esteem isn't very costly, if doing so involves nothing more than using a different tone. (There are of course situations where making a certain claim will raise or lower a group's esteem by a large amount if one tone is used, and by a lesser amount if a different tone is used, even though the group's esteem is changed in the same direction in either case).
  • Decreases in a group's ability to do value-producing things or be happy, caused by someone lowering their esteem through acting in an ableist manner, do not give others a similarly sized boost to their ability to be happy or do value-producing things. (I.e. the truth value of claims that "status games are zero sum" has little effect on the extent to which it's true that decreasing a group's esteem via e.g. ableist remarks has negative utilitarian consequences).

I've generally found it hard to make this sort of observation publicly in EA-inhabited spaces, since I typically get interpreted as primarily trying to say something political, rather than primarily trying to point out that certain actions have certain consequences. It's legitimately hard to figure out what the ideal utilitarian combination of tone and example would be for this case, but it's possible to iterate towards better combinations of the two as you have time to try different things according to your own best judgement, or just ask a critic what the most hurtful parts of an example are.

Comment by fluttershy on Update on Effective Altruism Funds · 2017-04-23T18:44:56.402Z · score: 1 (1 votes) · EA · GW

It just seems like the simplest explanation of your observed data is 'the community at large likes the funds, and my personal geographical locus of friends is weird'.

And without meaning to pick on you in particular (because I think this mistake is super-common), in general I want to push strongly towards people recognising that EA consists of a large number of almost-disjoint filter bubbles that often barely talk to each other and in some extreme cases have next-to-nothing in common. Unless you're very different to me, we are both selecting the people we speak to in person such that they will tend to think much like us, and like each other; we live inside one of the many bubbles. So the fact that everyone I've spoken to in person about the EA funds thinks they're a good idea is particularly weak evidence that the community thinks they are good, and so is your opposing observation.

I'd say this is correct. The EA Forum itself has such a selection effect, though it's weaker than the ones either of our friend groups have. One idea would be to do a survey, as Peter suggests, though this makes me feel slightly uneasy, given that a survey would weight the opinions of people who have considered the problem less, or who feel less strongly about it, equally with everyone else's. A relevant factor here is that it sometimes takes people a fair bit of reading or reflection to develop a sense for why integrity is particularly valuable from a consequentialist's perspective, and then to link this up to how continuing EA Funds shows people that projects reported on and marketed with relatively low-integrity methods can succeed despite (or even because of) this.

I'd also agree that, at the time of Will's post, it would have been incorrect to say:

The community is probably net-neutral to net-negative on the EA funds, but Will's post introducing them is the 4th most upvoted post of all time

But what we likely care about is whether or not the community is positive on EA Funds at the moment, which may or may not be different from whether it was positive on EA Funds in the past.

My view is further that the community's response to this sort of thing is partly a function of how debates on honesty and integrity have been resolved in the past; if lack of integrity in EA has been an issue in the past, the sort of people who care about integrity are less likely to stick around in EA, such that the remaining population of EAs will have fewer people who care about integrity, which itself affects how the average EA feels about future incidents relating to integrity (such as this one), and so on. So, on some level I'm positing that the public response to EA Funds would be more negative if we hadn't filtered certain people out of EA by having an integrity problem in the first place.

Comment by fluttershy on Update on Effective Altruism Funds · 2017-04-22T21:53:26.070Z · score: 4 (4 votes) · EA · GW

A more detailed discussion of the considerations for and against concluding that EA Funds had been well received would have been helpful if the added detail had been spent examining people's concerns re: conflicts of interest and centralization of power, i.e. concerns which were commonly expressed but not resolved.

I'm concerned with the framing that you updated towards it being correct for EA Funds to persist past the three-month trial period. If there was support to start out with, and you mostly didn't gather more support later on relative to what one would expect, then your prior on whether EA Funds is well received should be stronger, but you shouldn't update in favor of it being well received based on more recent data. This may sound like a nitpick, but it is actually a crucially important consideration if you've framed things as if you'll continue with the project only if you update in the direction of having more public support than before.

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused). This is a part of what some of us mean when we talk about a tax on criticism in EA.

Comment by fluttershy on Update on Effective Altruism Funds · 2017-04-22T20:20:20.967Z · score: 5 (5 votes) · EA · GW

In one view, the concept post had 43 upvotes, the launch post had 28, and this post currently has 14. I don't think this is problematic in itself, since this could just be an indication of hype dying down over time, rather than of support being retracted.

Part of what I'm tracking when I say that the EA community isn't supportive of EA Funds is that I've spoken to several people in person who have said as much--I think I covered all of the reasons they brought up in my post, but one recurring theme throughout those conversations was that writing up criticism of EA was tiring and unrewarding, and that they often didn't have the energy to do so (though one offered to proofread anything I wrote in that vein). So, a large part of my reason for feeling that there isn't a great deal of community support for EA funds has to do with the ways in which I'd expect the data on how much support there actually is to be filtered. For example:

  • the method in which Kerry presented his survey data made it look like there was more support than there was
  • the fact that Kerry presented the data in this way suggests it's relatively more likely that Kerry will do so again in the future if given the chance
  • social desirability bias should also make it look like there's more support than there is
  • the fact that it's socially encouraged to praise projects on the EA Forum and that criticism is judged more harshly than praise should make it look like there's more support than there is. Contrast this norm with the one at LW, and notice how it affected how long it took us to get rid of Gleb.
  • we have a social norm of wording criticism in a very mild manner, which might make it seem like critics are less serious than they are.

It also doesn't help that most of the core objections people have brought up have been acknowledged but not addressed. But really, given all of those filters on data relating to how well-supported the EA Funds are, and the fact that the survey data doesn't show anything useful either way, I'm not comfortable with accepting the claim that EA Funds has been particularly well-received.

Comment by fluttershy on Update on Effective Altruism Funds · 2017-04-21T11:23:52.222Z · score: 6 (18 votes) · EA · GW

I appreciate that the post has been improved a couple times since the criticisms below were written.

A few of you were diligent enough to beat me to saying much of this, but:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

This seems false, based on these replies. The author of this post replied to the majority of those comments, which means he's aware that many people have in fact raised concerns about things other than communication and EA Funds' website. To his credit, someone added a paragraph acknowledging that these concerns had been raised elsewhere, in the pages for the EA community fund and the animal welfare fund. Unfortunately, though, these concerns were never mentioned in this post. There are a number of people who would like to hear about any progress that's been made since the discussion which happened on this thread regarding the problems of 1) how to address conflicts of interest given how many of the fund managers are tied into e.g. OPP, and 2) how centralizing funding allocation (rather than making people who aren't OPP staff into Fund Managers) narrows the stream of new information the EA Funds' Fund Managers encounter about what effective opportunities exist.

I've spoken with a couple EAs in person who have mentioned that making the claim that "EA Funds are likely to be at least as good as OPP’s last dollar" is harmful. In this post, it's certainly worded in a way that implies very strong belief, which, given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds instead of whatever else they might donate to counterfactually. This is the same sort of effect people get from looking at this sort of advertising, but more subtle, since it's less obvious on a gut level that this slogan half-implies that the reader is morally bad for not donating. Using this slogan could be net negative even without considering that it might make EAs feel bad about themselves, if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the "at least as good as OPP" slogan.

More immediately, I have negative feelings about how this post used the Net Promoter Score to evaluate the reception of EA Funds. First, it mentions that EA Funds "received an NPS of +56 (which is generally considered excellent according to the NPS Wikipedia page)." But the first sentence of the Wikipedia page for NPS, which I'm sure the author read at least the first line of, given that he linked to it, states that NPS is "a management tool that can be used to gauge the loyalty of a firm's customer relationships" (emphasis mine). However, EA Funds isn't a firm. My view is that implicitly assuming that, as a nonprofit (or something socially equivalent), your score on a metric intended to judge how satisfied a for-profit company's customers are can be compared side by side with the scores received by for-profit firms (and then neglecting to mention that you've made this assumption) betrays a lack of intent to honestly inform EAs.
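
For readers who haven't run into NPS before: it's computed from a 0-10 "how likely are you to recommend..." question as the percentage of promoters (9-10) minus the percentage of detractors (0-6), yielding a score from -100 to +100. A minimal sketch with an invented sample of responses:

```python
# Net Promoter Score: percent promoters (9-10) minus percent detractors (0-6).
def nps(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

sample = [10, 9, 9, 10, 8, 7, 9, 10, 6, 9]  # invented responses, not survey data
print(nps(sample))  # 60.0
```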

This post has other problems, too; it uses the NPS scoring system to analyze donors' and others' responses to the question:

How likely is it that your donation to EA Funds will do more good in expectation than where you would have donated otherwise?

The NPS scoring system was never intended to be used to evaluate responses to this question, so perhaps that makes it insignificant that an NPS score of 0 for this question just misses the mark of being "felt to be good" in industry. Worse, the post mentions that this result

could merely represent healthy skepticism of a new project or it could indicate that donors are enthusiastic about features other than the impact of donations to EA Funds.

It seems to me that including only positive (or strongly positive-sounding) interpretations of this result is incorrect and misleadingly optimistic. I'd agree that it's a good idea not to "take NPS too seriously", though in this case, I wouldn't say that the benefit of using NPS in the first place outweighed the cost of the resulting incorrect suggestion that there was a respectable amount of quantitative support for the conclusions drawn in this post.

I'm disappointed that I was able to point out so many things I wish the author had done better in this document. If there had only been a couple of errors, it would have been plausibly deniable that anything fishy was going on here. But with as many errors as I've pointed out, all of which point in the direction of making EA Funds look better than it is, things don't look good. Things don't look good regarding how well this project has been received, but that's not the larger problem here. The larger problem is that this post decreases how much I am willing to trust communications made on behalf of EA Funds in particular, and communications made by CEA staff more generally.

Writing this made me cry, a little. It's late, and I should have gone to bed hours ago, but instead, here I am being filled with sad determination and horror that it feels like I can't trust anyone I haven't personally vetted to communicate honestly with me. In Effective Altruism, honesty used to mean something, consequentialism used to come with integrity, and we used to be able to work together to do the most good we could.

Some days, I like to quietly smile to myself and wonder if we might be able to take that back.

Comment by fluttershy on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-31T00:09:59.954Z · score: 2 (2 votes) · EA · GW

This is a problem, both for the reasons you give:

Why do I think intuition jousting is bad? Because it doesn’t achieve anything, it erodes community relations and it makes people much less inclined to share their views, which in turn reduces the quality of future discussions and the collective pursuit of knowledge. And frankly, it's rude to do and unpleasant to receive.

and through this mechanism, which you correctly point out:

The implication is nearly always that the target of the joust has the ‘wrong’ intuitions.

The above two considerations combine extremely poorly with the following:

I’ve noticed IJing happens much more among effective altruists than academic philosophers.

Another consequence of this tendency, when it emerges, is that communicating a felt sense of something is much harder to do, and less rewarding to do, when there's some level of social expectation that arguments from intuition will be attacked. Note that the felt senses of experts often do contain information that's not otherwise available when said experts work in fields with short feedback loops. (This is more broadly true: norms of rudeness, verbal domination, using microaggressions, and nitpicking impede communication more generally, and your more specific concept of IJ does occur disproportionately often in EA).

Note also that the development of a social expectation whereby people believe, on a gut level, that they'll receive about as much criticism, verbal aggression, and so on regardless of how correct or useful their statements are may be especially harmful (see especially the second paragraph of p. 2).

Comment by fluttershy on Introducing CEA's Guiding Principles · 2017-03-08T13:44:16.455Z · score: 3 (5 votes) · EA · GW

I'd like to respond to your description of what some people's worries about your previous proposal were, and highlight how some of those worries could be addressed, hopefully without reducing how helpfully ambitious your initial proposal was. Here goes:

the risk of losing flexibility by enforcing what is an “EA view” or not

It seems to me like the primary goal of the panel in the original proposal was to address instances of people lowering the standard of trustworthiness within EA and imposing unreasonable costs (including unreasonable time costs) on individual EAs. I suspect that enumerating what sorts of things "count" as EA endeavors isn't a strictly necessary prerequisite for forming such a panel.

I can see why some people held this concern, partly because "defining what does and doesn't count as an EA endeavor" clusters in thing-space with "keeping an eye out for people acting in untrustworthy and non-cooperative ways towards EAs", but these two things don't have to go hand in hand.

the risk of consolidating too much influence over EA in any one organisation or panel

Fair enough. As with the last point, the panel would likely consolidate less unwanted influence over EA if it focused solely on calling out sufficiently harmful dishonest behavior by anyone who self-identified as an EA, and made no claims as to whether any individuals or organizations "counted" as EAs.

the risk of it being impossible to get agreement, leading to an increase in politicisation and squabbling

This seems like a good concern, in that it's a bit harder for me to address satisfactorily. Hopefully, though, there would be some clear-cut cases the panel could choose to consider, too; the case of Intentional Insights' poor behavior was eventually quite clear, for one. I would guess that the less clear cases would tend to be the ones where a clear resolution would be less impactful.

In response, we toned back the ambitions of the proposed ideas.

I'd have likely done the same. But that's the wrong thing to do.

In this case, the counterfactual to having some sort of panel to call out behavior which causes unreasonable amounts of harm to EAs is relying on the initiative of individuals to call out such behavior. This is not a sustainable solution. Your summary of your previous post puts it well:

There’s very little to deal with people representing EA in ways that seem to be harmful; this means that the only response is community action, which is slow, unpleasant for all involved, and risks unfairness through lack of good process.

Community action is all that we had before the Intentional Insights fiasco, and community action is all that we're back to having now.

I didn't get to watch the formation of the panel you discuss, but it seems like a nontrivial amount of momentum, which was riled up by the harm Intentional Insights caused EA, went into its creation. To the extent that that momentum is no longer available because some of it was channeled into the creation of this panel, we've lost a chance at building a tool to protect ourselves against agents and organizations who would impose costs on and harm both individual EAs and EA overall. Pending further developments, I have lowered my opinion of everyone directly involved accordingly.

Comment by fluttershy on Advisory panel at CEA · 2017-03-07T13:57:56.190Z · score: 4 (8 votes) · EA · GW

Noted! I can understand that it's easy to feel like you're overstepping your bounds when trying to speak for others. Personally, I'd have been happy for you all to take a more central leadership role, and would have wanted you all to feel comfortable if you had decided to do so.

My view is that we still don't have reliable mechanisms to deal with the sorts of problems mentioned (i.e. the Intentional Insights fiasco), so it's valuable when people call out problems as they have the ability to. It would be better if the EA community had ways of calling out such problems by means other than requiring individuals to take on heroic responsibility, though!

This having been said, I think it's worth explicitly thanking the people who helped expose Intentional Insights' deceitful practices—Jeff Kaufman, for his original post on the topic, and Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, Claire Zabel, and others who have not been mentioned or who contributed anonymously, for writing this detailed document.

Comment by fluttershy on Some Thoughts on Public Discourse · 2017-02-24T04:25:04.578Z · score: 9 (9 votes) · EA · GW

I believe you when you say that you don't benefit much from feedback from people not already deeply engaged with your work.

There's something really noticeable to me about the manner in which you've publicly engaged with the EA community through writing for the past while. You mention that you put lots of care into your writing, and what's most noticeable about this for me is that I can't find anything that you've written here that anyone interested in engaging with you might feel threatened or put down by. This might sound like faint praise, but it really isn't meant to be; I find that writing in such a way is actually somewhat resource intensive in terms of both time, and something roughly like mental energy.

(I find it's generally easier to develop a felt sense for when someone else is paying sufficient attention to conversational nuances regarding civility than it is to point out specific examples, but your discussion of how you feel about receiving criticism is a good example of this sort of civility).

As you and James mention, public writeups can be valuable to readers, and I think this is true to a strong extent.

I'd also say that, just as importantly, writing this kind of well thought out post which uses healthy and civil conversational norms creates value from a leadership/coordination point of view. Leadership in terms of teaching skills and knowledge is important too, but I guess I'm used to thinking of those as separate from leadership in terms of exemplifying civility and openness to sharing information. If it were more common for people and foundations to write frequently and openly, and communicate with empathy towards their audiences when they did, I think the world would be the better for it. You and other senior Open Phil and GiveWell staff are very much respected in our community, and I think it's wonderful when people are happy to set a positive example for others.

(Apologies if I've conflated civility with openness to sharing information; these behaviors feel quite similar to me on a gut level—possibly because they both take some effort to do, but also nudge social norms in the right direction while helping the audience.)

Comment by fluttershy on Why I left EA · 2017-02-22T02:43:18.915Z · score: 1 (5 votes) · EA · GW

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

Comment by fluttershy on Why I left EA · 2017-02-21T06:30:06.495Z · score: 7 (9 votes) · EA · GW

I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether people who care about the same sorts of things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.

Comment by fluttershy on Why I left EA · 2017-02-21T02:47:18.777Z · score: 2 (2 votes) · EA · GW

There's nothing necessarily intersectional/background-based about that

People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.

One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect of other's time would be that:

  • respecting people's intellectual autonomy and being generally kind tends to bring more skilled people to EA
  • attracting more skilled EAs is worth it in utilitarian terms
  • there are only some people who have had experiences that would point them to this correct conclusion

Sure, they're valid perspectives. They're also untenable, and we don't agree with them

The kind of 'kindness' being discussed here [is]... another utilitarian-ish approach, equally impersonal as donating to charity, just much less effective.

I feel that both of these statements are untrue of myself, and I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

Comment by fluttershy on Why I left EA · 2017-02-21T00:21:26.129Z · score: 1 (3 votes) · EA · GW

We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have! I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than you do, and that their perspectives are valid too.

(As a more general note, and not something I want to address to kbog in particular, I've noticed that I do sometimes System-1-feel like I have to justify arguments for being considerate in terms of utilitarianism. Utilitarianism does justify kindness, but feeling emotionally compelled to argue for kindness on grounds of utilitarianism rather than on grounds of decency feels like overkill, and makes it feel like something is off--even if it is just my emotional calibration that's off.)

Comment by fluttershy on Why I left EA · 2017-02-20T10:51:16.797Z · score: 2 (2 votes) · EA · GW

For me, most of the value I get out of commenting in EA-adjacent spaces comes through tasting the ways in which I gently care about our causes and community. (Hopefully it is tacit that one of the many warm flavors of that value for me is in the outcomes our conversations contribute to.)

But I suspect that many of you are like me in this way, and also that, in many broad senses, former EAs have different information than the rest of us. Perhaps the feedback we hear when anyone shares some of what they've learned before they go will tend to be less rewarding for them to share, and more informative to us to receive, than most other feedback. In that spirit, I'd like to affirm that it's valuable to have people in similar positions to Lila's share. Thanks to Lila for doing so.

Comment by fluttershy on GiveWell and the problem of partial funding · 2017-02-16T02:22:34.047Z · score: 6 (6 votes) · EA · GW

Personally, I've noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the "opposite" strategy, of being more positive on the pledge and feeling quite comfortable with GiveWell's approach to splitting, are seeing a very different social landscape than I am. Maybe they're aware of people who wouldn't have engaged with EA in any way other than by taking the pledge, or they've spent relatively more time engaging with GiveWell-style core EA material than I have?

Between the fact that filter bubbles exist, and the fact that I don't get out much (see the last three characters of my username), I think I'd be likely to not notice if lots of the disagreement on this whole cluster of related topics (honesty/pledging/partial funding/etc.) was due to people having had differing social experiences with other EAs.

So, perhaps this is a nudge towards reconciliation on both the pledge and on Good Ventures' take on partial funding. If people's social circles tend to be homogeneous-ish, some people will know of lots of underfunded promising EAs and projects (which indirectly compete with GV and GiveWell top charities for resources), and others will know of few such EAs/projects. If this is the case, we should expect most people's intuitions about how many funding opportunities for small projects there are (opportunities which only small donors can identify effectively) to be systematically off in one way or another. Perhaps a reasonable thing to do here would be to discuss ways to estimate how many underfunded small projects exist that EAs would be eager to fund if only they knew about them.

Comment by fluttershy on Use "care" with care. · 2017-02-09T11:36:25.248Z · score: 3 (3 votes) · EA · GW

You're clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.

When the word "care" is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word "care" is used carelessly. This has all sorts of hard-to-track negative effects; the sort of people who are irked by things like misuse of the word "care" are disproportionately likely to be the sort of people who are careful about this sort of thing themselves. It's easy to see how a harmful "positive" feedback loop might be created in such a scenario if not paying attention to the connotations of words can drive our friends away.

Comment by fluttershy on Anonymous EA comments · 2017-02-09T09:02:15.981Z · score: 4 (4 votes) · EA · GW

What I'd like to see is an organization like CFAR, aimed at helping promising EAs with mental health problems and disabilities -- doing actual research on what works, and then helping people in the community who are struggling to find their feet and could be doing a lot in cause areas like AI research with a few months' investment. As it stands, the people who seem likely to work on things relevant to the far future are either working at MIRI already, or are too depressed and outcast to be able to contribute, with a few exceptions.

I'd be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I've thought a lot about how we can support each other. I'm not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.

Speculation ahoy:

1) I wonder if, say, Bay area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us are different from the things that work for most people in important ways.

2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way in which status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).

3) I wonder how much good work could be done on anyone's mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I've never tried something like this before, but I'd eventually like to.

Well, writing that comment was a journey. I doubt I'll stand by all of what I've written here tomorrow morning, but I do think that I'm correct on some points, and that I'm pointing in a few valuable directions.

Comment by fluttershy on Anonymous EA comments · 2017-02-09T04:14:08.238Z · score: 1 (1 votes) · EA · GW

It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.

Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?

Comment by fluttershy on Introducing the EA Funds · 2017-02-09T03:34:36.053Z · score: 9 (9 votes) · EA · GW

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.

Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.

It makes some sense that there could be gaps which Open Phil isn't able to fill, even if Open Phil thinks they're no less effective than the opportunities they're funding instead. Was that what was meant here, or am I missing something? If not, I wonder what such a funding gap for a cost-effective opportunity might look like (an example would help)?

There's a part of me that keeps insisting that it's counter-intuitive that Open Phil is having trouble making as many grants as it would like, while also employing people who will manage an EA fund. I'd naively think that there would be at least some sort of tradeoff between producing new suggestions for things the EA fund might fund, and new things that Open Phil might fund. I suspect you're already thinking closely about this, and I would be happy to hear everyone's thoughts.

Edit: I'd meant to express general confidence in those who had been selected as fund managers. Also, I have strong positive feelings about epistemic humility in general, which also seems highly relevant to this project.

Comment by fluttershy on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T17:43:05.277Z · score: 3 (3 votes) · EA · GW

Thank you! I really admired how compassionate your tone was throughout all of your comments on Sarah's original post, even when I felt that you were under attack. That was really cool. <3

I'm from Berkeley, so the community here is big enough that different people have definitely had different experiences than me. :)

Comment by fluttershy on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T04:24:29.411Z · score: 10 (10 votes) · EA · GW

I should add that I'm grateful for the many EAs who don't engage in dishonest behavior, and that I'm equally grateful for the EAs who used to be more dishonest, and later decided that honesty was more important (either instrumentally, or for its own sake) to their system of ethics than they'd previously thought. My insecurity seems to have sadly dulled my warmth in my above comment, and I want to be better than that.

Comment by fluttershy on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T04:15:48.627Z · score: 12 (14 votes) · EA · GW

This issue is very important to me, and I stopped identifying as an EA after having too many interactions with dishonest and non-cooperative individuals who claimed to be EAs. I still act in a way that's indistinguishable from how a dedicated EA might act—but it's not a part of my identity anymore.

I've also met plenty of great EAs, and it's a shame that the poor interactions I've had overshadow the many good ones.

Part of what disturbs me about Sarah's post, though, is that I see this sort of (ostensibly but not actually utilitarian) willingness to compromise on honesty and act non-cooperatively more in person than online. I'm sure that others have had better experiences, so if this isn't as prevalent in your experience, I'm glad! It's just that I could have used stronger examples if I had written the post, instead of Sarah.

I'm not comfortable sharing examples that might make people identifiable. I'm too scared of social backlash to even think about whether outing specific people and organizations would even be a utilitarian thing for me to do right now. But being laughed at for being an "Effective Kantian" because you're the only one in your friend group who wasn't willing to do something illegal? That isn't fun. Listening to hardcore EAs approvingly talk about how other EAs have manipulated non-EAs for their own gain, because doing so might conceivably lead them to donate more if they had more resources at their disposal? That isn't inspiring.

Comment by fluttershy on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T01:37:36.881Z · score: 5 (5 votes) · EA · GW

Since there are so many separate discussions surrounding this blog post, I'll copy my response from the original discussion:

I’m grateful for this post. Honesty seems undervalued in EA.

An act-utilitarian justification for honesty in EA could run along the lines of most answers to the question, “how likely is it that strategic dishonesty by EAs would dissuade Good Ventures-sized individuals from becoming EAs in the future, and how much utility would strategic dishonesty generate directly, in comparison?” It’s easy to be biased towards dishonesty, since it’s easier to think about (and quantify!), say, the utility the movement might get from having more peripheral-to-EA donors, than it is to think about the utility the movement would get from not pushing away would-be EAs who care about honesty.

I’ve [rarely] been confident enough to publicly say anything when I’ve seen EAs and ostensibly-EA-related organizations acting in a way that I suspect is dishonest enough to cause significant net harm. I think that I’d be happy if you linked to this post from LW and the EA forum, since I’d like for it to be more socially acceptable to kindly nudge EAs to be more honest.

Comment by fluttershy on Semi-regular Open Thread #35 · 2017-01-03T00:53:57.980Z · score: 1 (1 votes) · EA · GW

Good Ventures recently announced that it plans to increase its grantmaking budget substantially (yay!). Does this affect anyone's view on how valuable it is to encourage people to take the GWWC pledge on the margin?

Comment by fluttershy on We Must Reassess What Makes a Charity Effective · 2016-12-25T03:17:42.708Z · score: 2 (4 votes) · EA · GW

It's worth pointing out past discussions of similar concerns with similar individuals.

I'd definitely be happy for you to expand on how any of your points apply to AMF in particular, rather than aid more generally; constructive criticism is good. However, as someone who's been around since the last time we had this discussion, I'm failing to find any new evidence in your writing—even qualitative evidence—that what AMF is doing is any less effective than I'd previously believed. Maybe you can show me more, though?

Thanks for the post.

Comment by fluttershy on 2016 AI Risk Literature Review and Charity Comparison · 2016-12-15T07:58:56.470Z · score: 8 (10 votes) · EA · GW

This post was incredibly well done. The fact that no similarly detailed comparison of AI risk charities had been done before you published this makes your work many times more valuable. Good job!

At the risk of distracting from the main point of this article, I'd like to notice the quote:

Xrisk organisations should consider having policies in place to prevent senior employees from espousing controversial political opinions on facebook or otherwise publishing materials that might bring their organisation into disrepute.

This seems entirely right, considering society's take on these sorts of things. I'd suggest that this should be the case for EA-aligned organizations more widely, since PR incidents caused by one EA-related organization can generate fallout which affects both other EA-related organizations, and the EA brand in general.

Comment by fluttershy on Concerns with Intentional Insights · 2016-10-27T14:14:32.674Z · score: 4 (4 votes) · EA · GW

I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet... Another thing is that there could be some unexpected obstacle or Chesterton's fence we don't know about yet.

Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the "Status" chapter of Keith Johnstone's Impro, which contains this quote:

We soon discovered the 'see-saw' principle: 'I go up and you go down'. Walk into a dressing-room and say 'I got the part' and everyone will congratulate you, but will feel lowered [in status]. Say 'They said I was old' and people commiserate, but cheer up perceptibly... The exception to this see-saw principle comes when you identify with the person being raised or lowered, when you sit on his end of the see-saw, so to speak. If you claim status because you know some famous person, then you'll feel raised when they are: similarly, an ardent royalist won't want to see the Queen fall off her horse. When we tell people nice things about ourselves this is usually a little like kicking them. People really want to be told things to our discredit in such a way that they don't have to feel sympathy. Low-status players save up little tit-bits involving their own discomfiture with which to amuse and placate other people.

Emphasis mine. Of course, a large fraction of EA folks and rationalists I've met claim to not be bothered by others bragging about their accomplishments, so I think you're right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.

Comment by fluttershy on Setting Community Norms and Values: A response to the InIn Open Letter · 2016-10-27T07:04:30.285Z · score: 6 (8 votes) · EA · GW

Creating a community panel that assesses potential egregious violations of those principles, and makes recommendations to the community on the basis of that assessment.

This is an exceptionally good idea! I suspect that such a panel would be taken the most seriously if you (or other notable EAs) were involved in its creation and/or maintenance, or at least endorsed it publicly.

I agree that the potential for people to harm EA by conducting harmful-to-EA behavior under the EA brand will increase as the movement continues to grow. In addition, I also think that the damage caused by such behavior is fairly easy to underestimate, for the reason that it is hard to keep track of all of the different ways in which such behavior causes harm.

Comment by fluttershy on Reflections on EA Global from a first-time attendee · 2016-09-19T11:12:46.389Z · score: 4 (4 votes) · EA · GW

Thank you for posting this, Ian; I very much approve of what you've written here.

In general, people's ape-y human needs are important, and the EA movement could become more pleasant (and more effective!) by recognizing this. Your involvement with EA is commendable, and your involvement with the arts doesn't diminish this.

Ideally, I wouldn't have to justify the statement that people's human needs are important on utilitarian grounds, but maybe I should: I'd estimate that I've lost a minimum of $1k worth of productivity over the last 6 months that could have trivially been recouped if several less-nice-than-average EAs had shown an average level of kindness to me.

I would be more comfortable with you calling yourself an effective altruist than I would be with you not doing so; if you're interested in calling yourself an EA, but hesitate because of your interests and past work, that means that we're the ones doing something wrong.

Comment by fluttershy on Should you switch away from earning to give? Some considerations. · 2016-08-27T12:11:35.460Z · score: 6 (6 votes) · EA · GW

It seems like there's a disconnect between EA supposedly being awash in funds on the one hand, and stories like yours on the other.

This line is spot-on. When I look around, I see depressingly many opportunities that look under-funded, and a surplus of talented people. But I suspect that most EAs see a different picture--say, one of nearly adequate funding, and a severe lack of talented people.

This is ok, and should be expected to happen if we're all honestly reporting what we observe! In the same way that one can end up with only Facebook friends who are more liberal than 50% of the population, so too can one end up knowing many talented people who could be much more effective with funding, since people's social circles are often surprisingly homogeneous.

Comment by fluttershy on The Redundancy of Quantity · 2015-09-04T01:03:31.750Z · score: 0 (0 votes) · EA · GW

Nice post. Spending resources on self-improvement is generally something EAs shouldn't feel bad about.

One solution may be different classes of risk aversion. One low-risk class may be dedicated to GiveWell- or ACE-recommended charities, another to metacharities or endeavors as Open Phil might evaluate, and another, high-risk class to yourself (the sort of intervention 80,000 Hours might evaluate).

I do intuit that the best high-risk interventions ought to be more cost-effective than the best medium-risk interventions, which ought to be more cost-effective than the best low risk interventions, such that someone with a given level of risk tolerance might want to mainly fund the best known interventions at a certain level of riskiness. However, since effective philanthropy isn't an efficient market yet, this needn't be true.

Comment by fluttershy on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2015-08-27T23:22:39.416Z · score: 1 (1 votes) · EA · GW

Thanks! I've never looked into the Brain Preservation Foundation, but since RomeoStevens' essay, which is linked to in the post you linked to above, mentions it as being potentially a better target of funding than SENS, I'll have to look into it sometime.

Comment by fluttershy on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2015-08-27T22:53:00.398Z · score: 2 (2 votes) · EA · GW

Epistemic status: low confidence on both parts of this comment.

On life extension research:

See here and here, and be sure to read Owen's comments after clicking on the latter link. It's especially hard to do proper cost effectiveness estimates on SENS, though, because Aubrey de Grey seems quite overconfident (credence-wise) most of the time. SENS is still the best organization I know of that works on anti-aging, though.

On cryonics:

I suspect that most of the expected value from cryonics comes from the outcomes in which cryonics becomes widely enough available that cryonics organizations are able to lower costs (especially storage costs) substantially. Popularity would also help on the legal side of things--being able to start cooling and perfusion just before legal death could be a huge boon, and earlier cooling is probably the easiest thing that could be done to increase the probability of successful cryonics outcomes in general.

Comment by fluttershy on EA risks falling into a "meta trap". But we can avoid it. · 2015-08-26T22:19:54.728Z · score: 1 (1 votes) · EA · GW

You mention that far-meta concerns with high expected value deserve lots of scrutiny, and this seems correct. I guess you could use a multi-level model to penalize the most meta concerns and calculate new expected values for the different things you might fund, but maybe even that wouldn't be sufficient.

It seems like funding a given meta activity on the margin should be given less consideration (i.e. your calculated expected value for funding it should be revised further downwards) if x% of the charitable funds being spent by EAs are already going to meta causes, and more consideration if only, say, 0.5x% of those funds are going to meta causes. This makes sense because of reputational effects-- it looks weird to new EAs if too much is being spent on meta projects.
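To make the kind of downward revision I have in mind concrete, here's a minimal sketch; the linear penalty, the 10% "target" meta fraction, and the penalty strength are all numbers I'm inventing for illustration, not anything the post proposes:

```python
# Toy discount on a meta project's expected value, based on how much of
# EA spending already goes to meta causes. The linear penalty, target
# fraction, and penalty strength are arbitrary choices for illustration.

def adjusted_ev(raw_ev, current_meta_fraction, target_meta_fraction=0.1,
                penalty_strength=2.0):
    """Revise expected value downward when meta spending exceeds the target."""
    overshoot = max(0.0, current_meta_fraction - target_meta_fraction)
    multiplier = max(0.0, 1.0 - penalty_strength * overshoot)
    return raw_ev * multiplier

print(adjusted_ev(100, current_meta_fraction=0.05))  # below target: 100.0
print(adjusted_ev(100, current_meta_fraction=0.20))  # above target: 80.0
```

The only point is that the multiplier shrinks as the share of funds already going to meta grows; pinning down the right shape of that penalty is exactly what the multi-level model would have to do.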

Comment by fluttershy on Assessing EA Outreach’s media coverage in 2014 · 2015-03-19T06:17:57.734Z · score: 2 (2 votes) · EA · GW

Does anyone have any thoughts on how much we should value leading other people to donate? I mean this in a very narrow sense, and my thoughts on this topic are quite muddled, so I'll try to illustrate what I mean with a simplified example. I apologize if my confusion ends up making my writing unclear.

If I talk with a close friend of mine about EA for a bit, and she donates $100 to, say, GiveWell, and then she disengages from EA for the rest of her life, how much should I value her donation to GiveWell? In this scenario, it seems like I've put some time and effort into getting my friend to donate, and she presumably wouldn't have donated $100 if I hadn't chatted with her, so it feels like maybe I did a few dollars' worth of good by chatting with her. At the same time, she's the one who donated the money, so it feels like she should get credit for all of the good that was done because of her donation. But wait-- if I did a few dollars' worth of good, does that mean that she did less than $100 worth of good?

At this point, my moral intuitions on this issue are all over the place. I guess that positing that the story above actually has a problem implies that the good done by my friend and me should sum to $100, but the only reason I've tacitly assumed that is that it intuitively feels true. I previously wrote a comment on LessWrong on this topic that wasn't any clearer than this one, and this response was quite clear, but I'm still confused.
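To make that tacit assumption explicit, here's a tiny sketch; the 5% influence share is a number I've made up purely for illustration, and the constraint that the shares must sum to the donation is exactly the assumption I'm unsure about:

```python
# Toy credit split for a $100 donation, under the (debatable) constraint
# that the credit assigned to everyone involved sums to the donation.
# The influence share given to the person who did the outreach is invented.

def split_credit(donation, influence_share):
    """Divide credit between the influencer and the donor without double-counting."""
    assert 0.0 <= influence_share <= 1.0
    influencer = donation * influence_share
    donor = donation - influencer
    return {"influencer": influencer, "donor": donor}

credit = split_credit(100, influence_share=0.05)
print(credit)                # {'influencer': 5.0, 'donor': 95.0}
print(sum(credit.values()))  # 100.0 -- no double-counting
```

Under this constraint, any credit I claim comes directly out of the credit my friend can claim; dropping the constraint is what lets both of us feel like we each did close to $100 of good, which is where the double-counting worry comes from.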

Comment by fluttershy on March Open Thread · 2015-03-11T01:20:03.632Z · score: 0 (0 votes) · EA · GW

Thanks for the encouragement, Ryan!

Comment by fluttershy on March Open Thread · 2015-03-10T07:13:46.407Z · score: 1 (1 votes) · EA · GW

I've been tentatively considering a career in the actuarial sciences recently. It seems like the field compensates people pretty well, is primarily merit-based, doesn't require much, if any, programming ability (which I don't really have), and doesn't have many prerequisites other than strong mathematical ability and a commitment to taking the actuarial exams.

Also, actuarial work seems much slower-paced than the work done in many careers that are frequently discussed on 80K Hours, which would make me super happy. I'm a bit burnt out on life right now, and I really don't want to go into a high-stress job, or a job with unusually long hours, after I graduate at the end of this semester. I guess that if I weren't a failure, I would have figured out what I was doing after graduation by now.

Are there any actuaries in the EA movement, or does anyone have insights about this field that I might not have? My main concern about potentially becoming a trainee actuary is that the field is somewhat prone to automation. Page 71 of this paper, which was linked in 80K Hours' report on career automation, suggests that there's a 21% chance that actuarial work can be automated. The automation of certain tasks done by actuaries is frequently discussed on the actuary subreddit as well.

Thanks for reading, and for any advice or thoughts that you might have for me!

Comment by fluttershy on Effective Altruism and Utilitarianism · 2015-01-31T01:10:13.114Z · score: 0 (0 votes) · EA · GW

I'm an emotivist-- I believe that "x is immoral" isn't a proposition, but, rather, is just another way of saying "boo for x". This didn't keep me from becoming an EA, though; I would feel hugely guilty if I didn't end up supporting GiveWell and other similar organizations once I have an income, and being charitable just feels nice anyways.

Comment by fluttershy on Tentative Thoughts on the SENS Foundation · 2015-01-06T23:12:52.162Z · score: 4 (4 votes) · EA · GW

I agree with everything in your two replies to my post.

You know, I'm probably more susceptible to being dazzled by de Grey than most-- he's a techno-optimist, he's an eloquent speaker, he's involved with Alcor, and I personally have a stake in life-extension tech being developed. I'm not sure how much these factors influenced me in subtle ways while I was writing up my thoughts on SENS.

Anyhow, doing cost-effectiveness estimates is one of my favorite ways of thinking about and better understanding problems, even when I end up throwing out the cost-effectiveness estimates at the end of the day.

Comment by fluttershy on Tentative Thoughts on the SENS Foundation · 2015-01-06T11:51:41.992Z · score: 2 (2 votes) · EA · GW

I haven't found any such breakdown, even after looking around for a while. The 80,000 Hours interview with Aubrey, as well as a number of YouTube interviews featuring Aubrey (I don't remember which ones, sorry), note that Aubrey thinks SENS could make good use of $1 billion over the next ten years, but none of these sources justify why that much money is needed.

Comment by fluttershy on Effective Altruists elsewhere: Bronies for Good · 2014-12-22T03:37:38.827Z · score: 2 (2 votes) · EA · GW

Thank you for sharing this! I hadn't known that Bronies for Good had switched to fundraising for organizations recommended by GiveWell-- given the variety of organizations that Bronies for Good has supported in the past, I certainly hope that they continue to support EA-approved organizations in the future, rather than moving on to another cause.

Comment by fluttershy on Make your own cost-effectiveness Fermi estimates for one-off problems · 2014-12-14T08:46:23.557Z · score: 2 (2 votes) · EA · GW

Anti-aging seems like a plausible area for effective altruists to consider giving to, so thank you for raising this thought. It looks like GiveWell briefly looked into this area before deciding to focus its efforts elsewhere.

I've seen a few videos of Aubrey de Grey speaking about how SENS could make use of $100 million per year to fund research on rejuvenation therapies, so presumably SENS has plenty of room for more funding. SENS's Form 990 tax filings show that the organization's assets jumped by quite a lot in 2012, though this was because of de Grey's donations during that year, and though I can't find SENS's Form 990 for 2013, I would naively guess that they've been able to start spending the money donated in 2012 during the last couple of years. I still think it would be worthwhile to ask someone at SENS where a marginal donation to the foundation would go in the short term-- maybe a certain threshold of donations needs to be reached before rejuvenation research can properly begin in the most cost-effective way.

I agree with Aubrey that too much money is spent researching cures to specific diseases, relative to the amount spent researching rejuvenation and healthspan-extension technology. I've focused this response on SENS because, as a person with a decent science background, I feel like Aubrey's assertion that (paraphrased from memory) "academic research is constrained in a way that rewards low expected value projects which are likely to yield results quickly over longer term, high expected value projects" is broadly true, and that extra research into rejuvenation technologies is, on the margin, more valuable than extra research into possible treatments for particular diseases.

Comment by fluttershy on Open thread 5 · 2014-11-29T04:34:45.345Z · score: 2 (2 votes) · EA · GW

Hi there! In this comment, I will discuss a few things that I would like to see 80,000 Hours consider doing, and I will also talk about myself a bit.

I found 80,000 Hours in early/mid-2012, after a poster on LessWrong linked to the site. Back then, I was still trying to decide what to focus on during my undergraduate studies. By that point, I had already decided that I needed to major in a STEM field so that I would be able to earn to give. Before this, in late 2011, I had been planning on majoring in philosophy, so my decision in early 2012 to do something in a STEM field was a big change from my previous plans. I didn't yet know which STEM field I wanted to major in; I had only realized that STEM majors generally have better earning potential than philosophy majors.

The way this ties back to 80,000 Hours is that I would have liked someone to help me decide which STEM field to go into. As it stands, I can't find any discussion of choosing a college major on the 80,000 Hours site, though there are a couple of threads on the topic on LessWrong. I would like to see an in-depth discussion of major choice as one of the core posts on 80,000 Hours.

Anyhow, I ended up majoring in chemistry because it seemed like one of the toughest things I could major in-- I made this decision under the rule of thumb that doing hard things makes you stronger. I probably should have majored in mathematics, because I actually really enjoy math and have gotten good grades in most of my math classes; neither of those things is true of the chemistry classes I have taken. I think my biggest misconception about major choice was that all STEM majors were roughly equal in how well they prepared you for the job market-- looking back, I feel that CS and math are two of the best choices for earning to give, followed by engineering and then biology, with chemistry and physics as the two worst options for students interested in earning to give. Of course, YMMV, and people with physics degrees do go into quantitative finance, but I do think that not all STEM majors are equally useful for earning to give.

The second thing that I would like to mention is that, from my point of view, 80,000 Hours seems very elitist. I don't mean this in a bad way, really, I don't, but it is hard to be in the top third of mathematics graduates from an Ivy League university. The first time I had a face-to-face conversation with an effective altruist who had been inspired by 80,000 Hours, I told them that I was planning on doing important scientific research, and they just gave me a look and asked why I wasn't planning to go into one of the more lucrative earning-to-give careers.

I am sure that this person is a good person, but the episode makes me wonder whether it would be a good idea to add, to the top careers page on 80,000 Hours' site, more jobs that very smart people who aren't quite ready to go into quantitative finance or strategic consulting could do. Specifically, mechanical, chemical, and electrical engineering, as well as the actuarial sciences, could be acceptable fields for earning to give.