Why I left EA 2017-02-19T17:42:44.422Z
Announcement: crowdsourcing argumentation at IARPA 2016-02-16T23:22:22.728Z
Moral anti-realists don't have to bite bullets 2015-12-27T16:48:31.415Z
The big problem with how we do outreach 2015-12-26T19:36:49.262Z
Might wireheaders turn into paperclippers? 2015-09-13T21:11:27.378Z
The Bittersweetness of Replaceability 2015-07-11T23:44:20.366Z


Comment by Lila on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-10-26T01:08:07.964Z · EA · GW

It looks like there might be confounders in the time series because there is a negative "effect" on life satisfaction prior to becoming disabled or unemployed. (With divorce and widowhood it's plausible that some people would see it coming years in advance.)

Comment by Lila on EA needs a cause prioritization journal · 2018-09-14T01:50:58.060Z · EA · GW

Academics will not find a new journal run by non-academics credible, much less prestigious. No one would be able to put this journal on an academic CV. So there's really no benefit to "publishing" relative to posting publicly and letting people vote and comment.

Comment by Lila on Expected cost per life saved of the TAME trial · 2018-05-27T00:01:16.311Z · EA · GW

Metformin isn't a supplement though. It's unlikely it would ever get approved as a supplement or OTC, especially given that it has serious side effects.

Comment by Lila on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-24T17:46:02.461Z · EA · GW

Really interesting. I appreciate you sharing this, and your attitude toward it. Good luck with your career in philosophy - epistemic honesty will take you far.

You might consider cross-posting this on a site like Medium to reach a larger audience.

Comment by Lila on On funding medical research · 2018-02-16T03:41:31.425Z · EA · GW

It's not either/or. It's likely not to be a single disease - would probably be more accurate to call it a syndrome.

Comment by Lila on [deleted post] 2018-01-18T04:33:00.119Z

I'm not sure how the beliefs in Table 3 would lead to positive social change. Mostly just seems like an increase in some vague theism, along with acceptance/complacency/indifference/nihilism. The former is epistemically shaky, and the latter doesn't seem like an engine for social change.

Comment by Lila on [deleted post] 2018-01-17T17:51:44.113Z

You might as well randomly go through the list of multimillionaires/billionaires and cold-call them. Maybe not the worst idea, but there's nothing in particular to suggest this guy would be special.

Comment by Lila on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-17T12:47:53.490Z · EA · GW

Technology to do something like this is already being developed, but it's not nanotechnology:

Nanotechnology is rarely the most practical way to probe very small things. People have been able to infer molecular structures since the 19th century. Modern molecular biology/biochemistry makes use of electron microscopy, fluorescence microscopy, and sequencing-based assays, among other techniques.

Comment by Lila on The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes · 2018-01-15T17:46:00.739Z · EA · GW

What do you mean by nanoscale neural probes? What are the questions that these probes would answer?

Comment by Lila on [deleted post] 2018-01-15T17:29:14.229Z

Modeling the risk of psychedelics as nonexistent seems like a very selective reading of Carbonaro 2016:

"Eleven percent put self or others at risk of physical harm; factors increasing the likelihood of risk included estimated dose, duration and difficulty of the experience, and absence of physical comfort and social support. Of the respondents, 2.6% behaved in a physically aggressive or violent manner and 2.7% received medical help. Of those whose experience occurred >1 year before, 7.6% sought treatment for enduring psychological symptoms. Three cases appeared associated with onset of enduring psychotic symptoms and three cases with attempted suicide."

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2018-01-10T03:24:42.390Z · EA · GW


Comment by Lila on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-29T00:08:56.871Z · EA · GW

You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.

Comment by Lila on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-11-27T17:36:50.724Z · EA · GW

Is the AI supposed to read this explanation? Seems like it tips your hand?

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-21T23:44:41.084Z · EA · GW

Neither of those statements are upsetting to me.

It's often useful to be able to imagine what will be upsetting to other people and why, even if it's not upsetting to you. Maybe you'll decide that it's worth hurting people, but at least make your decisions with an accurate model of the world. (By the way, "because they're oversensitive" doesn't count as an explanation.)

So let's try to think about why someone might be upset if you told them that they're more likely to be a rapist because of their race. I can think of a few reasons: They feel afraid for their personal safety. They feel it's unfair to be judged for something they have no control over. They feel self-conscious and humiliated.

Emotional Turing tests might be a good habit in general.

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-21T20:51:10.330Z · EA · GW

I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.

This unpleasant tangent is a great example of why applying aggregate statistics to actual people isn't a good strategy. It should be clear why people find the following statements upsetting:

Statistically, there are X rapists in the EA community.

Statistically, as a man/black person/Mexican/non-college grad/Muslim, there is X probability you're a rapist.

Let's please not go down this path.

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-19T00:08:06.910Z · EA · GW

I would far prefer being raped over a 1% chance of dying immediately. I think the tradeoff would be something like 100,000 to 1.

Comment by Lila on Talking About Effective Altruism At Parties · 2017-11-16T21:52:36.212Z · EA · GW

I don't think most of these will convince people to share your views, often because they come from different moral perspectives. They seem too negative or directly contradictory for people to change their minds - particularly the ones on social justice. However, it might help people understand your personal choices. What have been your results?

Comment by Lila on Looking for People Interested in Exploring Plant-Based Startups · 2017-11-14T16:51:21.054Z · EA · GW

I'm a 4th year PhD student in bioinformatics. I've previously considered doing something similar, though I focused more on stem cell technology, which is most relevant to my current research. However, would definitely be interested in discussing further!

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T16:03:12.347Z · EA · GW

I agree with this for the most part, but let's not exclude people from EA who, like me, are low-IQ and high-libido.

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T03:15:51.698Z · EA · GW

It seems that you are vastly underestimating the intensity of psychological trauma that comes with rape.

Even if this is descriptively true (and I think it varies a lot - some people aren't bothered long-term), there's no reason that this is a desirable outcome. Everything is mediated through attitudes.

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T00:45:06.594Z · EA · GW

I'm convinced that most people have an instinctive reaction to sexual violence which involves psychological trauma being triggered automatically.

There's no reason that this should be the case.

Yet, if a child is raped, that's psychologically devastating. The damage can last their whole lives. Explain that.

There are a lot of factors that are difficult to untangle. The ways that adults or peers react can certainly have an influence. I heard one father saying that a sexual abuser "stole his daughter's innocence", or something in a similar vein. While I'm sure he meant well, I'm not sure if these types of heavy-handed symbolic declarations are constructive for healing. I think sexual abuse could be prevented and its effects could be mitigated if people could have conversations (including with children) about healthy sexuality versus violence and coercion. Instead, some people seem more upset about the "sexual" side than the abuse side.

Comment by Lila on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T22:10:24.062Z · EA · GW

103 - 607 male rapists in EA

False precision much? This seems like an inappropriately specific number - it makes it sound like you have concrete evidence, but in reality you're just multiplying the number of men in EA by 6%. I hope that this number won't start getting spread around.

A more tractable approach to reducing the trauma from sexual violence might be to change perceptions of sexuality. Many people believe that it's important for women to be sexually "pure", which is one reason that female victims experience trauma.

Feminists, to their credit, reject such notions, but if anything they interpret sexual violence even more symbolically - as an attempt to have power over women and "violate" them, whatever that means. According to feminist theory, rape is never about sexual gratification. However, there isn't much evidence for this interpretation. Interviews with convicted sex offenders reveal a mix of motivations. In addition, there does seem to be a relationship between sexual attractiveness and probability of rape. For example, one study looked at female robbery victims, using age as a proxy for attractiveness. (For obvious reasons, we can't actually study the attractiveness of victims.) Middle-aged and older women were far less likely to be raped by their assailant.

Setting aside the empirical question of whether rape is actually about destroying the victim's autonomy, it seems unhelpful to interpret negative events in one's life symbolically, personalize them, or cast them as part of a larger conspiracy. Cognitive behavioral therapy and other techniques may help victims overcome irrational negative beliefs.

Comment by Lila on Living in an Inadequate World · 2017-11-09T23:46:20.288Z · EA · GW

Treating Candida via diet isn't accepted science:

So it's not surprising a doctor wouldn't diagnose you.

Comment by Lila on Moloch's Toolbox (1/2) · 2017-11-05T13:42:02.323Z · EA · GW

I consider GWAS applied, not basic, because it's not mechanistic. Most biologists I've spoken to have a fairly poor opinion of GWAS, as do I. Much of the biological research that gets funded is basic.

Comment by Lila on Moloch's Toolbox (1/2) · 2017-11-04T22:51:18.374Z · EA · GW

The p-value critique doesn't apply to many scientific fields. As far as I can tell, it mostly applies to social science and maybe epidemiological research. In basic biological research, a paper wouldn't be published in a good journal on the basis of a single p-value. In fact, many papers don't have any p-values. When p-values are presented, they're often so low (10^-15) that they're unnecessary confirmations of a clearly visible effect. (Silly, in my opinion.) Most papers rely on many experiments, which ideally provide multiple lines of evidence. It's also common to propose a mechanism that's plausible given the existing literature. In some cases, you can see the fingerprints of skeptical reviewers. For example, when I see "to exclude the possibility that", I assume that this experiment was added later at the demand of a reviewer. Published biology is often wrong, but for subtler reasons.

Comment by Lila on An Equilibrium of No Free Energy · 2017-11-04T22:32:09.077Z · EA · GW

I'm a current PhD student in computational biology, so I can offer a perspective on academic research in biology. I agree that biologists aren't optimizing for benefiting humanity - instead, I think high-quality basic research gets the most respect and that academia can't be beat here in most cases.

EAs attempting to do biology outside academia have two options. They can try to circumvent basic research and simply "hack" biology by experimenting with various interventions. However, given the complexity of biological systems, this seems unlikely to work unless you have access to tens of thousands of organic compounds and a way to screen them, for example. And this obviously puts you in competition with pharmaceutical companies. Or they can try to make novel biological discoveries. (I include "translating" basic research to applications here, given how easy it is to misinterpret findings.) Much of the life extension, genetic engineering, and transhumanism community relies on this. Even if you believe that a field is being ignored by academia for political reasons, you're still unlikely to advance knowledge outside academia. Academia teaches a framework for studying biology that's impossible to replicate independently:

"It’s not just that you have to read lots of books, although you do. It’s the experience of working with an advisor and other grad students, of coming up with theories and having them be shot down. Two stories I’ve heard from multiple grad student friends: “I spent two months working on something really cool, and in the first thirty seconds of presenting it to my advisor she came up with a simple proof it could never work” and “I spent two months working on something really cool, and in the first thirty seconds of presenting it to my advisor, she said ‘Oh yeah, that’s Smith’s Lemma, very exciting when it was published forty years ago.'” But eventually you come out of it not just with book learning, but with the thought-patterns and methods of a field baked into your brain, a strong sense of what is or isn’t interesting, can or can’t be done."

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:19:51.613Z · EA · GW

Where do we draw the line? Are intrinsic abilities an acceptable topic of casual discussion? Do you think it would be humiliating for people who are being discussed as having less intrinsic ability?

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T00:31:06.956Z · EA · GW

I can see 1-3 being problems to some extent (and I don't think Kelly would disagree)... but "overrepresentation of vegetarians and vegans"?? You might as well complain about an overrepresentation of people who donate to charity.

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T23:58:30.644Z · EA · GW

So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything someone says that pattern-matches to something this group would say triggers a reflexive negative reaction. This manifests in various ways: either you're inclined to attribute way more to the person's statements than what they're actually saying, or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.

This used to be me... It wasn't so much my beliefs that changed (I'm not a leftist/feminist/etc). It was more a change in attitude, related to why I rejected ultra-strict interpretations of utilitarianism. Not becoming more agreeable or less opinionated... just not feeling like I was on a life-or-death mission. Anyway, happy to discuss these things privately, including with people who are still on the anti-SJW mission.

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T23:41:28.369Z · EA · GW

But you don't want discrimination hypotheses to be discussed either? I guess that could be an acceptable compromise, to not debate the causes of disparities but at the same time focus on improving diversity in recruitment.

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T22:36:40.697Z · EA · GW

I think there's a bit of an empathy gap in this community. When people are angry for what seems to be no reason, a good first step is to ask whether you've done something that made them feel unsafe/humiliated/demeaned/etc, even if that wasn't your intention. It doesn't take a lot of imagination to see how unsolicited exploration of "other hypotheses" (cough cough) for racial and gender disparities could be very distressing for the people who are being discussed as if they're not there.

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T17:24:24.036Z · EA · GW

Politics is rarely used as an example of a positive environment for women.

It's not just the actual numbers that are concerning (though I disagree with you that a 70% skew can be brushed off). It's the exclusionary behavior within EA.

Comment by Lila on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T17:19:48.919Z · EA · GW

Thanks Kelly. I agree that this is a problem in EA in ways that people don't realize. In retrospect, I feel stupid for not realizing how casual discussion of IQ and eugenics would be hurtful. Same thing with applying that classic EA skepticism to people's lived experiences.

Culture isn't the main reason I left EA, but it's #3. And I think it contributes to the top two reasons I felt alienated: the mockery of moral views that deviate from strict utilitarianism, and what I believed were naive over-confident tactics.

Comment by Lila on An Argument for Why the Future May Be Good · 2017-07-22T09:32:48.764Z · EA · GW

Humans are generally not evil, just lazy


Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.

Comment by Lila on Why I left EA · 2017-02-20T05:02:41.802Z · EA · GW

You're free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn't a clickbait title, I'm not concern trolling, I really have left the EA community. I don't know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y'all were wondering where I went.

Comment by Lila on Why I left EA · 2017-02-20T02:10:45.092Z · EA · GW

I don't expect you to convince me to stay.

Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me. Not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total view utilitarianism.

Comment by Lila on Why I left EA · 2017-02-20T01:03:10.589Z · EA · GW

To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible

I don't think I do anything in my life to the maximal extent possible.

Comment by Lila on Why I left EA · 2017-02-19T21:53:10.376Z · EA · GW

That's a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don't think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they'd be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.

EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions, such as public health interventions. It's sometimes useful when thinking about careers. But I used to feel compelled to pursue careers that I hated and probably wouldn't be good at, just on the off chance it would work. Now I see morality as being more closely tied to what I find meaning in (again, anti-realism). And I don't find meaning in saving a trillion EV lives or whatever.

Comment by Lila on On making spaces friendlier to parents · 2016-08-18T03:56:58.451Z · EA · GW

"But I think supporting the continuation of humanity and the socialization of the next generation can be considered a pretty basic part of human life."

Maybe it's a good thing at the margins, but we have more than enough people breeding at this point. There's nothing particularly noble about it, any more than it's noble for an EA to become a sanitation worker. Sure, society would fall apart without sanitation workers, but still...

You're entitled to do what you want with your life, but there's no reason to be smug about it.

Comment by Lila on On making spaces friendlier to parents · 2016-08-18T03:42:48.819Z · EA · GW

One of the few things I remember about EA Global, through my haze of jet lag, was how much your baby screamed during the talks.

Comment by Lila on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-23T17:38:54.949Z · EA · GW

This might be alright. See these guidelines though:

Comment by Lila on Philanthropy Advisory Fellowship: Water, Sanitation, and Handwashing · 2016-07-23T15:29:11.770Z · EA · GW

You should probably explain what SODIS is.

Comment by Lila on Some Organisational Changes at the Centre for Effective Altruism · 2016-07-23T15:18:13.438Z · EA · GW

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia?

Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)

I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.

Comment by Lila on The Effective Altruism Newsletter & Open Thread – July 2016 · 2016-07-20T18:40:12.700Z · EA · GW

It feels like telling two rival universities to cut their football programs and donate the savings to AMF. "Everyone wins!"

Anyway, two billion dollars isn't that much in the scheme of things. I remember reading somewhere that Americans spend more money on Halloween candy than politics.

Comment by Lila on EA != minimize suffering · 2016-07-16T03:13:26.144Z · EA · GW

My point was that opiates are extremely pleasurable but I wouldn't want to experience them all the time, even with no consequences. Just sometimes.

Comment by Lila on EA != minimize suffering · 2016-07-15T14:03:20.979Z · EA · GW

"Reducing "existential risk" will of course increase wild animal suffering as well as factory farming, and future equivalents."

Yes, this isn't a novel claim. This is why people who care a lot about wild animal suffering are less likely to work on reducing x-risk.

Comment by Lila on EA != minimize suffering · 2016-07-15T04:49:44.091Z · EA · GW

I've had Vicodin and China White and sometimes indulge in an oxy. They're quite good, but it hasn't really changed my views on morality. Despite my opiate experience, I'm much less utilitarian than the typical EA.

Comment by Lila on EA != minimize suffering · 2016-07-15T04:43:36.088Z · EA · GW

I agree that points 1 and 2 are unrelated, but I think most people outside EA would agree that a universe of happy bricks is bad. (As I argued in a previous post, it's pretty indistinguishable from a universe of paperclips.) This is one problem that I (and possibly others) have with EA.

Comment by Lila on EA != minimize suffering · 2016-07-15T04:40:26.226Z · EA · GW

I'd be happy if the EA movement became interested in this, just as I'd be happy if the Democratic Party did. But my point was, the label EA means nothing to me. I follow my own views, and it doesn't matter to me what this community thinks of it. Just as you're free to follow your own views, regardless of EA.

Comment by Lila on EA != minimize suffering · 2016-07-14T14:55:06.574Z · EA · GW

Yeah it's confusing because the general description is very vague: do the most good in the world. EAs are often reluctant to be more specific than that. But in practice EAs tend to make arguments from a utilitarian perspective, and the cause areas have been well-defined for a long time: GiveWell recommended charities (typically global health), existential risk (particularly AI), factory farming, and self-improvement (e.g. CFAR). There's nothing terribly wrong with these causes, but I've become interested in violence and poor governance in the developing world. EA just doesn't have much to offer there.