Biology project in search of first author: imaging the brain of popular farmed insect Black Soldier Fly 2021-04-09T18:19:46.178Z
How bad is coronavirus really? 2020-05-08T17:29:18.439Z
1 min Key and Peele clip on children saved per dollar 2020-02-27T02:11:17.197Z
Why did MyGiving need to be replaced? And why is the replacement so bad? 2019-10-04T19:35:31.716Z
Harvard's Agathon Career Fellowship: A Post-mortem 2019-09-04T00:29:37.709Z
The Turing Test podcast #8: Spencer Greenberg 2019-05-13T15:30:23.246Z
Scrupulosity: my EAGxBoston 2019 lightning talk 2019-05-02T23:08:25.894Z
I want an ethnography of EA 2019-05-02T20:33:38.915Z
The Turing Test podcast is back with Bryan Caplan! 2019-04-15T20:35:51.995Z
[blog cross-post] The remembering self needs to get real about the experiencing self. 2019-02-08T18:21:18.746Z
Who sets the read time estimates? 2018-12-28T16:41:26.014Z
[blog cross-post] On privacy 2018-12-28T15:41:29.448Z
[blog cross-post] potential lost; substance gained 2018-12-12T21:13:20.689Z
[blog cross-post] Charity hacks 2018-12-12T20:56:43.697Z
[blog cross-post] More on narcissism 2018-12-12T20:42:17.656Z
[blog cross-post] So-called communal narcissists 2018-12-12T20:37:24.046Z
We are in triage every second of every day 2016-08-26T20:20:21.300Z


Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-19T19:35:58.372Z · EA · GW

Seems like others agreed with you. I meant it mostly seriously. 

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-19T19:24:47.716Z · EA · GW

The more substantial point that I'm trying to make is that the political balance of the EA Forum shouldn't be a big factor in someone's decision to publicize important information about a major charity evaluator, or probably even in how they put the criticism. Many people read posts linked from the EA Forum who never read the comments or don't visit the Forum often for other posts, i.e. they are not aware of the overall balance of political sympathies on the Forum. The tenor of the Forum as a whole is something that should be managed (though I wouldn't advocate doing that through self-censorship) to make EA welcoming or for the health of the community, but it's not that important compared to the quality of information accessible through the Forum, imo. 

I'm a little offended at the suggestion that expressing ideas or important critiques of charities should in any way come second to diplomatic concerns about the entire Forum. 

Comment by Holly_Elmore on Are mice or rats (as pests) a potential area of animal welfare improvement? · 2021-05-12T17:21:49.951Z · EA · GW

I have been researching sterilizing rodents instead of killing them to control their populations, and it's much more popular already than I had realized. ContraPest is a bait that sterilizes rats with a few doses. It reduces sperm viability in males and induces aging of ovarian follicles in females, sort of like early menopause. There's a bit of a lag before the population reduces, but it has the benefits of humaneness, not disturbing the rats' territories (because older rats stick around, preventing movement between territories which can spread disease), and providing a better long-term maintenance solution. It's already widely used, and SenesTech, the company that makes it, has had big contracts with cities like NYC and Washington DC. 

I was very surprised to find out how widespread the use of sterilants already was considering I had not heard of them for rodent pest control until last year!

I think this is a good cause not only to reduce harm to household pests, but because having to participate in cruelty toward animals can lead to cognitive dissonance or defensiveness of the status quo treatment of animals. Finding out about sterilants got me out of a binary way of thinking about rat infestation (it's them or me), and that's the kind of creative problem-solving we need if we're ever going to make real improvements in wild animal welfare. 

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-06T22:16:08.564Z · EA · GW

Look who's never heard of intersectionality

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-06T22:15:34.445Z · EA · GW

I think this post is pretty damning of ACE. Are you saying OP shouldn't have posted important information about how ACE is evaluating animal charities because there has been too much anti-SJ/DEI stuff on the forum lately?

Comment by Holly_Elmore on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-04-30T21:15:17.371Z · EA · GW

Are you implying that Larry Summers was wrong or that Texaco's actions were somehow his fault?

Comment by Holly_Elmore on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-04-30T21:14:36.778Z · EA · GW

I think it's important for EA to promote high decoupling in intellectual spaces.  You also have to consider that this is a philosophy dissertation, which is an almost maximally decoupling space. 

Comment by Holly_Elmore on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-04-27T16:50:08.647Z · EA · GW

I don't understand why thinking like that quote isn't totally passé to EAs. At least to utilitarian EAs. If anyone's allowed to think hypothetically ("divorced from the reality"), I would think it would be a philosophy grad student writing a dissertation.

Comment by Holly_Elmore on Ask Rethink Priorities Anything (AMA) · 2020-12-15T19:01:53.109Z · EA · GW
  1. Personally, I’m very self-conscious about my work and tend to wait too long to share it. But the culture of RP seems to fight that tendency— which I think is very productive!
Comment by Holly_Elmore on Ask Rethink Priorities Anything (AMA) · 2020-12-15T18:56:19.422Z · EA · GW

I can answer 6, as I’ve been doing it for Wild Animal Welfare since I was hired in September. WAW is a new and small field, so it is relatively easy to learn the field, but there’s still so much! I started by going backwards (into the Welfare Biology movement of the 80s and 90s) and forwards (into the WAW EA orgs we know today) from Brian Tomasik, consulting the primary literature over various specific matters of fact. A great thing about WAW being such a young field (and so concentrated in EA) is that I can reach out to basically anyone who’s published on it and have a real conversation. It’s a big shortcut!

I should note that my background is in Evolutionary Biology and Ecology, so someone else might need a lot more background in those basics if they were to learn WAW.

Comment by Holly_Elmore on How bad is coronavirus really? · 2020-05-10T16:22:49.100Z · EA · GW

Such an answer is exactly what I am looking for!

Comment by Holly_Elmore on How bad is coronavirus really? · 2020-05-10T16:21:22.298Z · EA · GW

I’m curious about people’s evaluations of (2)— how long would that go on? How bad would it really be compared to the losses from shutdown?

Comment by Holly_Elmore on Harvard's Agathon Career Fellowship: A Post-mortem · 2019-11-04T22:33:55.348Z · EA · GW

iirc, we actually did prompt them to take the exit survey and give them time to fill it out during the fourth meeting, but clearly not everyone did. But my memory of that is really not clear. We had been in breakout groups most of that session so maybe there was too much disorder when we asked them to take a survey at the end of that. And if we had done that then they wouldn't have had their one-on-one meetings with us yet.

For the 9 month follow-up we just sent them an email.

Comment by Holly_Elmore on Only a few people decide about funding for community builders world-wide · 2019-10-25T21:59:07.358Z · EA · GW

Don't forget that a lot of groups have other funding sources available, especially student groups. The EA groups at Harvard make use of CEA, and we wouldn't be able to do as much without money from CEA, but we have plenty of other funding sources (such as Harvard and well-off alum EAs) and many of our events cost only volunteer labor.

Comment by Holly_Elmore on Problems in effective altruism and what to do about them · 2019-10-20T03:12:38.178Z · EA · GW

Is it really a matter of incorrectness or just that you think that argument is really important and he didn’t include it? There are plenty of innocent reasons he might not have included that argument or many others. He might have thought it was a weak argument or maybe didn’t include it because it wasn’t relevant to his personal objections to NU.

Comment by Holly_Elmore on Problems in effective altruism and what to do about them · 2019-10-19T16:12:16.182Z · EA · GW

But Peter, he just didn't have time and the CV issue was too unimportant (not to publish-- just too unimportant to verify):

The issue with Bostrom’s CV is a minor thing compared to the other things I write about in this text. For example, if I were to ask Bostrom something, I would rather ask him about the seemingly problematic behaviour of the organisation FHI he leads. There are also many other people that I mention in this text who I could have asked about more important things than a CV before publishing this text. But I doubt I would have time for that work, so I prefer to write based on the information I have in a hedged way using phrases such as ‘I doubt’ and ‘I suspect.’

Anon, do you think publishing something that attacks people's individual reputations and damages the reputation of negative utilitarians as a whole despite "not having time" to do it right is an acceptable practice?

Comment by Holly_Elmore on Problems in effective altruism and what to do about them · 2019-10-19T16:01:35.635Z · EA · GW

What's unacceptable about this in your opinion, anon account?

Comment by Holly_Elmore on Problems in effective altruism and what to do about them · 2019-10-19T14:43:07.593Z · EA · GW

None of the accusations here is shocking, and often they reflect the author's naivete more than any wrongdoing on the part of the accused. Assistants contribute to writing books (however, private correspondence is meant to stay private). Organizations set ethical standards for the conducting and sharing of their research. People present themselves in the best light possible. Will is a co-founder of EA, not of the idea of maximizing social impact, but of the set of ideas and practices that governs this community today.

Comment by Holly_Elmore on Problems in effective altruism and what to do about them · 2019-10-19T14:02:28.453Z · EA · GW

I don't like Toby's "Why I'm Not a Negative Utilitarian" essay because I think it doesn't engage good arguments in favor of NU (to which I am partial). But I don't think it's in any way dishonest for him to have written an informal essay describing his views on the matter. I found it immensely helpful in understanding Toby's writings about the kind of utilitarianism he endorses.

Comment by Holly_Elmore on Why did MyGiving need to be replaced? And why is the replacement so bad? · 2019-10-09T02:23:09.061Z · EA · GW

I really appreciate this! Thank you! And I feel lucky to get any free tools like this. I was just irked because I didn’t understand the need for the change. I feel much better about the loss of the recurring donations functionality now that I know the old platform was at the end of its life.

Comment by Holly_Elmore on Why did MyGiving need to be replaced? And why is the replacement so bad? · 2019-10-05T13:18:40.847Z · EA · GW

I find it much less intuitive and the aesthetic very cold. I liked the pie chart on my MyGiving dashboard... although I understand how diversifying causes made it easy to break that feature.

Comment by Holly_Elmore on Why did MyGiving need to be replaced? And why is the replacement so bad? · 2019-10-05T01:00:24.687Z · EA · GW

Overgeneral though this comment is, it does seem to me like GWWC and donations are really getting the shaft from EA Uber-orgs, and that giving simply not being a priority is probably part of the problem.

What I still don’t understand is why they abandoned a perfectly good platform with MyGiving (imo) in order to make an incomplete move to

Comment by Holly_Elmore on Why did MyGiving need to be replaced? And why is the replacement so bad? · 2019-10-05T00:55:41.492Z · EA · GW

I’m no power user either. I just want to be able to add and modify recurring donations, which you can’t do with (I just learned you can email them with the details of a recurring donation to have them add it for you, but come on.) You could do this easily in MyGiving. I also find the interface very bare, unlike MyGiving. I just don’t understand why they needed to make this move when they weren’t prepared to finish it.

Comment by Holly_Elmore on 'Longtermism' · 2019-08-03T01:17:08.389Z · EA · GW

The only reason I don’t identify as longtermist is tractability. I would appreciate a definition that allowed me to affirm that when a being occurs in time is morally arbitrary without also committing me to focusing my efforts on the long-term.

Comment by Holly_Elmore on Age-Weighted Voting · 2019-08-02T19:35:58.849Z · EA · GW
one thing to bear in mind is that even using the weighting scheme I suggested in the post - which seemingly strongly favors young people - that would move the median voter (in the US) from age 55 to age 40.

How do you get this result? Are you just saying that with these multipliers applied to the current age distribution of voters, the median US vote would be cast by a 40 yo? Or is this anticipating the response to the multipliers? Like, for example, does this take into account that young people would probably vote more if their votes counted 6x more?

I'm not knocking the overall idea, but I am skeptical that young people will be that much better at resisting short-term political temptations than old people. If young people got huge vote multipliers, politicians would only pander to their weaknesses more. I guess like most people commenting here I have the most faith in middle-aged people. I like the idea of a more gradual tapering up and down of the vote multiplier, but a system that complicated is probably doomed.

Maybe parents should get huge vote multipliers. Seems to me they usually care about the future a lot more than the young people who are on track to outlive them.

Comment by Holly_Elmore on 'Longtermism' · 2019-08-01T17:43:13.182Z · EA · GW

I downvoted your comments as well, Milan, because I think this is exactly the kind of thing that should go on the EA Forum. The emergence of this term “longtermism” to describe a vaguer philosophy that was already there has been a huge, perhaps the main EA topic for like 2 years. I don’t even subscribe to longtermism (well, at least not to strong longtermism, which I considered to be the definition before reading this post) but the question of whether to hyphenate has come up many times for me. This was all useful information that I’m glad was put up for engagement within EA.

And the objection that words can never be precise is pretty silly. Splitting hairs can be annoying but this was an important consideration of meaningfully different definitions of longtermism. It’s very smart for EA to figure this out now to avoid all the problems that Will mentioned, like vagueness, when the term has become more widely known.

It sounded like your objection was that this post was about words and strategy instead of about the concepts. I for one am glad that EA is not just about thinking but about doing what needs to be done, including reaching agreement about how to talk about ideas and what kind of pitches we should be making.

Comment by Holly_Elmore on 'Longtermism' · 2019-08-01T17:01:16.907Z · EA · GW
An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs. This rules out views on which we should discount the future or that we should ignore the long-run indirect effects of our actions, but would not rule out views on which it’s just empirically intractable to try to improve the long-term future

I’ve referred to this definition as “temporal cosmopolitanism.” Whatever we call it, I agree that we should have some way of distinguishing the view that the time at which something occurs is morally arbitrary from a view that prioritizes acting today to try to affect the long-run future.

Comment by Holly_Elmore on Age-Weighted Voting · 2019-07-13T23:13:33.497Z · EA · GW

FWIW, I think the young lacking life experience and crystallized intelligence is pretty clutch. This argument rests on the young having not only a greater stake in future but being able to make sensible decisions about what to do with it. I would at least suggest that 18-25 yo voters not have a multiplier.

I do like reducing the influence of the old who know very well when voting that, for instance, climate change will not really affect them. But I think any vote weighting scheme has to take stakeholding and competence into account.

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-11T16:21:49.266Z · EA · GW

So you think he's worried about other people being misled?

Comment by Holly_Elmore on Life history classification · 2019-06-08T01:35:38.320Z · EA · GW

You've done a good job at reporting the trends in thought and terminology here. I'm not directing the following at you, but at the trend in the field you're describing.

I'm an evolutionary biologist and I'm tired of people saying r/K has been discredited. I think what really happened is that people realized r/K was a generalization without realizing that every other useful principle in evolutionary biology is also a generalization.

I use r/K parlance and I never get any complaints from the evolutionary theorists and population geneticists around me. It's just a heuristic. Would you say the logistic model of population dynamics has been debunked because someone points out that it doesn't capture every variable that affects population growth? No, because it's just a model, so that was obvious from the start. Hence I don't see why people pointing out that there are other dimensions to life history somehow invalidates using the r/K spectrum as a knowing simplification. I'm all for clarifying that r/K is just a heuristic and educating people about the fundamentals of life history theory, but I don't think the fact that there's more to it invalidates r->K as a useful dimension.

There's never going to be a life history theory that's both 100% accurate and can provide generalizations at the gross level at which we typically consider life history traits. In order to make any useful statements about the relationship between offspring number and life span, for example, we're going to have to allow for exceptions.

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-07T18:35:30.184Z · EA · GW

My point is that Ben is in fact able to do whatever legal thing he wants. He doesn't need to make us wrong to do so. It's interesting that he feels the need to. Whether EA or Peter Singer has suggested that it's morally wrong not to give, Ben is free to follow his own conscience/desires and does not need our approval. If his real argument is that he should be respected by EAs for his decision not to give, I think that should be distinguished from a pseudo-factual argument that we're deceived about the need to give money.

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-07T03:25:00.524Z · EA · GW
But you seem to be also arguing "you don't need to justify your actions to yourself / at all"

Kinda. More like "nobody can make you act in accordance with your own true values-- you just have to want to."

If people aren't required to live in accordance with even their own values, what's the point in having values?

To fully explain my position would require a lot of unpacking. But, in brief, no-- how could people be required to live in accordance with their own values? Other people might try to enforce value-aligned living, but they can't read your mind or fully control you-- hardly makes it a "requirement." If what you're getting at is that people **should** live according to their values, then, sure, maybe (not sure I would make this a rule on utilitarian grounds because a lot of people's values or attempts to live up to their values would be harmful).

Suffice to say that, if Ben does not want to give money, he does not have to explain himself to us. The natural consequence of that may be losing respect from EAs he knows, like his former colleagues at GiveWell. He may be motivated to come up with spurious justifications for his actions so that it isn't apparent to others that either his values have changed or he's failing to live up to them. I would like to create conditions where Ben can be honest with himself. That way he either realizes that he still believes it's best to give even though the effects of giving are more abstract or he faces up to the fact that his values have changed in an unpopular way but is able to stay in alignment with them. (This is all assuming that his post did not represent his true rejection, which it very well might have.)

Comment by Holly_Elmore on Framing Effective Altruism as Overcoming Indifference · 2019-06-06T20:54:57.339Z · EA · GW

"However, effective altruism really is warm and calculating."

I can't believe I've never thought of this! That's great :)

Great post, too. I think EA has a helpful message for most people who are drawn to it, and for many people that message is overcoming status quo indifference. However, I worry that caring too much, as in overidentifying with or feeling personally responsible for the suffering of the world, is also a major EA failure mode. I have observed that most people assume their natural tendency towards either indifference or overresponsibility is shared by basically everyone else, and this assumption determines what message they think the world needs to hear. For instance, I'm someone who's naturally overresponsible. I don't need EA to remind me to care. I need it to remind me that the indiscriminate fucks I'm giving are wasted, because they can take a huge toll on me and aren't particularly helping anyone else. Hence, I talk a lot about self-care and the pitfalls of trying to be too morally perfect within EA. When spreading the word about EA, I emphasize the moral value of prioritization and effectiveness because that's what was missing for me.

EA introduced me to many new things to care about, but I only didn't care about them before because I hadn't realized they were actionable. This might be quibbling, but I wouldn't say I was indifferent before-- I just had limiting assumptions about how I could help. I side more with Aaron's "unawareness" frame on this.

Comment by Holly_Elmore on A vision for anthropocentrism to supplant wild animal suffering · 2019-06-06T20:06:04.196Z · EA · GW

Is this speaking to a concern someone has that terraforming would make a bunch more animals to suffer? What motivated this piece?

Comment by Holly_Elmore on Considering people’s hidden motives in EA outreach · 2019-06-05T01:34:41.720Z · EA · GW

From the early sections, I thought you were going in the opposite direction-- how already involved EAs can be mindful of their secret motives for being involved. (I think that's super-important, btw.) For outreach, I would have thought the implication was that we should balance the need to appeal to and accommodate the human need for status with the possibility that EA would get diluted by the attempt to market EA in a low-fidelity way. I agree with CEA's emphasis on the high-fidelity model: there's no point in growing EA if it stops being EA in the process.

I think there is some very low-hanging fruit EA orgs can pick re:prestige they can offer recruits. #1 is making sure the name of the organization and the name of positions are as impressive and not-loaded as possible. Foundational Research Institute, for example, went with that title over "The Future of Suffering Institute" because they got feedback from academics that they wouldn't be able to put that name on their CVs. At Harvard EA, we have multiple named fellowships for students (the undergrad one is the "Arete Fellowship"). There is no reason we can't call our programs fellowships or name them, even though they are just student club programming. But being able to put "2016 Fellow of the Harvard College Effective Altruism Arete Fellowship" on a resume gives Harvard students the prestige they need to justify spending their time on us. There is a ton of cheap status EA can confer without it costing us anything (just requires us to contribute to the inflation of terms for volunteering, employment, and awards-- I'm not losing any sleep).

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-04T23:44:56.233Z · EA · GW

Now that I've made all these comments, I realize I should have just asked Ben if his post was his true rejection of EA-style giving. My comments have all been motivated by suspicion that Ben just isn't convinced by arguments about giving enough to give himself, but he feels like he has to prove them wrong on their own terms instead of just acting as he sees fit. (That's a lot of assumptions on my part.) If that particular scenario happens to be true for him or anyone reading, my message is that you are in charge of these decisions and you don't have to justify yourself to EAs.

The broader issue that concerns me here is people thinking that the only way to do the things they want to make them happy is to convince everyone else that those things are objectively right. There are a lot of us here with perilously high need for consistency. When we don't respect personal freedom and freedom of conscience, people will start to hijack EA ideas to make them more palatable for themselves without having to admit to being inconsistent or failing to live up to their ideals. This happens all the time in religious movements.

I can't promise Ben that no one will judge him morally inferior for not giving. But I can promote respect for people in the community feeling empowered to follow their own judgment within their own domains. EA benefits from debate, but much more so if that debate is restricted to true rejections and not coming from a need for self-justification. Reminding people that all EA lifestyle decisions are choices is thus a means of community epistemic hygiene.

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-04T20:03:24.717Z · EA · GW

Singer says it's wrong to spend frivolously on ourselves while there are others in need but he doesn't say it should be illegal. He also doesn't give any hard and fast rules about giving, and he doesn't think people who don't give should be shamed. He simply points out how much more the money could do for others, each of whom matter as much as any of us.

I just get the feeling that Ben isn't comfortable doing what he wants or what he thinks would make most of us (wealthy people) happier without getting us to agree with him first that it's what everyone should do. I want to remind him that what he does within the law is his prerogative. We don't have to be wrong for him to do what he wants. If he just wants to focus on himself and his loved ones, he doesn't have to convince us that we've filled every funding gap so our ideas are moot and he's still a good person despite not giving. He's already free to act as he sees fit. The last thing he needs to do to feel in charge of his own life and resources is attack EA.

I say this all because that line about focusing on your loved ones and doing "concrete" things made me suspect that that desire might have motivated the whole argument. In that case, we can avoid a pointless argument of dueling back-of-the-envelope estimates by pointing out that EA doesn't have to be wrong for Ben and others like him to do what they want with their lives.

I could be wrong and the post could represent Ben's true rejection. In that case, I'd expect to hear back that he is doing what he wants, and what he wants depends on the frequency of drowning children, which is why he's trying to figure this out.

Comment by Holly_Elmore on There's Lots More To Do · 2019-06-04T02:56:19.079Z · EA · GW

As I commented on Ben's blog, I just think it bears mentioning that we're allowed to focus on our own lives whether or not there are people who could use our money more than us. So if anyone were motivated to undermine the need for donations in order to feel justified in focusing on themselves and their loved ones, they needn't do it. It's already okay to do that, and no one's perfectly moral. Maybe if you don't feel the need to prove EA wrong before taking care of yourself, you'll want to return to giving or other EA activities after giving yourself some tlc, because instead of feeling forced, you know you want to do these things of your own free will.

Comment by Holly_Elmore on [deleted post] 2019-05-13T17:33:34.106Z

I'd like to propose another group that shouldn't donate: people with a pre-disposition to conditions that require treatment with medication that is hard on the kidneys.

I'm really glad I didn't try to donate my kidney a few years ago before I knew I would need to be taking a med (probably for the rest of my life) that can cause serious renal damage. In fact, kidney damage is a major reason people have to go off this drug and often they don't find an equivalent cocktail for dealing with the disease symptoms.

I imagine getting treated with any brutal medication is harder with one kidney. I hope this is something discussed with altruistic donors, but I never hear about it. I only hear about how you'd be higher up on the transplant list if you had kidney disease, and that that's an advantage because most kidney disease would have hit both kidneys (were they there) anyway. But that makes me imagine disease arising within the kidney or the body, not kidney damage due to treating other conditions.

Comment by Holly_Elmore on [deleted post] 2019-05-13T17:25:26.263Z
People who are doing direct work, if they expect three weeks of their work to produce more QALYs than donating.
It may be worth considering whether the enforced rest from donating a kidney would have some of the benefits of taking a vacation for you.

This could be turned into a searing satire of EA. "Earn a rest from the work that's too marginally impactful to pause for a few weeks by donating a kidney. To you, post-surgical recovery will seem like a vacation!"

Comment by Holly_Elmore on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T13:18:17.126Z · EA · GW

The real goal you seem to be advancing, Milan, is spirituality, not psychedelics per se. Based on testimony from people I trust and some slightly dubious research, I think psychedelics can likely be helpful in that, but they shouldn't be our frontline tool. I think meditation is a much better candidate for that.

Sam Harris and Michael Pollan argue that psychedelics are useful for convincing people there's a there there, and that makes sense to me. You have to put a lot of time and blind effort into meditation to get that same assurance. But the struggle, and particularly "asking" for deeper wisdom through your faithful efforts, is a really important part of spiritual realization according to most traditions (and in my personal experience). Based on what I've read (haven't taken them), I don't think taking psychedelics often does the trick on its own.

And there are many downsides to psychedelics. People who don't know how mentally unstable they are may take them and be thrown badly off-kilter. Bad trips are harrowing and can reach unimaginable heights of terror. I don't think most people have the slightest clue how deeply and completely their minds could torture them. Even if people one day are grateful for what they've been through (as I am now with my mental illness), I would not knowingly inflict that risk on people when there are gentler ways. Even intense meditation can have these destabilizing effects, but psychedelics are much more potent, can't be stopped on demand, and can be wielded by totally unskilled people. My guess is that the most common harm comes from tripping habitually out of sensation-seeking rather than humbly to gain self-insight or wisdom. Again, this can happen in meditation, too, but it's a lot less likely. When you add in all the infrastructure necessary to mitigate these risks, like comprehensive mental health screenings and guides and practice sessions, doing psychedelics right doesn't seem that much easier than a meditation retreat and it doesn't teach you any skills. The advantage of psychedelics at that point is speed and the guarantee that some experience of altered consciousness will take place, which is not nothing, but all this safety equipment undercuts the elegance of taking a little pill that proponents have harped on.

Psychedelics could be a more EA-style intervention than meditation (if either of them qualify) because pills are scalable, but creating a safe environment with skilled guides is a lot less so. Meditation can be taught by one teacher to many people in parallel with much less equipment. It can even be taught pretty well through apps. Meditation takes longer to reach the experiences/insights psychedelics throw up in your face, but those experiences are more digestible when reached through meditation, and insight alone is insufficient for most people to transform their lives-- the vast majority also need skills like equanimity acquired through practice.

Psychedelics probably have a role to play, but I do not think they are the magic bullet proponents claim they are. They come with serious dangers, and mitigating those dangers undercuts their scalability, which was imo their biggest EA selling point. Safer alternatives, the vast array of meditative schools and techniques, exist. Psychedelics have some advantages over traditional meditation-- speed and guaranteed action-- but they are no panacea. My best guess is that they should be a targeted prescription for certain roadblocks on the spiritual path.

Comment by Holly_Elmore on I want an ethnography of EA · 2019-05-09T16:40:04.960Z · EA · GW

Haven't had a chance to read much but it's already gold

Comment by Holly_Elmore on Complex value & situational awareness · 2019-05-09T00:05:43.359Z · EA · GW

But they have projects as well as what you're describing.

Comment by Holly_Elmore on Complex value & situational awareness · 2019-05-08T23:04:31.406Z · EA · GW

I think this is your strongest point, but the question remains whether you can specialize in situational awareness and adding complex value. Personally, I think you need to have a main hustle to really apply these abilities.

Comment by Holly_Elmore on Complex value & situational awareness · 2019-05-08T22:53:18.134Z · EA · GW

Not to be mean, but how much value has Alex actually generated? The size of his network is very impressive, but do we know that building it has had substantial positive outcomes?

(This is mostly a rhetorical question because I know Alex and his activities very well. I know my opinion but perhaps you will disagree. Also, he knows about my skepticism.)

Comment by Holly_Elmore on [deleted post] 2019-05-08T02:43:35.545Z

I appreciate this!

Comment by Holly_Elmore on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-07T21:00:14.120Z · EA · GW

Although I don't think it's a likely EA cause area, I definitely think it's good for the world to raise awareness about the costs of sleep deprivation among EAs! I'd love to see norms in our community of respecting sleep, like not having events too late, not making them too overstimulating, not relying on alcohol to make something a social event, rejecting startup-y "always on" culture by doing business mostly by daylight, etc.

Comment by Holly_Elmore on How do we check for flaws in Effective Altruism? · 2019-05-07T19:57:12.081Z · EA · GW

I think I know very well where Nathan is coming from, and I don't think it's invalid, for the reasons you state among others. But after much wrangling with the same issues, my comment is the only summary statement I've ever really been able to make on the matter. He's just left religion and I feel him on not knowing what to trust-- I don't think there's any other place he could be right now.

I suppose what I really wanted to say is that you can never surrender those doubts to anyone else or some external system. You just have to accept that you will make mistakes, stay alert to new information, and stay in touch with what changes in you over time.

Comment by Holly_Elmore on If this forum/EA has ethnographic biases, here are some suggestions · 2019-05-07T19:48:29.065Z · EA · GW
First of all, youch, people did not like this post. That's okay.

Aww, I'm sorry-- I didn't mean to sound harsh. I get very sensitive on this forum so I hate that I made you feel that way. I guess I was just really eager to clarify that diversity was not why I wanted an ethnography done and not considerate enough of the position you laid out.

I have a strong reaction against weighted voting on the basis of demographics, but it would definitely be interesting to see how it changed things.

Comment by Holly_Elmore on How do we check for flaws in Effective Altruism? · 2019-05-06T20:08:38.502Z · EA · GW

Just person to person, I don't think there's any substitute for staying awake and alert around your beliefs. I don't mean be tense or reflexively skeptical-- I mean accept that there is always uncertainty, so you have to trust that, if you are being honest with yourself and doing your best, you will notice when discomfort with your professed beliefs arises. You can set up external standards and fact checking, but you can't expect some external system to do the job for you of knowing whether you really think this stuff is true. People who don't trust themselves on the latter over-rely on the former.