Posts

Optimizing seed:pollen ratio to spread ideas 2022-09-20T16:37:10.535Z
Holly_Elmore's Shortform 2022-06-08T17:33:47.567Z
The Rodent Birth Control Landscape 2022-04-29T20:00:02.090Z
Virtue signaling is sometimes the best or the only metric we have 2022-04-28T04:51:01.817Z
Crucial considerations in the field of Wild Animal Welfare (WAW) 2022-04-10T19:43:42.537Z
Monitoring Wild Animal Welfare via Vocalizations 2021-12-10T00:37:24.098Z
Biology project in search of first author: imaging the brain of popular farmed insect Black Soldier Fly 2021-04-09T18:19:46.178Z
How bad is coronavirus really? 2020-05-08T17:29:18.439Z
1 min Key and Peele clip on children saved per dollar 2020-02-27T02:11:17.197Z
Why did MyGiving need to be replaced? And why is the EffectiveAltruism.org replacement so bad? 2019-10-04T19:35:31.716Z
Harvard's Agathon Career Fellowship: A Post-mortem 2019-09-04T00:29:37.709Z
The Turing Test podcast #8: Spencer Greenberg 2019-05-13T15:30:23.246Z
Scrupulosity: my EAGxBoston 2019 lightning talk 2019-05-02T23:08:25.894Z
I want an ethnography of EA 2019-05-02T20:33:38.915Z
The Turing Test podcast is back with Bryan Caplan! 2019-04-15T20:35:51.995Z
[blog cross-post] The remembering self needs to get real about the experiencing self. 2019-02-08T18:21:18.746Z
Who sets the read time estimates? 2018-12-28T16:41:26.014Z
[blog cross-post] On privacy 2018-12-28T15:41:29.448Z
[blog cross-post] potential lost; substance gained 2018-12-12T21:13:20.689Z
[blog cross-post] Charity hacks 2018-12-12T20:56:43.697Z
[blog cross-post] More on narcissism 2018-12-12T20:42:17.656Z
[blog cross-post] So-called communal narcissists 2018-12-12T20:37:24.046Z
We are in triage every second of every day 2016-08-26T20:20:21.300Z

Comments

Comment by Holly_Elmore on Optimizing seed:pollen ratio to spread ideas · 2022-09-21T05:54:34.277Z · EA · GW

Huh, I’ll try to fix that. Yes, it’s a PDF of the book on Robert Trivers’s ResearchGate page.

Comment by Holly_Elmore on Reducing nightmares as a cause area · 2022-07-20T08:28:15.686Z · EA · GW

> Consequently, it may be the case that a tremendous amount of negative subjective experience is transpiring without being remembered... One can imagine that many people go to hell for some period of their slumbers, migrate to subsequent dreams, and they have no memory of many of the horrid experiences they had.

Makes you think-- if we need these skills when we're asleep too, then being mentally healthy and able to respond flexibly and skillfully to distress is at least 1.66x as important as we usually think. Maybe even more important than that if the challenges we face in sleep are more extreme and less reality-constrained!

Comment by Holly_Elmore on Reducing nightmares as a cause area · 2022-07-20T08:24:11.586Z · EA · GW

> What if someone made a free website or app that walks people through the steps of imagery rehearsal treatment? Seems relatively low effort with a potentially high payoff.

I've always heard that it was hard to get the imagery rehearsal therapy you describe because not many practitioners are trained in it, so I like this idea! Maybe you could persuade people who want to make yet another meditation or CBT app to try this instead, or get this added to an existing CBT app.

And I love that you started with the most rough and ready possibility, literally just describing the steps and making them available. Is this something you could do? Could you oversee a bright high schooler as they did it over the summer?

Comment by Holly_Elmore on EA for dumb people? · 2022-07-20T08:17:56.330Z · EA · GW

(I have a lot of karma because I've been on here a long time)

Comment by Holly_Elmore on EA for dumb people? · 2022-07-20T08:16:03.874Z · EA · GW

I second this-- a lot of prominent EAs don't look at the Forum. I check the Forum about once a week on average and rarely post, despite this being where my research reports are posted. A lot of EA social engagement happens on Facebook and Discord, and discourse may take place on more specialized fora like the Alignment Forum or specific Slacks.

Comment by Holly_Elmore on EA for dumb people? · 2022-07-20T08:13:35.702Z · EA · GW

Exactly, it's an issue if people think the posts on here are all aimed at a general EA audience.

Comment by Holly_Elmore on [deleted post] 2022-07-20T08:09:05.999Z

It's tempting to think that criticizing an entire system or paradigm is higher leverage because the scale is larger, but I agree with this take that such broad criticism usually just blunts its own ability to accomplish anything and diffuses responsibility.

Comment by Holly_Elmore on My EA Failure Story · 2022-07-13T00:02:57.676Z · EA · GW

Another frame that may be comforting is that the expected value of all of these plans seems like it was positive. I'm sure there are things you could have done differently to improve your odds, but it doesn't sound like a better you would have chosen not to pursue these angles or not tried to maximize your positive impact.  Nobody has a guarantee that their plans will succeed-- all we can do is try to maximize EV, knowing that p < 1. Kudos to you for shooting your shot. I think you should get as much credit for that part as if you had succeeded. 

Comment by Holly_Elmore on EA for dumb people? · 2022-07-12T20:45:43.644Z · EA · GW

Yeah I would still love to see something like ethnographies of EA: https://forum.effectivealtruism.org/posts/YsH8XJCXdF2ZJ5F6o/i-want-an-ethnography-of-ea

Comment by Holly_Elmore on EA for dumb people? · 2022-07-12T20:43:27.148Z · EA · GW

Another issue here is that the EA Forum is used sort of as the EA research journal by many EAs and EA orgs, including my employer, Rethink Priorities. We sometimes post write-ups here that aren't optimized for the average EA to read at all, but are more for a technical discipline within EA.

Comment by Holly_Elmore on EA for dumb people? · 2022-07-12T20:24:12.427Z · EA · GW

> Whenever I come on the EA forum I literally feel like my brain is going to explode with some of the stuff that is posted on here, I just don't understand it.

Dude, I have a degree from Harvard but it's in biology and I feel this way about a lot of the AI stuff! I admire your humility but you might not be that dumb.

I think your critique is totally spot-on, and I think a better EA community would have room for all kinds of engagement. When longtermism became dominant (along with the influx of a lot of cash, so that we were more talent-constrained than money-constrained), we lost a lot of the activities that had brought the entire community together, like thinking a lot about how to save and donate money, or even much emphasis on having local communities. We also stopped evangelizing as much as our message got more complicated and we became focused on issues like AI alignment that require specific people more than a large group of people.

But even though I frequently say we should shore up the community by bringing back some focus on the original EA bread-and-butter causes like global health, I don't know if the current community is really making a mistake by focusing our limited efforts here. I think having more kinds of people in the community would be great, but not if it detracted from the kind of discourse that you're saying is over your head. I'm not sure how to pull this off.

Have you thought about organizing a group yourself to focus on the ideas you are interested in? I think it would be really good for the ivory-tower part of the community to have more classic EA groups out there, and it wouldn't be taking anyone's efforts away from this part of the community.

Comment by Holly_Elmore on We are in triage every second of every day · 2022-06-23T23:56:12.750Z · EA · GW

This is such a great summary and restatement! You suggest a shorter version of the piece and I think a longer version of this comment might do that job perfectly.

Comment by Holly_Elmore on Virtue signaling is sometimes the best or the only metric we have · 2022-06-21T18:52:10.028Z · EA · GW

I disagree-- Rationalists (well, wherever you want to put Bostrom) invented the term "infohazard." See Scott Alexander's "The Virtue of Silence." They take the risks of information as power very seriously, and if knowledge that P = NP posed a threat to lots of beings and they thought the best thing was to suppress it, they would. In my experience, both EAs and rationalists are very respectful of the need for discretion.

I think I see the distinction you're making and I think the general idea is correct, but this specific example is wrong.

Comment by Holly_Elmore on Holly_Elmore's Shortform · 2022-06-21T18:39:48.703Z · EA · GW

I can see good reasons for individual orgs to do that, but way fewer for EA writ large to do this. I'm with Rob Bensinger on this.

Comment by Holly_Elmore on Megaprojects for animals · 2022-06-19T18:40:28.962Z · EA · GW

I'm nervous about implementing AI solutions in the near term because, as you allude to, what they are used to achieve is a matter of who's programming them :/

Comment by Holly_Elmore on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-19T18:35:09.197Z · EA · GW

 > I do not want an epistemic culture that finds it acceptable to challenge an individual's overall credibility in lieu of directly engaging with their arguments.

I think it's fair to talk about a person's lifetime performance when we are talking about forecasting. When we don't have the expertise ourselves, all we have to go on is what little we understand and the track records of the experts we defer to. Many people defer to Eliezer so I think it's a service to lay out his track record so that we can know how meaningful his levels of confidence and special insights into this kind of problem are. 

Comment by Holly_Elmore on Holly_Elmore's Shortform · 2022-06-09T19:45:01.592Z · EA · GW

What's so weird to me about this is that EA has the clout it does today because of these frank discussions. Why shouldn't we keep doing that?

I'm in favor of not sharing infohazards but that's about the extent of reputation management I endorse-- and I think that leads to a good reputation for EA as honest!

Comment by Holly_Elmore on Holly_Elmore's Shortform · 2022-06-08T17:33:47.687Z · EA · GW

We all know EAs and rationalists are anxious about getting involved in politics because of the motivated reasoning and soldier mindset that it takes to succeed there (https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer).

Would it work to have a stronger distinction in our minds between discourse, which should stay pure from politics, and interventions, which can include, e.g., seeking political office or advocating for a ballot measure?

Since EA political candidacies are happening whether we all agree or not, maybe we should take measures to insulate the two. I like the "discourse vs. intervention" frame as a tool for doing that, either as a conversational signpost or possibly to silo conversations entirely. Maybe people involved in political campaigns should have to recuse themselves from meta discourse?

Comment by Holly_Elmore on The Rodent Birth Control Landscape · 2022-05-07T01:14:34.496Z · EA · GW

Thank you! I was hoping it would be useful to people to consult just the heading they were curious about.

Comment by Holly_Elmore on Effective altruism’s odd attitude to mental health · 2022-04-29T18:54:19.734Z · EA · GW

I think there's a conflation of "mental health is difficult to measure," "mental health interventions are difficult to study well," and "my mental health is such a winding road," such that people often kind of end up doubting that mental health interventions are worthwhile even while seeking them in their own lives.

Comment by Holly_Elmore on Virtue signaling is sometimes the best or the only metric we have · 2022-04-28T16:45:29.211Z · EA · GW

I just want to reiterate, I am not advocating doing something insincere for social benefit. I'm advocating getting and giving real data about character.

Comment by Holly_Elmore on Virtue signaling is sometimes the best or the only metric we have · 2022-04-28T16:28:22.863Z · EA · GW

lol, see the version of this on LessWrong to have your characterization of the rationalist community confirmed: https://www.lesswrong.com/posts/hpebyswwhiSA4u25A/virtue-signaling-is-sometimes-the-best-or-the-only-metric-we
> From an EA point of view, doing the most good is the most important thing, so socially-motivated virtue signaling is defensible if it consequentially results in more good.

EAs may be more likely to think this, but this is not what I'm saying. I'm saying there is real information value in signals of genuine virtue and we can't afford to leave that information on the table.  I think it's prosocial to monitor your own virtue and offer proof of trustworthiness (and other specific virtues) to others, not because fake signals somehow add up to good social consequences, but because it helps people to be more virtuous. 

Rationalists are erring so far in the direction of avoiding false or manipulative signals that they are operating in the dark, at the same time as they are advocating more and more opaque and uncertain ways to have impact. I think that by ignoring virtue and rejecting virtue signals, rationalists are not treating the truth as "the most important thing." (In fact, I think this whole orientation is a meta-virtue-signal that they don't need validation and don't conform-- which is a real virtue, but one I think is getting in the way of more important info.) It contradicts our values of truth- and evidence-seeking not to get what information we can about character, at least our own characters.

Comment by Holly_Elmore on Consider Not Changing Your Forum Username to Your Real Name · 2022-04-28T04:59:44.778Z · EA · GW

It would be so beneficial to me if there were a more standard "First Name Last Name" format for forum usernames, because it's a lot of cognitive overhead for me to keep up with abbreviations, common first names used alone, and open pseudonyms. Just the other day someone misattributed something Holly Morgan wrote to me. It's one thing if the account is anonymous and I'm not supposed to know who they are. It's quite another if I'm expected to recognize people's idiosyncratic naming or alt accounts. I'm not saying anyone's done anything wrong-- it just creates unnecessary friction in discourse.

Comment by Holly_Elmore on High absorbency career paths · 2022-04-14T17:51:34.769Z · EA · GW

Great idea. I notice a huge disconnect between the idealized rankings of high-impact careers 80K puts out and what it actually takes to move people on the ground into higher-impact roles, and the high emotional cost of trying to enter low-absorbency fields is definitely one of the factors. On a population level, I agree that it would probably be higher EV to recommend careers more people are likely to be able to enter successfully.

Comment by Holly_Elmore on Against the "smarts fetish" · 2022-04-14T17:44:33.381Z · EA · GW

> And beyond neglectedness, a reason to focus more on these other important traits relative to IQ — at the level of what we seek to develop individually and incentivize collectively — is that many of these other traits and skills probably are more elastic and improvable than is IQ

This is the most important thing to me. We're burning a lot of fuel proving that we have a good, (basically) fixed trait, and what's the point? What do we actually gain by knowing the exact smartness ranking of the people in EA? It just seems like a waste of time compared to learning things, gaining skills, forming new collaborations, etc.

It also disturbs me that being found to be smart seems to be its own reward, instead of an instrument for having a positive impact.

Comment by Holly_Elmore on Crucial considerations in the field of Wild Animal Welfare (WAW) · 2022-04-14T17:31:21.950Z · EA · GW

Good "catch"

Comment by Holly_Elmore on Crucial considerations in the field of Wild Animal Welfare (WAW) · 2022-04-13T19:39:58.739Z · EA · GW

I almost preemptively disavowed it lol

Comment by Holly_Elmore on Crucial considerations in the field of Wild Animal Welfare (WAW) · 2022-04-13T01:33:08.691Z · EA · GW

Okay, so it turns out the details of how that number was estimated are still unpublished, and I'll cite them as such along with that meme Peter shared.

Good catch, once again!

Comment by Holly_Elmore on Crucial considerations in the field of Wild Animal Welfare (WAW) · 2022-04-12T16:50:28.129Z · EA · GW

Oh shoot, you seem to be right. I must have left a link out. This is the fastest link I could find that makes reference to the Rethink Priorities findings just to give you guys some assurance: https://www.facebook.com/groups/OMfCT/posts/3060710004243897

I'll get a real one!

Comment by Holly_Elmore on Crucial considerations in the field of Wild Animal Welfare (WAW) · 2022-04-11T02:58:33.764Z · EA · GW

> It seems like it can just overpower/lock-in humans without obtaining these competencies (it doesn't even need to be AGI to be extremely dangerous).

Ideally, I think WAW would consider all the different AI timelines. TAI that just increases our industrial capacity might be enough to seriously threaten wild animals if it makes us even more capable of shaping their lives and we don't have considered values about how to look out for them.
> So it’s possible that relatively simple tools are sufficient to improve WAW, or at least the sophistication is orthogonal to AGI?

I agree! Personally, I don't think it's lack of intelligence per se holding us back from complex WAW interventions (by which I mean interventions that have to compensate for ripple effects on the ecosystem or require lots of active monitoring). I think we're more limited by the number of monitoring measurements we can take and by our ability to deliver specific, measured interventions at specific places and times. I think we could conceivably gain this ability with hardware upgrades alone, with no further improvement in algorithms.

Comment by Holly_Elmore on I want an ethnography of EA · 2022-04-11T02:47:27.288Z · EA · GW

I think that EAs are, at least ostensibly, very open to being studied and critiqued. They could be an excellent population for academic ethnographers, or simply a very compliant client community for action-oriented evaluation.

Comment by Holly_Elmore on I want an ethnography of EA · 2022-04-11T02:44:32.855Z · EA · GW

Oh man, I would love to try, even if all I do is locate and pay someone else who can find an ethnographer.

Comment by Holly_Elmore on We are in triage every second of every day · 2021-12-09T18:40:49.106Z · EA · GW

> I would guess that this post would be even better if it was more independent of the podcast episode.

I wish I had known it would be such a hit!

Comment by Holly_Elmore on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:41:07.567Z · EA · GW

Longtermism in its nascent form relies on a lot of guesstimates and abstractions that I think could be made more empirical and solid. Personally, I am very interested in asking whether people at a given time in the past had the information they needed to avoid disasters that occurred later. What kinds of catastrophes have humans been able to foresee, and when we were able to but didn't, what obstacles were in the way? History is the only evidence available in a lot of longtermist domains, and I don't see EA exploiting it enough.

Comment by Holly_Elmore on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:33:45.796Z · EA · GW

I've adjusted imperfectly to working from home, so anyone who has that strength in addition to my strengths would be better. I wish I knew more about forecasting and modeling, too.

Comment by Holly_Elmore on Rat population control: how to lack less information ? · 2021-11-09T01:37:54.866Z · EA · GW

Hey Alexandre, just wanted to note that ContraPest is not hormone-based, although it's true it has not gone through the regulatory approval process in Europe yet.

https://en.wikipedia.org/wiki/ContraPest

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-13T18:00:19.329Z · EA · GW

They can be blissful or terrifying depending on where in the brain they occur. I thought it was pretty well understood that locality is what determines the experience, not the harmonics of the seizure. Even if harmonics have something to do with it, I wouldn't say that experiences during seizures are evidence in favor of STV.

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-09T16:11:06.575Z · EA · GW

It's also worth noting there are a number of reasons I'm skeptical of the attraction to symmetry. I think it's reasoning from aesthetics that we have very good and well-understood reasons (not related to the nature of valence) to hold. And, if the claim is that the resonances are conveying the valence, highly synchronous or symmetrical states hold less information, so I'm skeptical that that would be a way of encoding valence. At best it's redundant as a way of storing information (at worst it's a seizure, where too many neurons are recruited away from doing their jobs to doing the same thing at once).

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-09T16:04:15.100Z · EA · GW

This is an interesting summary, and was basically what I guessed STV was getting at, but this is a hypothesis, not a theory. The hypothesis is: what if there is content in the symmetry encoded in various brain states? 

What I don't understand is how symmetry in brain readings is supposed to explain valence any better than, say, neurons firing in brain areas involved in attraction/repulsion. Is the claim that the symmetry is the qualia of valence? How would symmetries and resonance be exempt from the hard problem any more than neuronal activation?

 > How compelling this feels (and just feels!) to investigate is something most readers won't appreciate unless they've experienced altered states of consciousness themselves.

Do you think it should be compelling based on a trip? Is that real evidence? I'm not closed to the possibility in principle, but on an outside view it sounds like psychedelics just give you an attraction to certain shapes and ideas and a sense of insight. That might not be totally unrelated to a relevant observation about valence or qualia, but I don't see any reason to think psychedelics give you more direct access to the nature of our brains.

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-08T05:00:07.459Z · EA · GW

At the very least, miscommunication this bad is evidence of serious incompetence at QRI. I think you are mistaken to want to excuse that. 

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-08T04:54:37.833Z · EA · GW

> My promise is that "there is something here" and that to "get it" is not merely to buy into the theory blindly, but rather, it is what happens when you give it enough benefit of the doubt, share a sufficient number of background assumptions, and have a wide enough experience base that it actually becomes a rather obvious "good fit" for all of the data available.
> I started out very skeptical of STV myself, and in fact it took about three years of thinking it through in light of many meditation and exotic high-energy experiences to be viscerally convinced that it's pointing in the right direction.

It sounds like you're saying we all need to become more suggestible and just feel like your theory is true before we can understand it. Do you see what poor reasoning that would be?

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-08T04:18:06.777Z · EA · GW

> This is in fact the claim of STV, loosely speaking; that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria.

But did you have any reason to posit it? Any evidence that this identity is the case? 

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-08T04:14:57.308Z · EA · GW

> I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the "Keep EA Weird" spirit to me.

Keeping EA honest and rigorous is much higher priority. Making excuses for incompetence or lack of evidence base is the opposite of EA. 

Comment by Holly_Elmore on A Primer on the Symmetry Theory of Valence · 2021-09-08T03:47:52.701Z · EA · GW

> In 2016 I introduced the Symmetry Theory of Valence (STV) built on the expectation that, although the details of IIT may not yet be correct, it has the correct goal — to create a mathematical formalism for consciousness. STV proposes that, given such a mathematical representation of an experience, the symmetry of this representation will encode how pleasant the experience is (Johnson 2016). STV is a formal, causal expression of the sentiment that “suffering is lack of harmony in the mind” and allowed us to make philosophically clear assertions such as:
>
> • X causes suffering because it creates dissonance, resistance, turbulence in the brain/mind.
> • If there is dissonance in the brain, there is suffering; if there is suffering, there is dissonance in the brain. Always.
This makes absolutely no sense on its face. I am not a neuroscience expert. I am not a consciousness expert. I do not need to be to say that these conclusions just do not follow. 

To recap what you said: you start by saying that, if you could make a complete mathematical representation of the brain (IIT), it would be symmetric to the physical manifestation of the brain, and therefore pleasure would be included in the representation. Then you claim that STV is a formal and causal theory, without backing that up or explaining it at all. And then you just assert these ideas about dissonance and harmony being the structural correlates of suffering and pleasure!

You present this all as if you were building a case where one point leads to another. Perhaps it's just poor communication about a better idea, but what's here is very shoddy reasoning.

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-19T19:35:58.372Z · EA · GW

Seems like others agreed with you. I meant it mostly seriously. 

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-19T19:24:47.716Z · EA · GW

The more substantial point I'm trying to make is that the political balance of the EA Forum shouldn't be a big factor in someone's decision to publicize important information about a major charity evaluator, or probably even in how they frame the criticism. Many people read posts linked from the EA Forum who never read the comments or don't visit the Forum often for other posts, i.e., they are not aware of the overall balance of political sympathies on the Forum. The tenor of the Forum as a whole is something that should be managed to make EA welcoming or for the health of the community (though I wouldn't advocate doing that through self-censorship), but it's not that important compared to the quality of information accessible through the Forum, imo.

I'm a little offended at the suggestion that expressing ideas or important critiques of charities should in any way come second to diplomatic concerns about the entire Forum. 

Comment by Holly_Elmore on Are mice or rats (as pests) a potential area of animal welfare improvement? · 2021-05-12T17:21:49.951Z · EA · GW

I have been researching sterilizing rodents instead of killing them to control their populations, and it's much more popular already than I had realized. ContraPest is a bait that sterilizes rats with a few doses. It reduces sperm viability in males and induces aging of ovarian follicles in females, sort of like early menopause. There's a bit of a lag before the population shrinks, but it has the benefits of humaneness, not disturbing the rats' territories (because older rats stick around, preventing the movement between territories that can spread disease), and providing a better long-term maintenance solution. It's already widely used, and SenesTech, the company that makes it, has had big contracts with cities like NYC and Washington, DC.

I was very surprised to find out how widespread the use of sterilants already was considering I had not heard of them for rodent pest control until last year!

I think this is a good cause not only because it reduces harm to household pests, but because having to participate in cruelty toward animals can lead to cognitive dissonance or defensiveness of the status quo treatment of animals. Finding out about sterilants got me out of a binary way of thinking about rat infestation (it's them or me), and that's the kind of creative problem-solving we need if we're ever going to make real improvements in wild animal welfare.

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-06T22:16:08.564Z · EA · GW

Look who's never heard of intersectionality

Comment by Holly_Elmore on Concerns with ACE's Recent Behavior · 2021-05-06T22:15:34.445Z · EA · GW

I think this post is pretty damning of ACE. Are you saying the OP shouldn't have posted important information about how ACE is evaluating animal charities because there has been too much anti-SJ/DEI stuff on the Forum lately?

Comment by Holly_Elmore on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-04-30T21:15:17.371Z · EA · GW

Are you implying that Larry Summers was wrong or that Texaco's actions were somehow his fault?