Posts

“How many people might ever exist, calculated” by Primer [Video] 2022-08-16T16:33:30.270Z
Beware ethical systems without repugnant conclusions 2022-08-02T16:47:11.705Z
If EA is no longer funding constrained, why should *I* give? 2022-05-14T10:44:10.666Z

Comments

Comment by Ezra Newman on Criticism of the 80k job board listing strategy · 2022-09-21T16:01:09.530Z · EA · GW

See also https://www.effectivejobsboard.org/, courtesy of Nathan Young and Superlinear.

Comment by Ezra Newman on It’s not effective to call everything effective and how (not) to name a new organisation · 2022-09-16T00:17:08.600Z · EA · GW

FWIW, every time I read “EV Ops” I have to remind myself that it’s not Expected Value Ops. (That being said, I don’t know anything about marketing; n=1.)

Comment by Ezra Newman on 21 criticisms of EA I'm thinking about · 2022-09-06T23:38:25.006Z · EA · GW

Easy context: 14.) I don't think we pay enough attention to some aspects of EA that could be at cross-purposes

Comment by Ezra Newman on Beware ethical systems without repugnant conclusions · 2022-09-05T02:53:09.599Z · EA · GW

This is a good point, sorry for getting back to it so late.

One idea I cut from the post: I think scope insensitivity means we should be suspicious of our gut intuitions in situations dealing with lots of people, so I think that’s another point in favor of accepting the RC. My main goal with this point was to suggest this central idea: “sometimes trust your ethical framework in situations where you expect your intuition to be wrong.”


That being said, the rest of your point still stands.

Comment by Ezra Newman on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-08-30T08:14:17.141Z · EA · GW

I said this on Twitter, but: this is really great! (Also very glad it’s coming directly to my inbox!)

Comment by Ezra Newman on Questioning the Foundations of EA · 2022-08-28T20:23:26.554Z · EA · GW

This is a good point, I guess.

Comment by Ezra Newman on Questioning the Foundations of EA · 2022-08-28T05:14:30.381Z · EA · GW

From my (new since you asked this) reply to Kirmani’s comment:

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large-scale intuition and your small-scale intuition.

Honestly, it’s a pretty specific argument/recommendation, so I’m having trouble thinking of another example that adds something. Maybe the difference between how I feel about my dog vs. farmed animals, or near vs. far people. If it would help you or someone else, I can spend some more time thinking of one.

Comment by Ezra Newman on Questioning the Foundations of EA · 2022-08-28T05:08:27.513Z · EA · GW

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large-scale intuition and your small-scale intuition.

Specifically:

  • Have as much sex as you want (with a consenting adult, etc.). Have as many children as you can reasonably care for. But even if you disagree with that, I don’t think this is a good counterexample. It’s not a conflict between small-scale beliefs and large-scale beliefs. 
  • This is new information, not a small-large conflict. 
  • Same as above. 
Comment by Ezra Newman on Questioning the Foundations of EA · 2022-08-27T21:58:53.144Z · EA · GW

In response to “Shut Up and Divide:”

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small. A tribe might have at most a few hundred people, which happens to be ~where your naive intuition stops scaling linearly.

So it seems like your default behavior should be extended to your new circumstances instead of extending your new circumstances to your default state.

(Although I think SUAD might be useful for not getting trapped caring too much about unimportant news, for example.)

(I’m writing this on my phone, so please correct typos more charitably than you otherwise would. For the same reason, this is fairly short; please steelman with additional details as necessary to convince yourself.)

Comment by Ezra Newman on Digital Networking for Dummies · 2022-07-09T05:09:12.806Z · EA · GW

I have 13 followers, and most of those are friends or coworkers, so I don’t feel qualified to be that someone. But I would also love to see this!

Comment by Ezra Newman on Digital Networking for Dummies · 2022-07-08T23:05:43.141Z · EA · GW

FWIW, this has worked for me too. I got hired this summer (college freshman) because I was impressed with + interested by some GPT-3 stuff that Peter Wildeford was doing on Twitter and wanted to try it myself. Those tweets got me hired!


TLDR: Tweet about interesting stuff and reply to people you think are smart!

Comment by Ezra Newman on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-22T00:17:25.322Z · EA · GW

Okay, nevermind then!

Comment by Ezra Newman on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-21T23:13:46.301Z · EA · GW

Do you mean for the title to say "<= 3 mins"? I think you have your ">" inverted. (It took me about 3 minutes for the first section, and about 10 minutes all-in)

Comment by Ezra Newman on Transcripts of interviews with AI researchers · 2022-05-11T04:14:15.554Z · EA · GW

With all the “AI psychology” posts on here and Twitter, I thought this was going to be “interviews with AIs that are researchers,” not “interviews with humans researching AI.” This is probably more valuable!

Comment by Ezra Newman on Why do you care? · 2022-05-08T16:06:11.534Z · EA · GW

My justification is pretty simple:

  1. I like being happy and not having malaria and eating food.

  2. I appear to be fundamentally similar to other people.

  3. Therefore, other people probably want to be happy and not have malaria and have food to eat.

  4. I don’t appear to be special, so my interests shouldn’t be prioritized more than my fair share.

  5. Therefore, I should help other people more than I help myself, because there are more of them and they need more help.