Posts

DontDoxScottAlexander.com - A Petition 2020-06-25T23:29:46.491Z · score: 60 (33 votes)

Comments

Comment by ben-pace on How are the EA Funds default allocations chosen? · 2020-08-12T17:21:26.402Z · score: 3 (2 votes) · EA · GW

Interesting. Thank you very much.

Comment by ben-pace on How are the EA Funds default allocations chosen? · 2020-08-11T17:13:49.961Z · score: 10 (4 votes) · EA · GW

This seems to be a coincidence. Less than 10% of total donation volume is given according to the default allocation.

I roll to disbelieve? Why do you think this? Like, even if there’s slight variation I expect it’s massively anchored on the default allocation.

Comment by ben-pace on Donor Lottery Debrief · 2020-08-09T23:05:05.266Z · score: 3 (2 votes) · EA · GW

I think a lot of people in the Bay lack funding.

Comment by ben-pace on The 80,000 Hours job board is the skeleton of effective altruism stripped of all misleading ideologies · 2020-08-08T06:54:23.948Z · score: 9 (7 votes) · EA · GW

I'm not Oli, but jotting down some of my own thoughts: I feel like the job board gives a number of bits of useful selection pressure about which orgs are broadly 'helping out' in the world; out of all the various places people go in careers, it's directing a bit of energy towards some better ones. It's analogous to raising awareness of which foods are organic or something, which is only a little helpful for the average person, but creating that information can be pretty healthy for a massive population. I expect 80k was motivated to make the board because such a large number of people wanted their advice, and they felt that this was an improvement on the margin that would have a large effect if thousands of people tried to follow it.

Just as I wouldn't expect starting to eat organic food to be a massive change to your health, I wouldn't suddenly become excited about someone and their impact if they became the 100th employee at Johns Hopkins or the marginal civil servant in the UK government.

In fact (extending this analogy to its breaking point), nutrition is an area where it's hard to give general advice, the data mostly comes from low-quality observational studies, and the truth is you have to do a lot of self-experimentation and build your own models of the domain to get any remotely confident beliefs about your own diet and health. Similarly, I'm excited by people who try a lot of their own projects and have some successes at weird things, like forming a small team and creating a very valuable product that people pay a lot of money for, or doing weird but very insightful research (like Gwern or Scott Alexander to give obvious examples, but also things like this that take 20 hours and falsify a standard claim from psychology) - people who figure out for themselves what's valuable and try very, very hard to achieve it directly without waiting for others to give them permission.

Comment by ben-pace on EA Forum update: New editor! (And more) · 2020-08-04T17:23:13.428Z · score: 6 (3 votes) · EA · GW

Narrator: “He was right.”

Comment by ben-pace on A list of good heuristics that the case for AI X-risk fails · 2020-07-16T21:36:45.899Z · score: 9 (5 votes) · EA · GW

(I would include the original author’s name somewhere in the crosspost, especially at the top.)

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-26T01:48:18.814Z · score: 15 (5 votes) · EA · GW

+50 points for making UI mockups; it makes it much more likely you'll get the feature.

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-24T22:00:26.876Z · score: 2 (1 votes) · EA · GW

Hah! You're forgiven. I've seen this sort of thing a lot from users.

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-20T16:50:37.959Z · score: 4 (2 votes) · EA · GW

The new editor has this! :)

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T07:28:55.983Z · score: 2 (1 votes) · EA · GW
suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay?

Sure thing. The M:UoC post is more like a meditation on a theme - very well written, but less a key insight than an impression of a harsh truth - so it's hard to extract a core argument. I'd suggest the following from the Fuzzies/Utilons post instead. (It has about a paragraph cut in the middle, symbolised by the ellipsis.)

--

If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:

  • To purchase warm fuzzies, find some hard-working but poverty-stricken woman who's about to drop out of state college after her husband's hours were cut back, and personally, but anonymously, give her a cashier's check for $10,000.  Repeat as desired.
  • To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price.  Make a big deal out of it, show up for their press events, and brag about it for the next five years.
  • Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar.  Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.

But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time... Of course, if you're not a millionaire or even a billionaire—then you can't be quite as efficient about things, can't so easily purchase in bulk.  But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries.  Volunteer at a soup kitchen.  Or just get your warm fuzzies from holding open doors for little old ladies.  Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons.  Status is probably cheaper to purchase by buying nice clothes.

And when it comes to purchasing expected utilons—then, of course, shut up and multiply.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T07:16:12.117Z · score: 2 (1 votes) · EA · GW
If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

I see. As I hear you, it's not that we must go overboard in avoiding atheism, but that it's a small-to-medium-sized feather on the scales, which ends up being decision-relevant because there isn't an appropriately strong feather arguing that this essay deserves the space in this list.

From my vantage point, there aren't essays in this series that deal with giving up hope as directly as this essay. Singer's piece and the Max Roser piece both try to look at awful parts of the world and argue that you should do more, to make progress happen faster. Many essays, like the quote from Holly about being in triage, talk about the current rate of deaths and how to reduce that number. But I think none engage so directly with the possibility of failure, of progress stopping and never starting again. Existential risk is about this, but I think you don't even need to get to a discussion of things like maxipok and astronomical waste to bring failure onto the table in a visceral and direct way.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-09T23:35:55.022Z · score: 13 (5 votes) · EA · GW
As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

I think that if the essay said things like "Religious people are stupid, isn't it obvious?" and attempted to socially shame religious people, then I'd be pretty open to suggesting edits to such parts.

But as in my other comment, I would like to respect religious people enough to trust that they can read writing about a godless universe and understand the points well, even if they would use other examples themselves.

I also think many religious people agree that God will not stop the world from becoming sufficiently evil, in which case they'll be perfectly able to appreciate the finer points of the post even though it's written in a way that misunderstands their relationship to their religion.

Either way, I think that if they're going to engage seriously with intellectual thought in the modern world, they need to take responsibility and learn to engage with writing that doesn't assume there's an interventionist, aligned superintelligence (my terminology; I don't mean anything by it). I don't think it's right to walk on eggshells around religious people, and I don't think it makes sense to throw out powerful ideas and strongly emotional/artistic works just to make sure such people don't need to learn to engage with art and ideas that don't share their specific assumptions about the world.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there).

Checks out, that makes sense.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-09T23:35:06.598Z · score: 9 (2 votes) · EA · GW

*nods* I'll respond to the specific things you said about the different essays. I split this into two comments for length.

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents

I think there's a few pieces of jargon that you could change (e.g. Unit of Caring talks about 'akrasia', which isn't relevant). I imagine it'd be okay to request a few small edits to the essay.

But I think that overall the posts talk like how experts would talk in an interview: directly and substantively. I don't think you should be afraid to show people a high-level discussion just because they don't know all of the details being discussed already. It's okay for there to be details that a reader has only a vague grasp on, if the overall points are simple and clear – I think this is good; it helps them see that there are levels above to reach.

It's like how EA student group events would always be "Intro to EA". Instead, I think it's really valuable and exciting to hear how Daniel Kahneman thinks about the human mind, or how Richard Feynman thinks about physics, or how Peter Thiel thinks about startups, even if you don't fully understand all the terms they use like "System 1 / System 2" or "conservation law" or "derivatives market". I would give the Feynman lectures to a young teenager who doesn't know all of physics, because he speaks in a way that gets to the essential life of physics so brilliantly, and I think that giving it to a kid who is destined to become a physicist will leave the kid in wonder and wanting to learn more.

Overall I think the desire to remove challenging or nuanced discussion is a push in the direction of saying boring things, or not saying anything substantive at all because it might be a turn-off to some people. I agree that Paul Graham's essays are always written in simple language, but I don't think that scientists and intellectuals should aim for that all the time when talking to non-specialists. Many of the greatest pieces of writing I know use very technical examples or analogies, and that's necessary to make their points.

See the graph about dating strategies here. The goal is to get strong hits that make a person say "This is one of the most important things I've ever read", not to make sure that there are no difficult sentences that might be confusing. People will get through the hard bits if there are true gems there, and I think the above essays are quite exciting and deeply change the way a lot of people think.

Comment by ben-pace on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-06-05T02:54:38.133Z · score: 4 (2 votes) · EA · GW

Congratulations! I'm happy to hear that.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-04T19:21:46.096Z · score: 8 (4 votes) · EA · GW

That makes me happy to hear :)

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-03T21:20:39.527Z · score: 33 (10 votes) · EA · GW

I thought a bit about essays that were key to my becoming more competent and able to take action to improve the world, and that connected to what I cared about. I'll list some and the ways they helped me. (I filled out the rest of the feedback form too.)

---

Feeling moral by Eliezer Yudkowsky. Showed me an example where my deontological intuitions were untrustworthy and that simple math was actually effective.

Purchase Fuzzies and Utilons Separately by Eliezer. Showed me where attempts to do good can get very confused and simply looking at outcomes can avoid a lot of problems from reasoning by association or by what's 'considered a good idea'.

Ends Don’t Justify Means (Among Humans) by Eliezer. Helped me understand a very clear constraint on naive utilitarian reasoning, which avoided worlds where I would naively trust the math in all situations.

Dive In by Nate Soares. Helped point my flailing attempts to improve and do better in a direction where I would actually get feedback. Only by actually repeatedly delivering a product, even if you changed your mind 10 times a day about what you should be doing and whether it was valuable, can you build up real empirical data about what you can accomplish and what's valuable. Encouraged me to follow through on projects a whole lot more.

Beyond the Reach of God by Eliezer. This helped ground me; it helped me point at what it's like to have false hope and false trust, and recognise it more clearly in myself. I think it's accurate to say that looking directly and with precision at the current state of the world involves trusting the world a lot less than most people do, and a lot less than establishment narratives would say (Steven Pinker's "Everything is getting better and will continue to get better" isn't the right way to conceptualise our position in history; there's much more risk involved than that). A lot of important improvements in my ability to improve the world have involved realising I had unfounded trust in people or institutions, and that unless I took responsibility for things myself, I couldn't trust they would work out well by default. This essay was one of the first places I clearly conceptualised what false hope feels like.

Money: The Unit of Caring by Eliezer. Similar things to the Fuzzies and Utilons post, but a bit more practical. And Kelsey named her whole Tumblr after this, which I guess is a fair endorsement.

Desperation by Nate. This does similar things to Beyond the Reach of God, but in a more hopeful way (although it's called 'Desperation', so how hopeful can it be?). It helped me conceptualise what it looks like to actually try to do something difficult that people don't understand or think looks funny, and to notice whether or not it was something I had been doing. It also helped me notice (more cynically) that a lot of people weren't doing things that looked like this, and to not try to emulate those kinds of people so much.

Scope Insensitivity by Eliezer. Similar things to Feeling Moral, but a bit simpler / more concrete and tries to be actionable.

--

Some that I came up with that you already included:

  • On Caring
  • 500 Million, But Not A Single One More

It's odd that you didn't include Scott Alexander's classic on Efficient Charity or Eliezer's Scope Insensitivity, although Nate's "On Caring" maybe is sufficient to get the point about scope and triage across.

Comment by ben-pace on EA Forum: Data analysis and deep learning · 2020-05-13T05:45:10.618Z · score: 3 (4 votes) · EA · GW

This is awesome.

Comment by Ben Pace on [deleted post] 2020-04-03T21:19:27.887Z

+1

Comment by ben-pace on April Fool's Day Is Very Serious Business · 2020-04-02T07:04:54.412Z · score: 4 (2 votes) · EA · GW

I'm sorry, I've been overwhelmed with things lately, I didn't get round to it. But please do something similar next year!

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T20:07:53.352Z · score: 4 (3 votes) · EA · GW

I think your questions are great. I suggest that you leave 7 separate comments so that users can vote on the ones that they’re most interested in.

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T17:57:01.891Z · score: 2 (1 votes) · EA · GW

This is such an odd question. Could produce surprising answers though, if it’s something like “the least interesting ideas that people still took seriously” or “the least interesting ideas that are still a little bit interesting”. Upvoted.

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:25:17.152Z · score: 26 (12 votes) · EA · GW

What's a regular disagreement that you have with other researchers at FHI? What's your take on it and why do you think the other people are wrong? ;-)

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:21:06.609Z · score: 2 (1 votes) · EA · GW

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of engineered pandemics?

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:19:33.723Z · score: 11 (5 votes) · EA · GW

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of misaligned artificial general intelligence?

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:18:29.463Z · score: 10 (6 votes) · EA · GW

Can you tell us something funny that Nick Bostrom once said that made you laugh? We know he used to do standup in London...

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:17:45.801Z · score: 23 (14 votes) · EA · GW

We're currently in a time of global crisis, as the number of people infected by the coronavirus continues to grow exponentially in many countries. This is a bit of a hard question, but a time of crisis is often when governments substantially refactor things, because it's finally transparent that they're not working. So: can you name a feasible, concrete change in the UK government (or a broader policy for any developed government) that you think would put us in a far better position for future such situations, especially future pandemics that have a much more serious chance of being an existential catastrophe?

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:16:53.458Z · score: 16 (8 votes) · EA · GW

Can you tell us a specific insight about AI that has made you positively update on the likelihood that we can align superintelligence? And a negative one?

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:16:37.087Z · score: 8 (5 votes) · EA · GW

What's a book that you've read that has impacted how you think / who you are, and that you expect most people here won't have read?

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T05:16:20.106Z · score: 16 (9 votes) · EA · GW

What are the three most interesting ideas you've heard in the last three years? (They don't have to be the most important, just the most surprising/brilliant/unexpected/etc.)

Comment by ben-pace on April Fool's Day Is Very Serious Business · 2020-03-13T19:56:23.816Z · score: 4 (2 votes) · EA · GW

Sure! I'm down to write a new top EA cause on April 1st.

Comment by ben-pace on April Fool's Day Is Very Serious Business · 2020-03-13T19:56:01.696Z · score: 4 (2 votes) · EA · GW

I like having it be an answer to a question. If you'd like to write a top-level post, you can always link to it from your answer.

Comment by ben-pace on EA Forum Prize: Winners for January 2020 · 2020-03-02T23:38:23.056Z · score: 4 (4 votes) · EA · GW

Lol, I got a prize for that massive rambly comment that nobody replied to. Thank you very much :)

Comment by ben-pace on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T19:44:12.491Z · score: 7 (4 votes) · EA · GW
grantees might be discouraged from applying due to concerns about publicizing their personal lives

This is a good point. While I think the disclosure policy is correct, some mitigation is possible - it's probably good to mention in the policy that all public writeups are seen by grantees before being published, so they will not be blindsided by private information being published about them.

Comment by ben-pace on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T19:28:50.756Z · score: 13 (5 votes) · EA · GW

Using the information available to you, and not excluding a person's judgment in situations where they could reasonably be called 'biased', is standard practice in places like Y Combinator and OpenPhil. OpenPhil writes about this in the classic post Hits-based Giving. A relevant quote (emphasis in original):

We don’t: put extremely high weight on avoiding conflicts of interest, intellectual “bubbles” or “echo chambers.”
...In some cases, this risk may be compounded by social connections. When hiring specialists in specific causes, we’ve explicitly sought people with deep experience and strong connections in a field. Sometimes, that means our program officers are friends with many of the people who are best suited to be our advisors and grantees.
...it sometimes happens that it’s difficult to disentangle the case for a grant from the relationships around it.[2] When these situations occur, there’s a greatly elevated risk that we aren’t being objective, and aren’t weighing the available evidence and arguments reasonably. If our goal were to find the giving opportunities most strongly supported by evidence, this would be a major problem. But the drawbacks for a “hits-based” approach are less clear, and the drawbacks of too strongly avoiding these situations would, in my view, be unacceptable.
Comment by ben-pace on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T19:23:41.067Z · score: 9 (2 votes) · EA · GW
I can't imagine myself being able to objectively cast a vote about funding my room-mate / close friend / partner's boss / someone who I had a substantial romantic relationship with that ended 2 years ago (especially if the potential grantee is in a desperate financial situation!). I'm skeptical that humans in general can make reasonably objective judgments in such cases.

(emphasis added)

This isn't a point about the OP, but I thought I'd mention that I think humans can make these choices, if they have the required discipline and virtue, and I think in many situations we see that.

When you're the CEO of a successful company, you often have very close relationships with the 5-20 staff closest to you. You might live / have lived with them, work with them for hours every day, be good friends with them, etc. Many CEOs make sensible decisions about when to move these people around and when to fire them - it's not remotely standard practice to 'recuse' yourself from such decisions, because you're the person with the most information about the person and about how the organisation works. If you actually care about those things enough, and are competent enough to know your own mind and surround yourself with good people and a healthy environment, you can be massively successful at making these decisions. I think this is true in other groups as well - I expect many people are pretty good at deciding e.g. whether a close friend is unhealthy for them and whether they want to cut ties.

I agree many people make quite unfortunate decisions here, but it is no iron law of psychology that 'humans in general' cannot make 'reasonably objective judgments' in such cases.

Comment by ben-pace on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-05T01:06:36.099Z · score: 4 (4 votes) · EA · GW

I'm tapping out of this discussion. I disagree with much of the above, but I cannot respond to it properly for now.

Comment by ben-pace on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T23:53:26.791Z · score: 15 (13 votes) · EA · GW
refraining from 'morbid' topics for betting only excludes a small minority of questions one can bet upon

This is directly counter to my experience of substantive and important EA conversation. All the topics I'm interested in are essentially morbid topics when viewed in passing by a 'person on the street'. Here are examples of such questions:

  • How frequently will we have major pandemics that kill over N people?
  • How severe (in terms of death and major harm) will the worst pandemic in the next 10 years be?
  • How many lives are saved by donations to GiveWell recommended charities? If we pour 10-100 million dollars into them, will we see a corresponding decline in deaths from key diseases globally?
  • As AI gets more powerful, will we get warning shots across the bow that injure or kill <10,000 people, with enough time for us to calibrate to the difficulty of the alignment problem, or will it be more sudden than that?

Like, sometimes I even just bet on ongoing death rates. Someone might say to me "The factory farming problem is very small of course" and I'll reply "I will take a bet with you, if you're so confident. You say what you think it is, I'll say what I think it is, then we'll use Google to find out who's right. Because I expect you'll be wrong by at least 2 orders of magnitude." I'm immediately proposing a bet on the number of chickens being murdered per year, or some analogous number. I also make similar bets when someone says a problem is big or small, e.g. "Ageing/genocide/cancer is/isn't very important" -> "I'll take a bet on the number of people who've died from it in the last 10 years."

Comment by ben-pace on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-02T06:05:20.414Z · score: 5 (3 votes) · EA · GW

(+1 to Oli's reasoning - I have since removed my downvote on that comment.)

Comment by ben-pace on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T08:17:34.473Z · score: 15 (11 votes) · EA · GW

I strongly object to saying we're not allowed to bet on the most important questions - questions of life or death. That's like deciding to take the best person off the team defending the president. Don't handicap yourself when it matters most. Betting is the tool that stops us from just talking hot air, and that records which people are actually able to make correct predictions. These are some of the most important bets on the forum.

Comment by ben-pace on AMA: Rob Mather, founder and CEO of the Against Malaria Foundation · 2020-01-27T19:46:28.140Z · score: 5 (3 votes) · EA · GW
‘Do give money to these 3 charities. Don’t give money to these 132.’ I liked those numbers.

Exactly! I loved that too when I first discovered it as a teenager. Thx for the reply.

Comment by ben-pace on AMA: Rob Mather, founder and CEO of the Against Malaria Foundation · 2020-01-27T19:08:38.804Z · score: 11 (8 votes) · EA · GW

What was it like finding out about GiveWell - what was your initial impression? Did you think it was weird that these people were trying to evaluate charities? I'd also be interested to know what led you to think they were worth interacting with.

Comment by ben-pace on In praise of unhistoric heroism · 2020-01-08T08:05:48.756Z · score: 4 (4 votes) · EA · GW

Are you... really eli?

(i.e. the eli mentioned in the post)

Comment by ben-pace on In praise of unhistoric heroism · 2020-01-08T08:02:13.707Z · score: 27 (15 votes) · EA · GW

Epistemic Status: Thinking out loud.

Also: Pardon the long comment, I didn't have the time to write a short one. No one is under any obligation to address everything or even most things I said when writing replies.

During the past 4 years of being involved in-person with EA, my instinctive reaction to this problem has been mostly to argue against it whenever anyone tells me they personally think like this.

I think I can 'argue' convincingly against doing the things on your list, or at least the angst that comes associated with them.

  • I talk about how you should compare yourself to reality, not to other people. I talk about the fact that if I generalise your reasoning, this means you think all your friends should also feel bad about themselves - and also (to borrow your example) so should Carl Shulman. He could be 100x more impactful. He's not the single most impactful person, and this is a heavy-tailed world, so apparently he should feel like barely a percentage point of worth when comparing himself to some other people.
  • I bring up Sam Altman's line, where he says the most impactful people "Spend ~1 year exploring broadly, ~4 years relentless focus executing on the most interesting direction, repeat" which is in direct conflict with "[constantly] obsess about their own personal impact and how big it is" and "generally feel miserable about themselves because they’re not helping the world more". Altman's quote is about allocating time to different kinds of thinking and not thinking all the thoughts all the time.

In line with the latter, I often try to identify the trains of thought running through someone's mind that are causing them to feel pain, and try to help them bucket times for dealing with them, rather than them being constant.

I have conversations like this:

I hear that you are constantly worrying about how much money you're spending because you could be donating it. I think your mental space is very important, so let me suggest that instead of filling it with this constant worry, you could set a few hours aside every month to figure out whether your current budget is okay, and otherwise not think about it.
Would you trust that process? Can we come up with a process you would trust? Do you want to involve your friends in your process to hold you accountable? Do you want to think about it fortnightly? Do you want to write your conclusion down on paper and stick it to your wall?
It'd be good to have a commitment like this to rely on. Rather than comparing your moral worth to a starving African child every time you're hungry and need food, I want you to be able to honestly say to yourself "I've set time aside for figuring out what financial tradeoffs I can make here, and I trust myself to make the right call at that time, so thinking about it more at this moment now isn't worthwhile, and I follow what I decided to do."

And yet somehow, given what has always felt to me like a successful attempt to clearly lay out the considerations, the problem persists, and people are not reliably cured after I talk to them. I mean, I think I have helped, and helped some people substantially, but I've not solved the general problem.

When a problem persists like this, especially for numerous people, I've started to look instead to incentives and social equilibria.

Incentives and Social Equilibria in EA

Here is a different set of observations.

A lot of the most successful parts of EA culture are very mission-oriented. We're primarily here to get sh*t done, and this is more central than finding friends or feeling warm fuzzies.

EA is the primary place in the world for smart young people who reason using altruism and empathy to make friends with lots of others who think in similar ways, and to get advice about how to live their lives and careers.

EA is new, it's young, and it's substantially built over the internet, and doesn't have many community elements to it. It's mostly a global network of people with a strong intellectual and emotional connection, rather than a village community where all the communal roles can be relied on to be filled by different townsfolk (caretakers, leaders, parents, party organisers, police, lawyers, etc).

Large EA social events are often the primary way many people interact with people who may hire them in the future, or whom they may wish to hire. For many people who identify as "EA", this is also the primary environment in which they are able to interact with widely respected EAs who might offer them jobs some day. This is in contrast with parties within major companies or universities, where there is a very explicit path in your career that will lead to you being promoted. In OpenPhil's RA hiring round, I think there were over 1000 applications, of which I believe they have hired and kept 4 people. Other orgs' hiring is similarly slow. This suggests that in general you shouldn't expect to be able to have a career progression within orgs run by the most widely respected EAs.

Many people are trying to devote their entire lives to EA and EA goals, and give up on being committed members of other cultures and communities in the pursuit of this. (I was once at a talk where Anna Salamon noted, with sadness, that many people seem to stop having hobbies as they moved closer into EA/Rationality.)

This puts a very different pressure on social events. Failing to impress someone at a party or other event sometimes feels not merely like a social disappointment, but also a setback for your whole career, financial security, and social standing among your friends and acquaintances. If the other people you mainly socialise with also attend those parties (as is true for me), in many ways these large events set the norms for social events in the rest of your life, with other things being heavily influenced by the dynamics of what is rewarded/punished in those environments.

I think this puts many people in bad negotiating positions. With many other communities (e.g. hobby communities built around sports/arts etc, or professional communities that are centuries old like academia/finance/etc), if the one you're in isn't healthy for you, it's always an option to find another sport or another company. But, speaking personally, I don't feel there are many other communities that are going to be able to proactively deal with the technological challenges of this century, that are smart and versatile and competent, and that care enough about humanity and its future to work on the existential problems. I mean, it's not like there aren't other places I could do good work, but I'd have to sacrifice a lot of who I am and what I care about to feel at home within them. So leaving doesn't tend to feel like much of an option (and I haven't even written about all the evolutionary parts of my brain screaming at me never to do anything socially risky, never mind decide to leave my tribe).

So the standards of the mission are used as the standards of the community, and the community is basically hanging off of much of the mission, and that leads people to use those standards in places one would never normally apply them (e.g. self-worth and respect from friends).

Further Thoughts

Hmm, on reflection, something about the above feels a bit stronger than the truth (read: false). As with other healthy professional communities, I think in many parts of EA and rationality the main way to get professional respect is to actually build useful things and have interesting ideas, far more than to have good social interactions at parties[1]. I'm trying to talk about the strange effects it has when there's also something like a community or social group built around these groups, that people devote their lives to, and that isn't massively selective - insofar as it's not just the set of people who work full-time on EA projects, but anyone who identifies with EA or likes it.

I think it is interesting, though, to try to think of a fairly competent company with 100s of employees, and to imagine what would happen if a group of people tried to build their entire social life around the network inside that company, and genuinely tried to live in accordance with the value judgements the company made, where the CEO and top executives were the most respected. Not only would this community exist inside the company, but lots of other people who like what the company is doing would turn up to the events, and they too would be judged precisely in accordance with how much utility they're providing the company and how they're evaluated by it. And they'd keep trying to get hired by the company, even though there are more people in the community than in the company by like 10x, or maybe 100x.

I think that's a world where I'd expect to see blogposts, by people both in the community and throughout the company, saying things like "I know we all try to judge ourselves by where we stand in the company, but if you die having never become a top executive or even gotten hired, maybe you shouldn't feel like your life has been a tragic waste?" And these get mixed into weird, straightforwardly false messages that people sometimes say behind closed doors just to keep themselves sane, like "Ah, it only matters how much you tried, not whether you got hired" and "Just caring about the company is enough, it doesn't matter if you never actually helped the company make money."

When the company actually matters, and you actually care about outcomes, these memes are at best unhelpful. But when the majority of community members around the company can't do anything to affect its trajectory, and the community uses this standard in place of other social standards, these sorts of memes are what people use to avoid losing their minds.

--

[1] Also, EA (much more than the LessWrong in-person diaspora) has parts that aren't trying to be a community or a company, but are trying to be a movement, and that has further weird interactions with the other parts.


Comment by ben-pace on Did Fortify Health receive $1 million from EA Funds? · 2019-12-21T02:57:48.969Z · score: 3 (2 votes) · EA · GW

You included a full-stop at the end of the link, so it goes to a broken page ;)

Comment by ben-pace on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-10T20:55:53.613Z · score: 8 (5 votes) · EA · GW

Here, I think you dropped this: )

Comment by ben-pace on EA Meta Fund November 2019 Payout Report · 2019-12-10T20:55:37.489Z · score: 17 (8 votes) · EA · GW

I think in general that grantmakers probably should have a rule against publicly listing people they rejected, because it can feel very judgmental/shaming for the applicants, and it can discourage good future applicants who are sensitive to that kind of thing and would worry about it happening to them.

That said, I think it's definitely positive for applicants who are happy to have it be public that they applied and were rejected to ask for feedback publicly.

Comment by ben-pace on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-26T20:24:14.963Z · score: 1 (1 votes) · EA · GW

Do you mean

that agents should in general NOT make decisions by carrying out utilitarian reasoning.

It seems to better fit the pattern of the example just prior.

Comment by ben-pace on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T09:54:46.181Z · score: 5 (3 votes) · EA · GW

I’m having a hard time understanding whether everything below the dotted lines is something you just wrote, or a full quote from an old thread. The first time I read it I thought the former, and on reread think the latter. Might you be able to make it more explicit at the top of your comment?

Comment by ben-pace on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T04:01:28.244Z · score: 20 (7 votes) · EA · GW

Thanks :)

I'm hearing "the current approach will fail by default, so we need a different approach. In particular, the new approach should be clearer about the reasoning of the AI system than current approaches."

Noticeably, that's different from a positive case that sounds like "Here is such an approach and why it could work."

I'm curious how much of your thinking is currently split between the two rough possibilities below.

First:

I don't know of another approach that could work, so while I maybe personally feel more of an ability to understand some people's ideas than others, many people's very different concrete suggestions for approaches to understanding these systems better are all arguably similar in terms of how likely we should think they are to pan out, and how much resources we should want to put behind them.

Alternatively, second:

While it's incredibly difficult to communicate mathematical intuitions of this depth, my sense is I can see a very attractive case for why one or two particular efforts (e.g. MIRI's embedded agency work) could work out.
Comment by ben-pace on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T03:29:42.283Z · score: 3 (2 votes) · EA · GW

Hmm, I'm surprised to hear you say that about the second story, which I think is describing a fairly fast end to human civilization - "going out with a bang". Example quote:

If influence-seeking patterns do appear and become entrenched, it can ultimately lead to a rapid phase transition from the world described in Part I to a much worse situation where humans totally lose control.

So I mostly see it as describing a hard take-off, and am curious whether there's a key part of a fast / discontinuous take-off that you think of as central that is missing there.