Posts

DontDoxScottAlexander.com - A Petition 2020-06-25T23:29:46.491Z · score: 60 (33 votes)

Comments

Comment by ben-pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T18:28:59.771Z · score: 4 (2 votes) · EA · GW

Why are your comments hidden on the EA Forum?

Added: It seems the author moved the relevant post back into their drafts.

Comment by ben-pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T05:52:14.760Z · score: 16 (8 votes) · EA · GW

No it's not! Avoiding the action because you know you'll be threatened until you change course is the same as submitting to the threat.

Comment by ben-pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T03:42:39.364Z · score: 4 (3 votes) · EA · GW

:)

Comment by ben-pace on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T00:46:09.696Z · score: 14 (10 votes) · EA · GW

By the way Ian, I've not followed these posts in great detail and I mostly think getting involved in partisan politics in most straightforward ways seems like a bad idea, but I've really appreciated the level of effort you've put in and are clearly willing to put in to have an actual conversation about this (in comments here, with Wei Dai, with others). It's made me feel more at home in the Forum. Thank you for that.

Comment by ben-pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T23:02:38.428Z · score: 11 (9 votes) · EA · GW

You have to understand, Rohin, that in all of the situations where you tell me what the threat is, I'm very motivated to do it anyway. It's an emotion of stubbornness and anger, and when I flesh it out in game-theoretic terms it's a strong signal of how unwilling I am to submit to threats in general.

Returning to the emotional side, I want to say something like "f*ck you for threatening to kill people, I will never give you control over me and my community, and we will find you and we will make sure it was not worth it for you, at the cost of our own resources".

Comment by ben-pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T22:55:46.993Z · score: 13 (9 votes) · EA · GW

It's a good question. I've thought about this a bit in the past.

One surprising rule is that overall I think people with a criminal record should still be welcome to contribute in many ways. If you're in prison, I think you should generally be allowed to e.g. submit papers to physics journals; you shouldn't be precluded from contributing to humanity and science. Similarly, I think giving remote talks and publishing on the EA Forum should not be totally shut off (though likely hampered in some ways) for people who have behaved badly and broken laws. (Obviously different rules apply for hiring them and inviting them to in-person events, where you need to look at the kind of criminal behavior and see if it's relevant.)

I feel fairly differently about people who have done damage within the EA community and to its members. Someone like Gleb Tsipursky hasn't even broken any laws, and should still be kicked out and not welcomed back for something like 10 years, and even then he probably won't have changed enough (most people don't).

In general EA is outcome-oriented; it's not a hobby community, and there's sh*t that needs to be done because civilization is inadequate and literally everything is still at stake at this point in history. We want the best contributions, and we care about that to the exclusion of people being fun or something. You hire the best person for the job.

There's some tension there, and I think overall I am personally willing to put in a lot of resources in my outcome-oriented communities to make sure that people who contribute to the mission are given the spaces and help they need to positively contribute.

I can't think of a good example that isn't either about a literal person or too abstract... like, suppose Einstein has terrible allergies to most foods, and just can't be in the same space as them. Can we have him at EAG? How much work am I willing to put in for him to have a good EAG? Do I have to figure out a way to feed everyone a very exclusive yet wholesome diet that means he can join? Perhaps.

Similarly, if I'm running a physics conference and Einstein is in prison for murder, will I have him in? Again, I'm pretty open to video calls, and I'm pretty willing to put in the time to make sure everyone knows what sort of risk he poses, and to make sure he isn't allowed to end up in a vulnerable situation with someone, because it's worth it for our mission to have him contribute.

You get the picture. Y'know, tradeoffs, where you actually value something and are willing to put in extraordinary effort to make it work.

Comment by ben-pace on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T22:38:30.847Z · score: 7 (5 votes) · EA · GW

Thx for the long writeup. FWIW I will share some of my own impressions.

Robin's one of the most generative and influential thinkers I know. He has consistently produced fascinating ideas and contributed to a lot of the core debates in EA, like giving now vs later, AI takeoff, prediction markets, the great filter, and so on. His comments regarding common discussion of inequality are all of a kind with the whole of his 'Elephant in the Brain' work: noticing weird potential hypocrisies in others. I don't know how to easily summarize the level of his intellectual impact on the world, so I'll stop here.

It seems like there have been a couple of (2-4) news articles taking potshots at Hanson for his word choices, off the back of an angry mob, and this is just going to be a fairly standard worry for even mildly interesting or popular figures, given that the mob is going after people daily on Twitter. (As the OP says, not everyone, but anyone.)

It seems to me understandable if some new group like EA Munich (this was one of their first events?) feels out of their depth when trying to deal with the present-day information and social media ecosystem, and that's why they messed up. But overall this level of lack of backbone mustn't be the norm, else the majority of interesting thinkers will not be interested in interacting with EA. I am less interested in contributing to and collaborating with others in the EA community as a result of this. I mean, there are lots of things I don't like that are just small quibbles, which is your price for joining, but this kind of thing strikes at the basic core of what I think is necessary for EA to help guide civilization in a positive direction, as opposed to being some small cosmetic issue or personal discomfort.

Also, it seems to me like it would be a good idea for the folks at EA Munich to re-invite Robin to give the same talk, as a sign of goodwill. (I don't expect they will and am not making a request, I'm saying what it seems like to me.)

Comment by ben-pace on Hiring engineers and researchers to help align GPT-3 · 2020-10-07T18:31:58.348Z · score: 2 (3 votes) · EA · GW

Yeah. Well, it's not that they cannot be posted, but that they will not be frontpaged by the mods, and will instead be kept in the personal blog / community section, which has less visibility.

Added: As it currently says on the About page:

Community posts

Posts that focus on the EA community itself are given a "community" tag. By default, these posts will be hidden from the list of posts on the Forum's front page. You can change how these posts are displayed by using...

Comment by ben-pace on Open Communication in the Days of Malicious Online Actors · 2020-10-07T03:19:32.671Z · score: 4 (3 votes) · EA · GW

Thanks, I found this post to be quite clear and a helpful addition to the conversation.

Comment by ben-pace on If you like a post, tell the author! · 2020-10-06T18:27:17.075Z · score: 15 (12 votes) · EA · GW

(I like this post.)

Comment by ben-pace on Sign up for the Forum's new email digest · 2020-10-05T16:26:31.193Z · score: 2 (1 votes) · EA · GW

You can subscribe with RSS by using the "Subscribe (RSS)" button at the bottom of the left menu on the frontpage.
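If you'd rather read the feed programmatically, here's a minimal Python sketch using the feedparser library. The feed URL below is my guess at what the button links to, not something confirmed here; the "Subscribe (RSS)" button gives the exact URL, including any filter parameters.

```python
# A minimal sketch of reading the Forum's RSS feed in Python.
# The URL below is an assumption; check the "Subscribe (RSS)"
# button for the exact feed URL and filter parameters.
import feedparser

feed = feedparser.parse("https://forum.effectivealtruism.org/feed.xml")
for entry in feed.entries[:5]:
    print(entry.title, "-", entry.link)
```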

Comment by ben-pace on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T19:25:34.269Z · score: 3 (2 votes) · EA · GW

(Yes, I'm pretty sure this is the standard way to use those terms.)

Comment by ben-pace on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T01:56:13.005Z · score: 13 (8 votes) · EA · GW

I find Big 5 correlates very interesting, so thanks for doing this! The graphs make it very easy to see the differences.

Comment by ben-pace on Suggestion that Zvi be awarded a prize for his COVID series · 2020-09-24T21:24:23.620Z · score: 28 (12 votes) · EA · GW

For those who don't know Zvi's series, it has come out weekly and included case numbers, graphs, and analysis of that week's news. Here's a few:

Plus some general analysis, like Seemingly Popular Covid-19 Model is Obvious Nonsense, and Covid-19: My Current Model, which was a major factor in my choosing to stop cleaning all my packages and groceries, to stop putting takeout food in the oven for 15 minutes, and to feel safe being outdoors.

His 9/10 update on Vitamin D also caused me to make sure my family started taking Vitamin D, which is important because one of them has contracted the virus.

Comment by ben-pace on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-19T23:39:17.861Z · score: 3 (2 votes) · EA · GW

Do you mean CS or ML? Because (I believe) ML is an especially new and 'flat' field where it doesn't take as long to get to the cutting edge, it probably isn't representative.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T01:31:04.218Z · score: 5 (3 votes) · EA · GW

Yeah, I agree about how much variance in productivity is available; your numbers seem more reasonable. I'd actually edited it by the time you wrote your comment.

Also agree last year was probably unusually slow all round. I expect the comparison is still comparing like-with-like.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T00:16:58.586Z · score: 7 (4 votes) · EA · GW

I read the top comment again after reading this comment by you, and I think I understand the original intent better now. I was mostly confused on initial reading, and while I thought SLG's comment was otherwise good and I had a high prior on the intent being very cooperative, I couldn't figure out what the first line meant other than "I expect I'm the underdog here". I now read it as saying "I really don't want to cause conflict needlessly, but I do care about discussing this topic," which seems pretty positive to me. I am pretty pro SLG writing more comments like this in future when it seems to them like an important mistake is likely being made :)

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-19T00:05:12.181Z · score: 9 (8 votes) · EA · GW

By the way, I also was surprised by Rob only making 4 videos in the last year. But I actually now think Rob is producing a fairly standard number of high-quality videos annually.

The first reason is that (as Jonas points out upthread) he also did three for Computerphile, which brings his total to 7.

The second reason is that I looked into a bunch of top YouTube individual explainers, and I found that they produce a similar number of highly-produced videos annually. Here's a few:

  • 3Blue1Brown has 10 highly produced videos in the last year (1, 2, 3, 4, 5, 6, 7, 8, 9, 10). He has other videos, which include a video of Grant taking a walk, a short footnote video to one of the main ones, 10 lockdown livestream videos, and a video turning someone's covid blogpost into a video. For highly produced videos, he's averaging just under 1/month.
  • CGP Grey has 10 highly produced videos in the last year (1, 2, 3, 4, 5, 6, 7, 8, 9, 10). He has other videos, which include a video of CGP Grey taking a walk, a few videos of him exploring a thing like a spreadsheet or an old building, and one or two commentaries on other videos of his.
  • Vi Hart at her peak made 19 videos in one year (her first year, 9 years ago), all of which I think were of a similar quality level to each other.
  • Veritasium has 14 highly produced videos in the last year, plus one short video of the creator monologuing after their visit to NASA.

CGP Grey, 3Blue1Brown and Veritasium are, I believe, working on their videos full time, so I think around 10 main videos plus assorted extra pieces is within the standard range for highly successful explainers on YouTube. I think this suggests Rob could potentially make more videos to fill out the space between the main videos on his channel, like Q&A livestreams and other small curiosities that he notices, and could plausibly produce a couple more of the main, highly-produced videos each year.

But I know he does a fair bit of other work outside of his main channel, and he is also in some respects doing a harder task than some of the above: explaining ideas from a new research field, and one with a lot of ethical concerns around the work, not just issues of how to explain things well, which I expect increases the amount of work that goes into the videos.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T20:46:41.840Z · score: 7 (5 votes) · EA · GW

:)  Appreciated the conversation! It also gave me an opportunity to clarify my own thoughts about success on YouTube and related things.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T18:02:29.182Z · score: 5 (5 votes) · EA · GW

Thx!

Following up, and sorry for continuing to critique after you already politely made an edit, but doesn't that change your opinion on the object-level thing, which is indeed the phenomenon Scott's talking about? It's great to send signals of cooperativeness and genuineness, and I appreciate So-Low Growth's effort to do so, but adding in talk of how the concern is controversial is the standard example of opening a bravery debate.

The application of Scott's post here would be to separate clarification of intent from bravery talk – in this situation, separating "I don't intend any personal attack on this individual" from "My position is unpopular". Again, it's not the intention that's in question, it's the topic, and that's the phenomenon Scott's discussing in his post.

Comment by ben-pace on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-18T17:44:21.650Z · score: 5 (3 votes) · EA · GW

I'd heard that the particular journal had quite a high quality bar. Do you have a sense of whether that's true, or of how hard it is to get into that journal? I guess we could just check the number of PhD students who get published in an edition of the journal to make the comparison.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:59:20.301Z · score: 40 (13 votes) · EA · GW

I think one of the things Rob has that is very hard to replace is his audience. Overall I continue to be shocked by the level of engagement Rob Miles' YouTube videos get. Averaging over 100k views per video! I mostly disbelieve that it would be plausible to hire someone who can (a) understand technical AI alignment well, and (b) reliably create YouTube videos that get over 100k views, for less than something like an order of magnitude higher cost.

I am mostly confused about how Rob gets 100k+ views on each video. My mainline hypothesis is that Rob has successfully built his own audience through his years of videos, including those on channels like Computerphile, and that this audience has followed him to his own channel.

Building an audience like this takes many years and often does not pay off. Once you have a massive audience that cares about the kind of content you produce, it is very quickly not replaceable, and I expect that finding someone other than Rob to do this would either take the person 3-10 years to build an audience of this size, or require paying a successful YouTube content creator to change the videos they are making substantially, in a way that risks losing their audience, and would thus require a lot of money to cover the risk (I'm imagining $300k–$1mil per year for the first few years).

Another person to think of here is Tim Urban, who writes Wait But Why. That blog has, I think, produced zero major writeups in the last year, but he has a massive audience that knows him and is very excited to read his content in detail, which is valuable and not easily replaceable. If it were possible to pay Tim Urban to write a piece on a technical topic of your choice, it would be exceedingly widely read in detail, and would be worth a lot of money even if he didn't publish anything else for a whole year.

Comment by ben-pace on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:43:04.465Z · score: 5 (3 votes) · EA · GW

I want to add that Scott isn't describing a disingenuous argumentative tactic, he's saying that the topic causes dialogue to get derailed very quickly. Analogous to the rule that bringing in a comparison to Nazis always derails internet discussion, making claims about whether the position one is advocating is the underdog or the mainstream also derails internet discussion.

Comment by ben-pace on Sign up for the Forum's new email digest · 2020-09-17T18:47:15.807Z · score: 4 (2 votes) · EA · GW

Huh, that's a nice idea. And of course a straightforward "filter for posts I've read".

Comment by ben-pace on Some thoughts on EA outreach to high schoolers · 2020-09-13T23:23:02.043Z · score: 20 (13 votes) · EA · GW

FWIW I found and read the sequences when I was about 14, and went to a CFAR workshop before uni. I think if these things had happened later they'd have been less impactful for me in a number of ways.

Comment by ben-pace on Asking for advice · 2020-09-10T19:17:22.360Z · score: 2 (1 votes) · EA · GW

You're welcome :) 

I don't want to claim it happens regularly, but it happens often enough that it's become salient to me that I may spend all this time planning for and around the meeting and then have it be wasted effort, such that there's some consistent irritation cost to me in interacting with Calendlys.

But now that I've put some of my concerns into words, I think I'll generally like interacting with Calendly more, as I'll notice when I'm feeling this particular worry and deal with it more proactively. As I said, I think it's a great tool and I'm glad it exists.

Comment by ben-pace on Asking for advice · 2020-09-09T19:31:06.062Z · score: 5 (5 votes) · EA · GW

My feeling is both that it's a great app and that sometimes I'm irritated when the other person sends me theirs.

If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.

I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:

  • I will book a time, then in a few days they will tell me actually the time doesn't work for them and I should pick again (this is a world where I had made plans around the meeting time and they hadn't)
  • I will book a time, and just before the meeting they will email to say they hadn't realised when I'd booked it and actually they can't make it and need to reschedule, and they will feel this is calendly's fault far more than theirs
  • I will book a time, and they won't show up, or will show up late, and feel that they don't hold much responsibility for this, thinking of it as a 'technical failure' on Calendly's part.

All of these are quite irritating and feel like I'm the one holding my schedule open for them, right up until it turns out they can't make it.

I think I might be happier if there were an explicit and expected part of the process where the other person confirms they are aware of the meeting and will show up, either by emailing to say "I'll see you at <time>!" or by having to click "going" on the calendar invitation so that I get a notification saying "They confirmed", with the meeting only then 'officially happening'.

Having written this out, I may start pinging people for confirmation after filling out their Calendlys...

Comment by ben-pace on Does Economic History Point Toward a Singularity? · 2020-09-07T19:05:03.684Z · score: 11 (7 votes) · EA · GW

This is one of my favorite comments on the Forum. Thanks for the thorough response.

Comment by ben-pace on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T18:12:10.218Z · score: 17 (17 votes) · EA · GW

It sends public signals that you'll submit to blackmail and that you think people shouldn't affiliate with the speaker. The former has strong negative effects on others in EA, because they'll face increased blackmail threats, and the latter has negative effects on the speaker and their reputation, which in turn makes interesting speakers less likely to want to speak with EA, because they expect EA will submit to blackmail about them if an online mob decides to train its crosshairs on them one day.

Comment by ben-pace on How are the EA Funds default allocations chosen? · 2020-08-12T17:21:26.402Z · score: 3 (2 votes) · EA · GW

Interesting. Thank you very much.

Comment by ben-pace on How are the EA Funds default allocations chosen? · 2020-08-11T17:13:49.961Z · score: 11 (5 votes) · EA · GW

This seems to be a coincidence. Less than 10% of total donation volume is given according to the default allocation.

I roll to disbelieve? Why do you think this? Like, even if there’s slight variation I expect it’s massively anchored on the default allocation.

Comment by ben-pace on Donor Lottery Debrief · 2020-08-09T23:05:05.266Z · score: 3 (2 votes) · EA · GW

I think a lot of people in the Bay lack funding.

Comment by ben-pace on The 80,000 Hours job board is the skeleton of effective altruism stripped of all misleading ideologies · 2020-08-08T06:54:23.948Z · score: 9 (7 votes) · EA · GW

I'm not Oli, but jotting down some of my own thoughts: I feel like the job board gives a number of bits of useful selection pressure about which orgs are broadly 'helping out' in the world; out of all the various places people go in their careers, it's directing a bit of energy towards some better ones. It's analogous to helping raise awareness of which foods are organic or something, which is only a little helpful for the average person, but creating that information can be pretty healthy for a massive population. I expect 80k was motivated to make the board because such a large number of people wanted their advice, and they felt that this was an improvement on the margin that would have a large effect if thousands of people tried to follow the advice.

Just as I wouldn't expect starting to eat organic food to be a massive change to your health, I wouldn't suddenly become excited about someone and their impact if they became the 100th employee at Johns Hopkins or the marginal civil servant in the UK government.

In fact (extending this analogy to its breaking point), nutrition is an area where it's hard to give general advice: the data mostly comes from low-quality observational studies, and the truth is you have to do a lot of self-experimentation and build your own models of the domain to get any remotely confident beliefs about your own diet and health. Similarly, I'm excited by people who try a lot of their own projects and have some successes at weird things, like forming a small team and creating a very valuable product that people pay a lot of money for, or doing weird but very insightful research (like Gwern or Scott Alexander, to give obvious examples, but also things like this, which takes 20 hours and falsifies a standard claim from psychology) – people who figure out for themselves what's valuable and try very very hard to achieve it directly, without waiting for others to give them permission.

Comment by ben-pace on EA Forum update: New editor! (And more) · 2020-08-04T17:23:13.428Z · score: 6 (3 votes) · EA · GW

Narrator: “He was right.”

Comment by ben-pace on A list of good heuristics that the case for AI X-risk fails · 2020-07-16T21:36:45.899Z · score: 9 (5 votes) · EA · GW

(I would include the original author’s name somewhere in the crosspost, especially at the top.)

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-26T01:48:18.814Z · score: 15 (5 votes) · EA · GW

+50 points for making UI mockups, makes it much more likely to get the feature.

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-24T22:00:26.876Z · score: 2 (1 votes) · EA · GW

Hah! You're forgiven. I've seen this sort of thing a lot from users.

Comment by ben-pace on EA Forum feature suggestion thread · 2020-06-20T16:50:37.959Z · score: 4 (2 votes) · EA · GW

The new editor has this! :)

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T07:28:55.983Z · score: 2 (1 votes) · EA · GW
Could you suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay?

Sure thing. The Money: The Unit of Caring post is more like a meditation on a theme, very well written but less a key insight than an impression of a harsh truth, so it's hard to extract a core argument. I'd suggest the following from the Fuzzies/Utilons post instead. (It has about a paragraph cut in the middle, symbolised by the ellipsis.)

--

If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:

  • To purchase warm fuzzies, find some hard-working but poverty-stricken woman who's about to drop out of state college after her husband's hours were cut back, and personally, but anonymously, give her a cashier's check for $10,000.  Repeat as desired.
  • To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price.  Make a big deal out of it, show up for their press events, and brag about it for the next five years.
  • Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar.  Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.

But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time... Of course, if you're not a millionaire or even a billionaire—then you can't be quite as efficient about things, can't so easily purchase in bulk.  But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries.  Volunteer at a soup kitchen.  Or just get your warm fuzzies from holding open doors for little old ladies.  Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons.  Status is probably cheaper to purchase by buying nice clothes.

And when it comes to purchasing expected utilons—then, of course, shut up and multiply.
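To make the allocation rule in that excerpt concrete, here's a minimal Python sketch of the greedy procedure Eliezer describes: fund the highest marginal-utilons-per-dollar option first, and move to the next charity once marginal efficiency drops below it. The charity names, tier sizes, and rates are all made up for illustration, and the tiers are just a crude way to model diminishing returns.

```python
def allocate(budget, tiers):
    """Greedy allocation: fund the highest marginal-utilons-per-dollar tier
    first, moving down the list as each tier's capacity is exhausted.

    tiers: list of (charity_name, capacity_in_dollars, utilons_per_dollar),
    where later tiers for the same charity have lower rates, reflecting
    diminishing returns. Returns {charity_name: dollars_allocated}.
    """
    allocation = {}
    # Sort all tiers by marginal efficiency, best first.
    for name, capacity, rate in sorted(tiers, key=lambda t: t[2], reverse=True):
        if budget <= 0:
            break
        given = min(budget, capacity)
        allocation[name] = allocation.get(name, 0) + given
        budget -= given
    return allocation

# Hypothetical numbers, purely for illustration:
print(allocate(100_000, [
    ("CharityA", 50_000, 3.0),   # first $50k buys 3 utilons per dollar
    ("CharityA", 50_000, 1.5),   # returns diminish after that
    ("CharityB", 80_000, 2.0),
]))
# -> {'CharityA': 50000, 'CharityB': 50000}
```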

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-10T07:16:12.117Z · score: 2 (1 votes) · EA · GW
If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

I see. As I hear you, it's not that we must go overboard on avoiding atheism, but that it's a small-to-medium sized feather on the scales that is ultimately decision-relevant because there is not an appropriately strong feather arguing this essay deserves the space in this list.

From my vantage point, there aren't essays in this series that deal with giving up hope as directly as this essay does. I think Singer's piece and the Max Roser piece both try to look at awful parts of the world and argue you should do more, to make progress happen faster. Many essays, like the quote from Holly about being in triage, talk about the current rate of deaths and how to reduce that number. But I think none engage so directly with the possibility of failure, of progress stopping and never starting again. I think existential risk is about this, but I think you don't even need to get to a discussion of things like maxipok and astronomical waste to bring failure onto the table in a visceral and direct way.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-09T23:35:55.022Z · score: 13 (5 votes) · EA · GW
As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

I think that if the essay said things like "religious people are stupid, isn't it obvious?" and attempted to do social shaming of religious people, then I'd be pretty open to suggesting edits to such parts.

But as in my other comment, I would like to respect religious people enough to trust that they can read writing about a godless universe and understand the points well, even if they would use other examples themselves.

I also think many religious people agree that God will not stop the world from becoming sufficiently evil, in which case they'll be perfectly able to appreciate the finer points of the post even though it's written in a way that misunderstands their relationship to their religion.

I think either way, if they're going to engage seriously with intellectual thought in the modern world, they need to take responsibility and learn to engage with writing that doesn't assume there's an interventionist aligned superintelligence (my terminology; I don't mean anything by it). I don't think it's right to walk on eggshells around religious people, and I don't think it makes sense to throw out powerful ideas and pieces of strongly emotional/artistic work to make sure such people don't need to learn to engage with art and ideas that don't share their specific assumptions about the world.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there).

Checks out, that makes sense.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-09T23:35:06.598Z · score: 9 (2 votes) · EA · GW

*nods* I'll respond to the specific things you said about the different essays. I split this into two comments for length.

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents

I think there's a few pieces of jargon that you could change (e.g. Unit of Caring talks about 'akrasia', which isn't relevant). I imagine it'd be okay to request a few small edits to the essay.

But I think that overall the posts talk the way experts would talk in an interview: directly and substantively. I don't think you should be afraid to show people a high-level discussion just because they don't know all of the details being discussed already. It's okay for there to be details that a reader has only a vague grasp on, if the overall points are simple and clear – I think this is good, as it helps readers see that there are levels above to reach.

It's like how EA student group events would always be "Intro to EA". Instead, I think it's really valuable and exciting to hear how Daniel Kahneman thinks about the human mind, or how Richard Feynman thinks about physics, or how Peter Thiel thinks about startups, even if you don't fully understand all the terms they use like "System 1 / System 2" or "conservation law" or "derivatives market". I would give the Feynman lectures to a young teenager who doesn't know all of physics, because he speaks in a way that gets to the essential life of physics so brilliantly, and I think that giving it to a kid who is destined to become a physicist will leave the kid in wonder and wanting to learn more.

Overall I think the desire to remove challenging or nuanced discussion is a push in the direction of saying boring things, or of not saying anything substantive at all because it might be a turn-off to some people. I agree that Paul Graham's essays are always written in simple language, but I don't think that scientists and intellectuals should aim for that all the time when talking to non-specialists. Many of the greatest pieces of writing I know use very technical examples or analogies, and that's necessary to make their points.

See the graph about dating strategies here. The goal is to get strong hits that make a person say "This is one of the most important things I've ever read", not to make sure that there are no difficult sentences that might be confusing. People will get through the hard bits if there are true gems there, and I think the above essays are quite exciting and deeply change the way a lot of people think.

Comment by ben-pace on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-06-05T02:54:38.133Z · score: 4 (2 votes) · EA · GW

Congratulations! I'm happy to hear that.

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-04T19:21:46.096Z · score: 8 (4 votes) · EA · GW

That makes me happy to hear :)

Comment by ben-pace on EA Handbook, Third Edition: We want to hear your feedback! · 2020-06-03T21:20:39.527Z · score: 33 (10 votes) · EA · GW

I thought a bit about essays that were key to my becoming more competent and more able to take action in the world to improve it, in ways that connected to what I cared about. I'll list some and the ways they helped me. (I filled out the rest of the feedback form too.)

---

Feeling moral by Eliezer Yudkowsky. Showed me an example where my deontological intuitions were untrustworthy and that simple math was actually effective.

Purchase Fuzzies and Utilons Separately by Eliezer. Showed me where attempts to do good can get very confused, and how simply looking at outcomes can avoid a lot of the problems that come from reasoning by association or by what's 'considered a good idea'.

Ends Don't Justify Means (Among Humans) by Eliezer. Helped me understand a very clear constraint on naive utilitarian reasoning, which kept me from naively trusting the math in all situations.

Dive In by Nate Soares. Helped point my flailing attempts to improve and do better in a direction where I would actually get feedback. Only by repeatedly delivering a product, even if you change your mind ten times a day about what you should be doing and whether it is valuable, can you build up real empirical data about what you can accomplish and what's valuable. Encouraged me to follow through on projects a whole lot more.

Beyond the Reach of God by Eliezer. This helped ground me; it helped me point at what it's like to have false hope and false trust, and to recognise them more clearly in myself. I think it's accurate to say that looking directly and with precision at the current state of the world involves trusting the world a lot less than most people do, and a lot less than establishment narratives would suggest (Steven Pinker's "everything is getting better and will continue to get better" isn't the right way to conceptualise our position in history; there's much more risk involved than that). A lot of important improvements in my ability to improve the world have involved realising I had unfounded trust in people or institutions, and realising that unless I took responsibility for things myself, I couldn't trust that they would work out well by default. This essay was one of the first places where I clearly conceptualised what false hope feels like.

Money: The Unit of Caring by Eliezer. Similar things to the Fuzzies and Utilons post, but a bit more practical. And Kelsey named her whole Tumblr after this, which I guess is a fair endorsement.

Desperation by Nate. This does similar things to Beyond the Reach of God, but in a more hopeful way (although it's called 'Desperation', so how hopeful can it be?). It helped me conceptualise what it looks like to actually try to do something difficult that people don't understand or that looks funny to them, and to notice whether or not it was something I had been doing. It also helped me notice (more cynically) that a lot of people weren't doing things that looked like this, and to not try to emulate those kinds of people so much.

Scope Insensitivity by Eliezer. Similar things to Feeling Moral, but a bit simpler / more concrete and tries to be actionable.

--

Some that I came up with that you already included:

  • On Caring
  • 500 Million, But Not A Single One More

It's odd that you didn't include Scott Alexander's classic on Efficient Charity or Eliezer's Scope Insensitivity, although Nate's "On Caring" is maybe sufficient to get the point about scope and triage across.

Comment by ben-pace on EA Forum: Data analysis and deep learning · 2020-05-13T05:45:10.618Z · score: 3 (4 votes) · EA · GW

This is awesome.

Comment by Ben Pace on [deleted post] 2020-04-03T21:19:27.887Z

+1

Comment by ben-pace on April Fool's Day Is Very Serious Business · 2020-04-02T07:04:54.412Z · score: 4 (2 votes) · EA · GW

I'm sorry, I've been overwhelmed with things lately, I didn't get round to it. But please do something similar next year!

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T20:07:53.352Z · score: 4 (3 votes) · EA · GW

I think your questions are great. I suggest that you leave 7 separate comments so that users can vote on the ones that they’re most interested in.

Comment by ben-pace on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T17:57:01.891Z · score: 2 (1 votes) · EA · GW

This is such an odd question. Could produce surprising answers though, if it’s something like “the least interesting ideas that people still took seriously” or “the least interesting ideas that are still a little bit interesting”. Upvoted.