Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T23:32:11.760Z · score: 5 (3 votes) · EA · GW

Thx.

My sense is you might get more of the experience you want using ea.greaterwrong.com, which doesn't require JavaScript, is focused on speed, and has a lot of customisation options. It mirrors all of the same content.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T23:28:35.800Z · score: 6 (4 votes) · EA · GW
Ok, that's quite a lot more helpful than I'd realised - why not make it more prominent though?

You can get it by clicking "view all posts" at the bottom of the recent posts list on the frontpage. As you can see on LessWrong (which this site is a clone of), it's also shown permanently, and even more prominently, on the left side of the screen. The folks working on this site have slightly different site goals and haven't included that (yet).

I mainly used the 'top posts in <various time periods>' option (typically the 1 or 3 month options, IIRC); median time between visits was probably something like 1-3 months, so that fit pretty well.

Interesting. I realise there's a class of users who visit with that regularity and want to see the highlights from the past couple of months. On LW we have the curated section which does this sort of thing, but the EA Forum doesn't, so I guess it'd be especially useful here. This does move it up my priorities list quite a bit. Thx.

even on the old forum I strongly wished for a way to filter by subject... my favourite forums for UX were probably the old phpBB style ones, where you'd have forums devoted to arbitrarily many subtopics

My teammate Oli Habryka has strong opinions here, I'll let him write stuff if he has time. Current plan is to not do this anytime soon.

I agree following users is important.

Often a friend would link me to a post that had already been around for a week or two when I read it.

In general I myself keep interesting-looking tabs open for a while, and if I close them unread I know there's no easy way to get back to them. I agree many sites are more static than this Forum - compare Hacker News to SlateStarCodex, where I can see all the SSC posts from the past few months listed on one screen, whereas on HN I can't even see all the posts from the last hour. But for the majority of places I'm interested in, if I don't save the link prominently or recall the title clearly, I'll lose them too, so I'm surprised this problem is more prominent for you with this Forum.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T20:54:16.996Z · score: 15 (4 votes) · EA · GW

Here's an editor guide I just updated.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T20:14:57.224Z · score: 9 (3 votes) · EA · GW

A year ago I did write a little editor guide, but many parts of it quickly went out of date. I'll post to the Forum if I update it.

Edit: I updated it.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T20:12:47.607Z · score: 13 (3 votes) · EA · GW

Gotcha. Not being able to easily copy in from G-Docs, and footnotes + pictures being lots of work.

Chatting with the team, their sense is that copy-pasting footnotes is very unlikely to ever work between editors (e.g. I don't expect footnotes to be copied functionally into MS Word, Dropbox Paper, or any other editor you might use). If that's the case, I would like to build the ability to do a direct import from G-Docs, which would solve these problems.

Also agree with the images. The big thing we don't do right now is host images, which means you have to upload them to the internet yourself then put the URL into our editor.

The current state of the plan is to do a big overhaul of the editor framework either this quarter or next, where I expect us to spend time on these issues and others. In general we found that making small edits to the current editor for things like this was too costly in both the short and long run, and we'd also prefer an editor a bit more like Google Docs in a bunch of ways.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T16:31:47.010Z · score: 15 (4 votes) · EA · GW

Can you say more about what you find frustrating about using the editor/posting? Am also interested to know if you find it better/worse than the old site.

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T16:23:50.145Z · score: 8 (5 votes) · EA · GW

Thx for the post.

Re: searching for great posts, there is also an archive page where you can sort by top score and other options via the gear menu.

Can you say more about how you used the old forum? I’m hearing something like “A couple of times per year I’d look at the top-posts list and read new things there”. (I infer a couple of times per year because once you’ve done it once or twice I’d guess you’ve read all the top posts.) I think that’s still very doable using the archive feature.

Am also surprised that you lose posts. My sense is that for a post to leave the frontpage takes a couple of days to a week. Do you keep tabs open that long? Or are you finding the posts somewhere else?

Comment by ben-pace on I find this forum increasingly difficult to navigate · 2019-07-05T16:14:00.179Z · score: 16 (9 votes) · EA · GW

Ta for trying to generally make the Forum a nicer place, Michelle. That said, I want to say that in this case, for me, I had zero negative experiences reading the post, and the line “The latest version has reached the point where I just don't see the point of visiting the forum any more” was the most useful part of the post for me. I’ve not heard anyone tell me the new Forum is unusable for them, and I’m interested in further (unfiltered) info from Arepo + others (though I don’t have a lot of time to engage).

(@everyone else, in case it’s not apparent, I’m part of the LW team who created the codebase for the new site)

Comment by ben-pace on X-risks of SETI and METI? · 2019-07-03T17:39:07.231Z · score: 6 (2 votes) · EA · GW

The obvious related paper is Bostrom’s Where Are They? Why I Hope The Search For Extraterrestrial Life Finds Nothing. It argues not that the search itself would be an x-risk, but that finding advanced life in the universe would (via anthropics and the Fermi paradox) cause us to heavily update towards some x-risk being in our near future. Very interesting.

(Relatedly, Nick was interviewed on this paper for the last ~1/3rd of his interview on the Sam Harris podcast.)

Comment by ben-pace on Why did three GiveWell board members resign in April 2019? · 2019-05-22T19:02:02.723Z · score: 3 (2 votes) · EA · GW

I may be misremembering, but I have the cached belief that GiveWell records and publishes something like all of its meetings, including board meetings. If so, you could listen to the last board meeting to see how things stood.

Comment by ben-pace on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-21T21:02:42.605Z · score: 11 (4 votes) · EA · GW

A high quality podcast has been made (for free, by the excellent fanbase). It’s at www.hpmorpodcast.com.

Comment by ben-pace on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:41:44.322Z · score: 20 (8 votes) · EA · GW

I think this comment suggests there's a wide inferential gap here. Let me see if I can help bridge it a little.

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I feel fairly strongly that this goal is still important. I think that the most valuable resource the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it's not because he has the reasoning skills of a typical Math Olympiad winner. There are many levels of skill, and Nick Bostrom's is much higher[1].

It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society's massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they'd done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.

In general I think someone's ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA [2]. I don't think that any of the EA materials you mention helps people gain this skill. But I think for some people, HPMOR does.

I'm focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective here, when I look over the grants this feels to me like one of the 'safest bets'. I am interested to know whether this perspective makes the grant's intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.

---

[1] I am not sure exactly how widespread this knowledge is. Let me just say that it’s not Bostrom’s political skills that got him where he is. When the future-head-of-IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.

[2] Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again, I think the first part matters more for getting people to do the useful work you've not already figured out how to incentivise - I don't think we've figured it all out yet.

Comment by ben-pace on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T12:10:32.516Z · score: 8 (3 votes) · EA · GW

Ah yes, agree. I meant coordination, not collusion. Promotion also seems fine.

Comment by ben-pace on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T11:11:18.552Z · score: 3 (2 votes) · EA · GW

MIRI helped us know how much to donate and how much of a multiplier it would be, and updated this recommendation as other donors made their moves. I added something like $80 at one point because a MIRI person told me it would have a really cool multiplier, but not if I donated a lot more or a lot less.

Comment by ben-pace on Request for comments: EA Projects evaluation platform · 2019-03-22T22:49:39.293Z · score: 12 (4 votes) · EA · GW

I imagined Alex was talking about the grant reports, which are normally built around “case for the grant” and “risks”. Example: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology

Comment by ben-pace on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T13:49:07.992Z · score: 17 (6 votes) · EA · GW

I haven't yet finished thinking about how the EA Forum Team should go about doing this, given their particular relationship to the site's members, but here's a few thoughts.

I think, for a platform to be able to incentivise long-term intellectual progress in a community, it's important that there are individuals trusted on the platform to promote the best content to a place on the site that is both lasting and clearly more important than other content, as I and others have done on the AI Alignment Forum and LessWrong. Otherwise the site devolves into a news site, with a culture that depends on who turns up that particular month.

I do think the previous incarnation of the EA Forum was much more of a news site, where the most activity occurred when people turned up to debate the latest controversy posted there, and that the majority of posts and discussion on the new Forum are much more focused on the principles and practice of EA, rather than conflict in the community.

(Note that, while it is not the only or biggest difference, LessWrong and Hacker News both have the same sorting algorithm on their posts list, yet LW has the best content shown above the recent content, and thus is more clearly a site that rewards the best content over the most recent content.)

It's okay to later build slower and more deliberative processes for figuring out what gets promoted (although you must move much more quickly than the present-day academic journal system, and with more feedback between researchers and evaluators). I think the Forum's monthly prize system is a good way to incentivise good content, but it crucially doesn't ensure that the rewarded content will continue to be read by newcomers 5 years after it was written. (Added: And similarly, current new EAs on the Forum are not reading the best EA content of the past 10 years, just the most recent content.)

I agree it's good for members of the community to be able to curate content themselves. Right now anyone can build a sequence on LessWrong, and then the LW team moves some of them up into a curated section, which later gets highlighted on the front page (see the library page, which will become more prominent on the site after our new frontpage rework). I can imagine this being an automatic process based on voting, but I have an intuition that it's good for humans to be in the loop. One reason is that when humans make decisions, you can ask why, but when 50 people vote, it's hard to interrogate that system as to the reason behind its decision, and improve its reasoning the next time.

(Thanks for your comment Brian, and please don't feel any obligation to respond. I just noticed that I didn't intuitively agree with the thrust of your suggestion, and wanted to offer some models pointing in a different direction.)

Comment by ben-pace on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T23:36:10.372Z · score: 19 (11 votes) · EA · GW

I did spend a day or two collating some potential curated sequences for the forum.

  • I still have a complete chronological list of all public posts between Eliezer and Holden (&friends) on the subject of Friendly AI, which I should publish at some point
  • I spent a while reading through the work of people like Nick Bostrom and Brian Tomasik (I didn't realise how much amazing stuff Tomasik had written)
  • I found a bunch of old EA blogs by people like Paul Christiano, Carl Shulman, and Sam Bankman-Fried that would be good to collate the best pieces from
  • I constructed mini versions of things like the Sequences, the Codex, and Owen Cotton-Barratt's excellent intro to EA (Prospecting for Gold) as ideas for curated sequences on the Forum.

I think it would be good from a long-term community norms standpoint to know that great writing will be curated and read widely.

Alas, CEA did not seem to have the time to work through any sequences (it seemed like there were a lot of worries about what signals the sequences would send, and working through those worries was very slow going). At some point, if this ever gets going again, it would be good to have a discussion pointing to any good old posts that should be included.

Comment by ben-pace on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T19:30:48.527Z · score: 4 (4 votes) · EA · GW

+1, a friend of mine thought it was an official statement from CEA when he saw the headline, and was thoroughly surprised and confused

Comment by ben-pace on EA grants available to individuals (crosspost from LessWrong) · 2019-02-07T22:00:34.924Z · score: 2 (2 votes) · EA · GW

(Your crossposting link goes to the edit page of your post, not the post itself.)

Comment by ben-pace on EA Forum Prize: Winners for December 2018 · 2019-01-30T23:44:27.250Z · score: 2 (2 votes) · EA · GW

Woop! Congrats to all the prize winners. Great posts!

Comment by ben-pace on Simultaneous Shortage and Oversupply · 2019-01-29T07:52:49.329Z · score: 4 (3 votes) · EA · GW

Conceptually related: SSC on Joint Over- and Underdiagnosis.

Comment by ben-pace on Disentangling arguments for the importance of AI safety · 2019-01-24T20:07:43.733Z · score: 2 (2 votes) · EA · GW

I think this is a good comment about how the brain works, but do remember that the human brain can both hunt in packs and do physics. Most systems you might build to hunt are not able to do physics, and vice versa. We're not perfectly competent, but we're still general.

Comment by ben-pace on The Global Priorities of the Copenhagen Consensus · 2019-01-08T08:00:26.045Z · score: 4 (4 votes) · EA · GW

+1 on being confused, I've heard good things about CC. Just now checking the Wikipedia page, their actual priorities list is surprisingly close to GiveWell's priority lists (macronutrients, malaria, deworming, and then further down cash transfers) - and I see Thomas Schelling was on the panel! In particular he seems to have criticised the use of discount rates in evaluating the impact of climate change (which sounds close to an x-risk perspective).

I would be interested in a write-up from anyone who looked into it and made a conscious choice to not associate with / to not try to coordinate with them, about why they made that choice.

Comment by ben-pace on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-31T19:25:42.174Z · score: 2 (2 votes) · EA · GW

+1 Distill is excellent and high-quality, and plausibly has important relationships to alignment. (FYI some of the founders lately joined OpenAI, if you're figuring out which org to put it under, though Distill is probably its own thing).

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-23T07:11:53.034Z · score: 2 (2 votes) · EA · GW

That all makes a lot of sense! Thanks.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:18:20.690Z · score: 2 (2 votes) · EA · GW

I think it does; it's just unlikely to change it by all that much.

Imagine there are two donor lotteries, each of which has had $40k donated to it: one where lots of the people in the lottery are, you think, very thoughtful about what projects to donate to, and one where lots of them are not. You're considering which to add your $10k to. In either one the returns are good in expectation, purely based on you getting a 20% chance to 5x your donation (which is good if you think there are increasing marginal returns to money at this level), but in the other 80% of worlds you also have a preference for your money being allocated by people who are more thoughtful. (The arithmetic is sketched at the end of this comment.)

This isn't the main consideration - unless you think the other people will do something actively very harmful with the money. You'd have to think that the other people will (in expectation) do something worse with a marginal 10k than you giving away 10k does good.
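Here's a minimal sketch of that expected-value arithmetic in Python, using the hypothetical $40k/$10k figures from the example above (illustrative numbers only, not real donation data):

```python
# Hypothetical donor-lottery figures from the example above.
existing_pot = 40_000   # already donated by others
my_donation = 10_000    # my contribution

total_pot = existing_pot + my_donation
p_win = my_donation / total_pot        # 10k / 50k = 0.2, i.e. a 20% chance of winning
multiplier = total_pot / my_donation   # if I win, I allocate 5x my donation

# In expectation I still allocate exactly my donation...
expected_allocated_by_me = p_win * total_pot   # = 10,000

# ...but in the remaining 80% of worlds the pot is allocated by whoever else wins,
# which is where the thoughtfulness of the other participants matters.
p_someone_else_wins = 1 - p_win

print(p_win, multiplier, expected_allocated_by_me, p_someone_else_wins)
```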

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:12:26.878Z · score: 1 (1 votes) · EA · GW

I think there are busy people who will have the connections to make a good grant but won't have the time to write a full report. In fact, I think there are many competent people who are very busy.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T19:31:39.488Z · score: 12 (6 votes) · EA · GW

You're right that I had subtly become nervous about joining the donor lottery because "then I'd have to do all the work that Adam did". Thanks for reminding me I don't have to if it doesn't seem worth the opportunity cost, and that I can just donate to whatever seems like the best opportunity given my own models :)

Comment by ben-pace on Long-Term Future Fund AMA · 2018-12-19T22:53:07.557Z · score: 8 (5 votes) · EA · GW

I also think this sort of question might be useful to ask on a more individual basis - I expect each fund manager to have a different answer to this question that informs what projects they put forward to the group for funding, and which projects they'd encourage you to inform them about.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-28T14:42:57.851Z · score: 8 (4 votes) · EA · GW

Also it may be the case that, if someone who the grant-makers would be excited about had applied, they would have given them support, but there weren't such applicants. (Note that Bay Area biosec got the grant)

When I spoke to ~3 people about it in the Bay, none of them knew the grant existed or that there was an option for them to work on community building in the bay full time.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T22:04:51.875Z · score: 10 (8 votes) · EA · GW

CEA doesn't run any regular events, community spaces, or fund people to do active community building in the Bay that I know of, which seemed odd given the density of EAs in the area and thus the marginal benefit of increased coordination there.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T18:16:10.164Z · score: 3 (2 votes) · EA · GW

+1 this seems really quite weird

Comment by ben-pace on 2018 GiveWell Recommendations · 2018-11-26T19:07:05.572Z · score: 4 (3 votes) · EA · GW

(fyi, you don't need to add '[link]' to the title, the site does it automatically)

Comment by ben-pace on MIRI 2017 Fundraiser and Strategy Update · 2018-11-26T14:28:13.888Z · score: 2 (2 votes) · EA · GW

Yep, seems like a database error of sorts. Probably a site-admin should set the post back to its original post date which is December 1st 2017.

Comment by ben-pace on Takeaways from EAF's Hiring Round · 2018-11-20T21:56:04.532Z · score: 10 (4 votes) · EA · GW

I think that references are a big deal, and putting them off as a 'safety check' after the offer is made seems weird. That said, I agree they can be a blocker for applicants at the early stage - an applicant may want to ask a senior person to be a reference if they're seriously being considered, but not if they're not, and won't want to bet wrong.

Comment by ben-pace on Effective Altruism Making Waves · 2018-11-16T03:14:33.599Z · score: 2 (2 votes) · EA · GW

Do you have rough data on quantity of tweets over time?

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:23:09.618Z · score: 4 (3 votes) · EA · GW

I think it is easy to grow too early, and I think that many of the naive ways of putting effort into growth would be net negative compared to the counterfactual (somewhat analogous to a company that quickly makes 1 million when it might've made 1 billion).

Focusing on actually making more progress with the existing people, by building more tools for them to coordinate and collaborate, seems to me the current marginal best use of resources for the community.

(I agree that effort should be spent improving the community, I just think 'size' isn't the right dimension to improve.)

Added: I suppose I should link back to my own post on the costs of coordinating at scale.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:19:06.557Z · score: 2 (2 votes) · EA · GW

Bostrom has also cited him in his papers.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-13T23:17:07.890Z · score: 16 (8 votes) · EA · GW

there isn't even an organisation dedicated to growing the movement

Things that are not movements:

  • Academic physics
  • Successful startups
  • The rationality community

They all need to grow to some extent, but they have a particular goal that is not generic 'growth'. Most 'movements' are primarily looking for something like political power, and I think that's a pretty bad goal to optimise for. It's the perennial offer to all communities that scale: "try to grab political power". I'm quite happy to continue being for something other than that.

Regarding the size of the rationality and EA communities right now, this doesn't really seem to me like a key metric? A more important variable is whether you have infrastructure that sustains quality at the scale the community is at.

  • The standard YC advice says the best companies stay small for a long time. An example of Paul Graham saying it is here; search "I may be an extremist, but I think hiring people is the worst thing a company can do."
  • There are many startups that have 500 million dollars and 100 employees more than your startup, but don't actually have product-market fit, and are going to crash next year. Whereas you might work for 5-10 years and then have a product that can scale to several billions of dollars of value. Again, scaling right now seems shiny and appealing, but it's something you should often fight against.
  • Regarding growth in the rationality community, I think a scientific field is a useful analogue. If I told you I'd started some new field and within the first 20 years had gotten a research group into every university, would that necessarily be good? Am I machine learning? Am I bioethics? I bet all the fields that hit the worst of the replication crisis experienced fast growth at some point in the past 50 years. Regardless of intentions, the infrastructure matters, and it's not hard to simply make the world worse.

Other thoughts: I agree that the rationality project has resulted in a number of top people working on AI x-risk, effective altruism, and related projects, and that the ideas produced a lot of the epistemic bedrock for the community to be successful at noticing important and new ideas. I am also sad there hasn't been better internal infrastructure built in the past few years. As Oli Habryka said downthread (amongst some other important points), the org I work at that built the new LessWrong (and the AI Alignment Forum and EA Forum, which is evidence for your 'rationalists work on AI and EA' claim ;) ) is primarily trying to build community infrastructure.

Meta thoughts: I really liked the OP; it concisely brought up a relevant proposal and placed it clearly in the EA frame (Pareto principle, heavy-tailed outcomes, etc).

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-12T15:29:31.188Z · score: 1 (1 votes) · EA · GW

<unfinished>

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-10T00:42:32.917Z · score: 2 (2 votes) · EA · GW

*nods* I think what I wrote there wasn't very clear.

To restate my general point: I'm suggesting that your general frame contains a weird inversion. You're supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others' behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.

In the first one, you would be surprised to find out we've randomly been selected to have the right morality by evolution. In the second, it's almost definitional that evolution has produced us to have the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.

Does the former seem like an accurate description of the way you're proposing to think about morality?

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-09T20:46:55.054Z · score: 7 (7 votes) · EA · GW

It's been many years (about 6?) since I've read an argument like this, so, y'know, you win on nostalgia. I also notice that my 12-year-old self would've been really excited to be in a position to write a response to this, and given that I've never actually responded to this argument outside of my own head (and am unlikely to in the future), I'm going to do some acausal trade with my 12-year-old self here: below are my thoughts on the post.

Also, sorry it's so long, I didn't have the time to make it short.

I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here's what seems to me to be a key crux of the argument (I've bolded the key sentences):

It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
...[I]magine that moral reasons were all centred around maximising the number of paperclips in the universe. It’s not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons is more complicated, see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.

Object-level response: this is confused about how values come into existence.

The things I care about aren't written into the fabric of the universe. There is no clause in the laws of physics to distinguish what's good and bad. I am a human being with desires and goals, and those are things I *actually care about*.

For any 'moral' law handed to me on high, I can always ask why I should care about it. But when I actually care, there's no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking "Yeah, but why should I care about this?" These sorts of things I'm happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.

(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright's "The Moral Animal" was really great, and Joshua Greene's "Moral Tribes" is a slightly more abstract version that also contains some key insights about how morality actually works.)

My model of the person who believes the OP wants to say

"Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they're actually good?"

To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there's something else I should care about instead, the world just makes sense now.

To point to an example of the process turning out the other way: there's been a variety of updates I've made where I no longer trust or endorse basic emotions and intuitions, since a variety of factors have all pointed in the same direction:

  • Learning about scope insensitivity and framing effects
  • Learning about how the rate of economic growth has changed so suddenly since the industrial revolution (i.e. very recently in evolutionary terms)
  • Learning about the various Dutch book theorems and axioms of rational behaviour that imply a rational agent is equivalent to an expected-utility maximiser.

These have radically changed which of my impulses I trust, endorse, and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups of very different scales and failing at that goal, so I learn to ignore those and teach myself to do normative reasoning (e.g. intuitively taking orders of magnitude into account), because it's what I reflectively care about.

I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn't in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong evidential backing described above, isn't how this works.

Meta-level response: I don't trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I'm actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalisation: not the source of religion's understanding of meaning, but a post-facto justification.

Having not personally read any of his books, I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. But the most recent wave of this philosophy of religion stuff, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public-debater William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.

Here's some relevant quotes of Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):

…the way we know Christianity to be true is by the self-authenticating witness of God’s Holy Spirit. Now what do I mean by that? I mean that the experience of the Holy Spirit is… unmistakable… for him who has it; …that arguments and evidence incompatible with that truth are overwhelmed by the experience of the Holy Spirit…

…it is the self-authenticating witness of the Holy Spirit that gives us the fundamental knowledge of Christianity’s truth. Therefore, the only role left for argument and evidence to play is a subsidiary role… The magisterial use of reason occurs when reason stands over and above the gospel… and judges it on the basis of argument and evidence. The ministerial use of reason occurs when reason submits to and serves the gospel. In light of the Spirit’s witness, only the ministerial use of reason is legitimate. Philosophy is rightly the handmaid of theology. Reason is a tool to help us better understand and defend our faith…

[The inner witness of the Spirit] trumps all other evidence.

My impression is that it's fair to characterise modern apologetics as searching for arguments to provide in defense of their beliefs, and not as the cause of them, nor as an accurate model of the world. Recall the principle of the bottom line:

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.  If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing.  But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for.  In this case, the real algorithm is "Never repair anything expensive."  If this is a good algorithm, fine; if this is a bad algorithm, oh well.  The arguments you write afterward, above the bottom line, will not change anything either way.

My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality (and man, argument space is so big, just choosing which hypothesis to privilege is most of the work, so it's not even worth exploring the particular mistakes made once you've reached this conclusion).

This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument that was as severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and understanding their views, because those models have predicted lots of other really important stuff. With philosophy of religion, it is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor is it based in understanding some phenomenon of the world where it's actually made progress, but is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don't think it's worth spending time engaging with intellectually.

If you find yourself confused by a theologian's argument, I don't mean to say you should ignore that and pretend that you're not confused. That's a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting or useful; they will just turn out to be silly errors. I also don't expect the field of theology / philosophy of religion / apologetics to accept your result; I think there will be further confusions, and I think this is fine and correct and you should move on to other more important problems.

---

To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here, and did not mean to signal that I'd be any less likely than usual to respond to further comments in this thread :)

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T22:59:22.103Z · score: 1 (1 votes) · EA · GW

It has never occurred to me that pulling an all-nighter should imply eating more, though it seems like such a natural conclusion in retrospect (though I strongly avoid pulling all-nighters).

What's the actual reasoning? How does the body determine how much food it can take in, and where precisely does the energy expenditure come from? Movement? Cognitive work?

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T17:42:27.489Z · score: 5 (5 votes) · EA · GW

In general I moved from a model where the limiting factor was the absolute number of hours worked to one where it's the quality of the peak hours in the day, where (I believe) the latter is much higher variance and also significantly affected by not getting sufficient sleep. I moved from taking modafinil (which never helped me) to taking melatonin (which helps a lot), and always letting myself sleep in as much as I need. I think this has helped a lot.

Comment by ben-pace on EA Concepts: Share Impressions Before Credences · 2018-10-19T18:06:30.106Z · score: 3 (3 votes) · EA · GW

Yeah. As I've said before, it's good to be fully aware of what you understand, what model your inside view is using, and what credence it outputs, before and separately from any social updating of the decision-relevant credence. Or at least, this is the right thing to do if you want to have accurate models in the long run, rather than accurate decision-relevant credences in the short run.

Comment by ben-pace on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-09T06:53:53.965Z · score: 6 (5 votes) · EA · GW

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields.

Quite. I think my model of Eli is that he was setting the highest standard possible - not merely a good researcher, but a great one, the sort of person who can bring whole new paradigms/subfields into existence (Kahneman & Tversky, Von Neumann, Shannon, Einstein, etc) - and then noting that, because the tails come apart (aka regressional Goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers (I realise that probably wasn't true for Von Neumann, but I think it was true for all the others).

Comment by ben-pace on 500 Million, But Not A Single One More · 2018-09-14T01:39:59.921Z · score: 2 (2 votes) · EA · GW

It is remarkable what humans can do when we think carefully and coordinate.

This short essay inspires me to work harder for the things I care about. Thank you for writing it.

Comment by ben-pace on Additional plans for the new EA Forum · 2018-09-12T06:24:58.650Z · score: 1 (1 votes) · EA · GW

Yeah, this matches my personal experience a bunch. I'm planning to look into this literature sometime soon, but I'd be interested to know if anyone has strong opinions about what first-principles model best fits with the existing work in this area.

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:33:54.614Z · score: 3 (3 votes) · EA · GW

I don't have the time to join the debate, but I'm pretty sure Dunja's point isn't "I know that OpenPhil's strategy is bad" but "Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?" It seems like people act as though OpenPhil's strategy is good, and aren't massively confused by / explicitly clear about the fact that they don't have the info required to assess the strategy.

Dunja, is that accurate?

(Small note: I'd been meaning to try to read the two papers about continental drift and whatnot that you linked me to above a couple of months ago, but I couldn't get non-paywalled versions. If you have them, or could send them to me at gmail.com preceded by 'benitopace', I'd appreciate that.)

Comment by ben-pace on Wrong by Induction · 2018-09-07T18:58:21.535Z · score: 5 (5 votes) · EA · GW

Is this a real quote from Kant?

The usual touchstone, whether that which someone asserts is merely his persuasion — or at least his subjective conviction, that is, his firm belief — is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him.

Seriously though? I feel like we should've shouted this from the rooftops if it were so. This is an awesome quote. Where exactly is it from / did you find it?