Comment by ben-pace on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T19:41:44.322Z · score: 20 (8 votes) · EA · GW

I think this comment suggests there's a wide inferential gap here. Let me see if I can help bridge it a little.

If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.

I feel fairly strongly that this goal is still important. I think that the most valuable resource the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it's not because he has the reasoning skills of a typical Math Olympiad winner. There are many levels of skill, and Nick Bostrom's is much higher[1].

It seems to me that these higher-level skills are not easily taught, even to the brightest minds. Notice how society's massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they'd done. HPMOR's author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works in this respect. See the OP for more models of what HPMOR does especially right here.

In general I think someone's ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA [2]. I don't think that any of the EA materials you mention help people gain this skill. But I think that, for some people, HPMOR does.

I'm focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective: when I look over the grants, this feels to me like one of the 'safest bets'. I am interested to know whether this perspective makes the grant's intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.

---

[1] I am not sure exactly how widespread this knowledge is. Let me just say that it's not Bostrom's political skills that got him where he is. When the future head of IARPA decided to work at FHI, Bostrom's main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.

[2] Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Then again, I think the first part matters more for getting people to do the useful work you've not already figured out how to incentivise - I don't think we've figured it all out yet.

Comment by ben-pace on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T12:10:32.516Z · score: 8 (3 votes) · EA · GW

Ah yes, agree. I meant coordination, not collusion. Promotion also seems fine.

Comment by ben-pace on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-03T11:11:18.552Z · score: 3 (2 votes) · EA · GW

MIRI helped us know how much to donate and how much of a multiplier it would be, and updated this recommendation as other donors made their moves. I added something like $80 at one point because a MIRI person told me it would have a really cool multiplier, but not if I donated a lot more or a lot less.
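For anyone unfamiliar with why the recommended amount would shift as other donors move: under the idealised quadratic funding formula from Buterin, Hitzig & Weyl, a project's total funding is the square of the sum of the square roots of the individual contributions, so the marginal match per dollar is high for small top-ups and falls off as your own contribution grows. The sketch below is only my own illustration of that effect, with made-up numbers and hypothetical function names - not the actual rules of the fundraiser (real implementations differ, e.g. by capping the matching pool, which presumably explains why donating a lot more or a lot less would have changed the multiplier as described above).

```python
# A minimal, illustrative sketch (my own, with made-up numbers) of the
# idealised quadratic funding formula: total funding for a project is
# (sum of square roots of contributions)^2.
from math import sqrt

def total_funding(contributions):
    return sum(sqrt(c) for c in contributions) ** 2

def marginal_multiplier(existing, my_donation):
    """Dollars of total funding added per dollar of my marginal donation."""
    return (total_funding(existing + [my_donation]) - total_funding(existing)) / my_donation

existing = [50, 120, 30, 200, 10]   # hypothetical other donors
for mine in [20, 80, 500, 5000]:
    print(f"${mine}: multiplier ~{marginal_multiplier(existing, mine):.1f}x")
# The multiplier shrinks as your own contribution grows, so the 'sweet spot'
# for any one donor depends on what everyone else has given so far.
```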

Comment by ben-pace on Request for comments: EA Projects evaluation platform · 2019-03-22T22:49:39.293Z · score: 12 (4 votes) · EA · GW

I imagined Alex was talking about the grant reports, which are normally built around “case for the grant” and “risks”. Example: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology

Comment by ben-pace on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T13:49:07.992Z · score: 17 (6 votes) · EA · GW

I haven't yet finished thinking about how the EA Forum Team should go about doing this, given their particular relationship to the site's members, but here's a few thoughts.

I think that, for a platform to incentivise long-term intellectual progress in a community, it's important that there are individuals trusted on the platform to promote the best content to a place on the site that is both lasting and clearly more important than other content, as I and others have done on the AI Alignment Forum and LessWrong. Otherwise the site devolves into a news site, with a culture that depends on who turns up that particular month.

I do think the previous incarnation of the EA Forum was much more of a news site, where the most activity occurred when people turned up to debate the latest controversy posted there, and that the majority of posts and discussion on the new Forum are much more focused on the principles and practice of EA, rather than on conflict in the community.

(Note that, while it is not the only or biggest difference, LessWrong and Hacker News both have the same sorting algorithm on their posts list, yet LW has the best content shown above the recent content, and thus is more clearly a site that rewards the best content over the most recent content.)

It's okay to later build slower and more deliberative processes for figuring out what gets promoted (although you must move much more quickly than the present-day academic journal system, and with more feedback between researchers and evaluators). I think the Forum's monthly prize system is a good way to incentivise good content, but it crucially doesn't ensure that the rewarded content will continue to be read by newcomers 5 years after it was written. (Added: And similarly, current new EAs on the Forum are not reading the best EA content of the past 10 years, just the most recent content.)

I agree it's good for members of the community to be able to curate content themselves. Right now anyone can build a sequence on LessWrong, and then the LW team moves some of them up into a curated section, which later gets highlighted on the front page (see the library page, which will become more prominent on the site after our new frontpage rework). I can imagine this being an automatic process based on voting, but I have an intuition that it's good for humans to be in the loop. One reason is that when humans make decisions, you can ask why, but when 50 people vote, it's hard to interrogate that system as to the reason behind its decision, and improve its reasoning the next time.

(Thanks for your comment Brian, and please don't feel any obligation to respond. I just noticed that I didn't intuitively agree with the thrust of your suggestion, and wanted to offer some models pointing in a different direction.)

Comment by ben-pace on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T23:36:10.372Z · score: 19 (11 votes) · EA · GW

I did spend a day or two collating some potential curated sequences for the forum.

  • I still have a complete chronological list of all public posts between Eliezer and Holden (& friends) on the subject of Friendly AI, which I should publish at some point
  • I spent a while reading through the work of people like Nick Bostrom and Brian Tomasik (I didn't realise how much amazing stuff Tomasik had written)
  • I found a bunch of old EA blogs by people like Paul Christiano, Carl Shulman, and Sam Bankman-Fried that would be good to collate the best pieces from
  • I constructed mini versions of things like the Sequences, the Codex, and Owen Cotton-Barratt's excellent intro to EA (Prospecting for Gold) as ideas for curated sequences on the Forum.

I think it would be good from a long-term community norms standpoint to know that great writing will be curated and read widely.

Alas, CEA did not seem to have the time to work through any sequences (it seemed like there were a lot of worries about what signals the sequences would send, and working through those worries was very slow going). At some point, if this ever gets going again, it would be good to have a discussion pointing to any good old posts that should be included.

Comment by ben-pace on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T19:30:48.527Z · score: 4 (4 votes) · EA · GW

+1, a friend of mine thought it was an official statement from CEA when he saw the headline, and was thoroughly surprised and confused

Comment by ben-pace on EA grants available to individuals (crosspost from LessWrong) · 2019-02-07T22:00:34.924Z · score: 2 (2 votes) · EA · GW

(Your crossposting link goes to the edit page of your post, not the post itself.)

Comment by ben-pace on EA Forum Prize: Winners for December 2018 · 2019-01-30T23:44:27.250Z · score: 2 (2 votes) · EA · GW

Woop! Congrats to all the prize winners. Great posts!

Comment by ben-pace on Simultaneous Shortage and Oversupply · 2019-01-29T07:52:49.329Z · score: 4 (3 votes) · EA · GW

Conceptually related: SSC on Joint Over- and Underdiagnosis.

Comment by ben-pace on Disentangling arguments for the importance of AI safety · 2019-01-24T20:07:43.733Z · score: 2 (2 votes) · EA · GW

I think this is a good comment about how the brain works, but do remember that the human brain can both hunt in packs and do physics. Most systems you might build to hunt are not able to do physics, and vice versa. We're not perfectly competent, but we're still general.

Comment by ben-pace on The Global Priorities of the Copenhagen Consensus · 2019-01-08T08:00:26.045Z · score: 4 (4 votes) · EA · GW

+1 on being confused; I've heard good things about CC. Just now checking the Wikipedia page, their actual priorities list is surprisingly close to GiveWell's priority lists (macronutrients, malaria, deworming, and then further down cash transfers) - and I see Thomas Schelling was on the panel! In particular he seems to have criticised the use of discount rates in evaluating the impact of climate change (which sounds close to an x-risk perspective).

I would be interested in a write-up from anyone who looked into it and made a conscious choice not to associate with / not to try to coordinate with them, about why they made that choice.

Comment by ben-pace on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-31T19:25:42.174Z · score: 2 (2 votes) · EA · GW

+1 Distill is excellent and high-quality, and plausibly has important relationships to alignment. (FYI some of the founders recently joined OpenAI, if you're figuring out which org to put it under, though Distill is probably its own thing.)

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-23T07:11:53.034Z · score: 2 (2 votes) · EA · GW

That all makes a lot of sense! Thanks.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:18:20.690Z · score: 2 (2 votes) · EA · GW

I think it does; it's just unlikely to change it by all that much.

Imagine there are two donor lotteries, each of which has had $40k donated to it: one with lots of people in the lottery you think are very thoughtful about what projects to donate to, and one with lots of people you think are not thoughtful about what projects to donate to. You're considering which to add your $10k to. In either one the returns are good in expectation, purely based on you getting a 20% chance to 5x your donation (which is good if you think there are increasing marginal returns to money at this level), but in the other 80% of worlds you have a preference for your money being allocated by people who are more thoughtful.
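To make that arithmetic explicit, here's a minimal sketch using the numbers from the example above (the variable names are just mine):

```python
# Donor lottery expected value with the numbers above: a $40k pool plus your $10k.
pool_before = 40_000
my_donation = 10_000
pool_after = pool_before + my_donation

p_win = my_donation / pool_after           # 10k / 50k = 0.2, a 20% chance to win
multiplier = pool_after / my_donation      # if you win, you direct 5x your donation
expected_allocation = p_win * pool_after   # = 10k, i.e. equal to your donation

print(p_win, multiplier, expected_allocation)  # 0.2 5.0 10000.0
# In expectation you allocate exactly what you put in; the case for entering
# rests on increasing marginal returns at this scale, plus how much you trust
# the other entrants in the 80% of worlds where one of them allocates the pool.
```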

This isn't the main consideration - unless you think the other people will do something actively very harmful with the money. You'd have to think that the other people will (in expectation) do something worse with a marginal $10k than the good you would do by giving away $10k yourself.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:12:26.878Z · score: 1 (1 votes) · EA · GW

I think there are busy people who have the connections to make a good grant but won't have the time to write a full report. In fact, I think there are many competent people who are very busy.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T19:31:39.488Z · score: 12 (6 votes) · EA · GW

You're right that I had subtly become nervous about joining the donor lottery because "then I'd have to do all the work that Adam did". Thanks for reminding me I don't have to if it doesn't seem worth the opportunity cost, and that I can just donate to whatever seems like the best opportunity given my own models :)

Comment by ben-pace on Long-Term Future Fund AMA · 2018-12-19T22:53:07.557Z · score: 8 (5 votes) · EA · GW

I also think this sort of question might be useful to ask on a more individual basis - I expect each fund manager to have a different answer to this question that informs what projects they put forward to the group for funding, and which projects they'd encourage you to inform them about.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-28T14:42:57.851Z · score: 4 (4 votes) · EA · GW
Also it may be the case that if someone the grant-makers would be excited about had applied, they would have given them support, but there weren't such applicants. (Note that Bay Area biosec got the grant)

When I spoke to ~3 people about it in the Bay, none of them knew the grant existed or that there was an option for them to work on community building in the Bay full-time.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T22:04:51.875Z · score: 10 (8 votes) · EA · GW

As far as I know, CEA doesn't run any regular events or community spaces in the Bay, or fund people to do active community building there, which seems odd given the density of EAs in the area and thus the marginal benefit of increased coordination there.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T18:16:10.164Z · score: 3 (2 votes) · EA · GW

+1 this seems really quite weird

Comment by ben-pace on 2018 GiveWell Recommendations · 2018-11-26T19:07:05.572Z · score: 4 (3 votes) · EA · GW

(fyi, you don't need to add '[link]' to the title, the site does it automatically)

Comment by ben-pace on MIRI 2017 Fundraiser and Strategy Update · 2018-11-26T14:28:13.888Z · score: 2 (2 votes) · EA · GW

Yep, seems like a database error of sorts. Probably a site-admin should set the post back to its original post date which is December 1st 2017.

Comment by ben-pace on Takeaways from EAF's Hiring Round · 2018-11-20T21:56:04.532Z · score: 10 (4 votes) · EA · GW

I think that references are a big deal, and putting them off as a 'safety check' after the offer is made seems weird. That said, I agree they can be a blocker for applicants at the early stage - an applicant may want to ask a senior person to be a reference if they're seriously being considered, but not ask if they're not, and not want to bet wrong.

Comment by ben-pace on Effective Altruism Making Waves · 2018-11-16T03:14:33.599Z · score: 2 (2 votes) · EA · GW

Do you have rough data on quantity of tweets over time?

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:23:09.618Z · score: 4 (3 votes) · EA · GW

I think it is easy to grow too early, and I think that many of the naive ways of putting effort into growth would be net negative compared to the counterfactual (somewhat analogous to a company that quickly makes $1 million when it might've made $1 billion).

Focusing on actually making more progress with the existing people, by building more tools for them to coordinate and collaborate, seems to me the current marginal best use of resources for the community.

(I agree that effort should be spent improving the community, I just think 'size' isn't the right dimension to improve.)

Added: I suppose I should link back to my own post on the costs of coordinating at scale.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:19:06.557Z · score: 2 (2 votes) · EA · GW

Bostrom has also cited him in his papers.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-13T23:17:07.890Z · score: 15 (7 votes) · EA · GW
there isn't even an organisation dedicated to growing the movement

Things that are not movements:

  • Academic physics
  • Successful startups
  • The rationality community

They all need to grow to some extent, but they have a particular goal that is not generic 'growth'. Most 'movements' are primarily looking for something like political power, and I think that's a pretty bad goal to optimise for. It's the perennial offer to all communities that scale: "try to grab political power". I'm quite happy to continue being for something other than that.

Regarding the size of the rationality and EA communities right now, this doesn't really seem to me like a key metric? A more important variable is whether you have infrastructure that sustains quality at the scale the community is at.

  • The standard YC advice says the best companies stay small for a long time. An example of Paul Graham saying this is here; search for "I may be an extremist, but I think hiring people is the worst thing a company can do."
  • There are many startups that have $500 million and 100 more employees than your startup, but don't actually have product-market fit and are going to crash next year. Whereas you might work for 5-10 years and then have a product that can scale to several billion dollars of value. Again, scaling right now seems shiny and appealing, but it's often something you should fight against.
  • Regarding growth in the rationality community, I think a scientific field is a useful analogue. If I told you I'd started some new field and in the first 20 years I'd gotten a research group in every university, would this necessarily be good? Am I machine learning? Am I bioethics? I bet all the fields that hit the worst of the replication crisis experienced fast growth at some point in the past 50 years. Regardless of intentions, the infrastructure matters, and it's not hard to simply make the world worse.

Other thoughts: I agree that the rationality project has resulted in a number of top people working on AI x-risk, effective altruism, and related projects, and that the ideas produced a lot of the epistemic bedrock for the community to be successful at noticing important new ideas. I am also sad there hasn't been better internal infrastructure built in the past few years. As Oli Habryka said downthread (amongst some other important points), the org I work at that built the new LessWrong (and the AI Alignment Forum and EA Forum, which is evidence for your 'rationalists work on AI and EA' claim ;) ) is primarily trying to build community infrastructure.

Meta thoughts: I really liked the OP; it concisely brought up a relevant proposal and placed it clearly in the EA frame (Pareto principle, heavy-tailed outcomes, etc.).

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-12T15:29:31.188Z · score: 1 (1 votes) · EA · GW

<unfinished>

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-10T00:42:32.917Z · score: 2 (2 votes) · EA · GW

*nods* I think what I wrote there wasn't very clear.

To restate my general point: I'm suggesting that your general frame contains a weird inversion. You're supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others' behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.

In the first one, you would be surprised to find out we've randomly been selected to have the right morality by evolution. In the second, it's almost definitional that evolution has produced us to have the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.

Does the former seem like an accurate description of the way you're proposing to think about morality?

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-09T20:46:55.054Z · score: 7 (7 votes) · EA · GW

It's been many years (about 6?) since I've read an argument like this, so, y'know, you win on nostalgia. I also notice that my 12-year-old self would've been really excited to be in a position to write a response to this, and given that I've never actually responded to this argument outside of my own head (and am otherwise never likely to in the future), I'm going to do some acausal trade with my 12-year-old self here: below are my thoughts on the post.

Also, sorry it's so long, I didn't have the time to make it short.

I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here's what seems to me to be a key crux of the argument (I've bolded the key sentences):

It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
...[I]magine that moral reasons were all centred around maximising the number of paperclips in the universe. It’s not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons is more complicated, see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.

Object-level response: this is confused about how values come into existence.

The things I care about aren't written into the fabric of the universe. There is no clause in the laws of physics to distinguish what's good and bad. I am a human being with desires and goals, and those are things I *actually care about*.

For any 'moral' law handed to me on high, I can always ask why I should care about it. But when I actually care, there's no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking "Yeah, but why should I care about this?" These sorts of things I'm happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.

(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright's "The Moral Animal" was really great, and Joshua Greene's "Moral Tribes" is a slightly more abstract version that also contains some key insights about how morality actually works.)

My model of the person who believes the OP wants to say

"Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they're actually good?"

To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there's something else I should care about instead, the world just makes sense now.

To point to an example of the process turning out the other way: there have been a variety of updates I've made where I no longer trust or endorse basic emotions and intuitions, because a variety of factors have all pointed in the same direction:

  • Learning about scope insensitivity and framing effects
  • Learning about how the rate of economic growth has changed so suddenly since the industrial revolution (i.e. very recently in evolutionary terms)
  • Learning about the various Dutch book theorems and axioms of rational behaviour that imply a rational agent is equivalent to an expected-utility maximiser.

These have radically changed which of my impulses I trust, endorse, and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups of very different scales and failing at that goal, so I learn to ignore those and teach myself to do the normative reasoning (e.g. intuitively taking orders of magnitude into account), because it's what I reflectively care about.

I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn't in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong evidential backing described above, isn't how this works.

Meta-level response: I don't trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I'm actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.

I haven't personally read any of his books, but I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. The most recent wave of this philosophy-of-religion stuff, though, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public debater William Lane Craig (whom I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.

Here are some relevant quotes from Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):

…the way we know Christianity to be true is by the self-authenticating witness of God’s Holy Spirit. Now what do I mean by that? I mean that the experience of the Holy Spirit is… unmistakable… for him who has it; …that arguments and evidence incompatible with that truth are overwhelmed by the experience of the Holy Spirit…

…it is the self-authenticating witness of the Holy Spirit that gives us the fundamental knowledge of Christianity’s truth. Therefore, the only role left for argument and evidence to play is a subsidiary role… The magisterial use of reason occurs when reason stands over and above the gospel… and judges it on the basis of argument and evidence. The ministerial use of reason occurs when reason submits to and serves the gospel. In light of the Spirit’s witness, only the ministerial use of reason is legitimate. Philosophy is rightly the handmaid of theology. Reason is a tool to help us better understand and defend our faith…

[The inner witness of the Spirit] trumps all other evidence.

My impression is that it's fair to characterise modern apologetics as a search for arguments to provide in defense of pre-existing beliefs - not as the cause of those beliefs, nor as an accurate model of the world. Recall the principle of the bottom line:

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.  If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing.  But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for.  In this case, the real algorithm is "Never repair anything expensive."  If this is a good algorithm, fine; if this is a bad algorithm, oh well.  The arguments you write afterward, above the bottom line, will not change anything either way.

My high-confidence understanding of the whole space of apologetics is that the process generating these arguments is, on a basic level, not systematically correlated with reality (and man, argument space is so big that just choosing which hypothesis to privilege is most of the work, so it's not even worth exploring the particular mistakes made once you've reached this conclusion).

This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument as severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and in understanding their views, because their models have predicted lots of other really important stuff. Philosophy of religion is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor in understanding some phenomenon of the world where it's actually made progress; it is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don't think it's worth spending time engaging with intellectually.

If you find yourself confused by a theologian's argument, I don't mean to say you should ignore that and pretend that you're not confused. That's a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting or useful; it will just turn out to be a silly error. I also don't expect the field of theology / philosophy of religion / apologetics to accept your result; I think there will be further confusions, and I think this is fine and correct and you should move on to other more important problems.

---

To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here, and did not mean to signal that I would respond to further comments in this thread any less than usual :)

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T22:59:22.103Z · score: 1 (1 votes) · EA · GW

It has never occurred to me that pulling an all-nighter should imply eating more, though it seems like such a natural conclusion in retrospect (that said, I strongly avoid pulling all-nighters).

What's the actual reasoning? How does the body determine how much food it can take in, and where does the energy expenditure come from, precisely? Movement? Cognitive work?

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T17:42:27.489Z · score: 5 (5 votes) · EA · GW

In general I moved from a model where the limiting factor was the absolute number of hours worked to one where it's the quality of peak hours in the day, where (I believe) the latter is much higher variance and also significantly affected by not getting sufficient sleep. I moved from taking modafinil (which never helped me) to taking melatonin (which helps a lot), and always letting myself sleep in as much as I need. I think this has helped a lot.

Comment by ben-pace on EA Concepts: Share Impressions Before Credences · 2018-10-19T18:06:30.106Z · score: 3 (3 votes) · EA · GW

Yeah. As I've said before, it's good to be fully aware of what you understand, what model your inside view is using, and what credence it outputs, before and separately from any social updating of the decision-relevant credence. Or at least, this is the right thing to do if you want to have accurate models in the long run, rather than accurate decision-relevant credences in the short run.

Comment by ben-pace on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-09T06:53:53.965Z · score: 4 (4 votes) · EA · GW

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields.

Quite. I think that my model of Eli was setting the highest standard possible - not merely a good researcher, but a great one, the sort of person who can bring whole new paradigms/subfields into existence (Kahneman & Tversky, Von Neumann, Shannon, Einstein, etc.) - and then noting that, because the tails come apart (aka regressional goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers. (I realise that probably wasn't true for Von Neumann, but I think it was true for all the others.)

Comment by ben-pace on 500 Million, But Not A Single One More · 2018-09-14T01:39:59.921Z · score: 2 (2 votes) · EA · GW

It is remarkable what humans can do when we think carefully and coordinate.

This short essay inspires me to work harder for the things I care about. Thank you for writing it.

Comment by ben-pace on Additional plans for the new EA Forum · 2018-09-12T06:24:58.650Z · score: 1 (1 votes) · EA · GW

Yeah, this matches my personal experience a bunch. I'm planning to look into this literature sometime soon, but I'd be interested to know if anyone has strong opinions about what first-principles model best fits with the existing work in this area.

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:33:54.614Z · score: 3 (3 votes) · EA · GW

I don't have the time to join the debate, but I'm pretty sure Dunja's point isn't "I know that OpenPhil's strategy is bad" but "Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?" It seems like people act as though OpenPhil's strategy is good, rather than being explicitly clear that they don't have the info required to assess the strategy.

Dunja, is that accurate?

(Small note: I'd been meaning to try to read the two papers you linked me to above a couple months ago about continental drift and whatnot, but I couldn't get non-paywalled versions. If you have them and could send them to me at gmail.com preceded by 'benitopace', I'd appreciate that.)

Comment by ben-pace on Wrong by Induction · 2018-09-07T18:58:21.535Z · score: 5 (5 votes) · EA · GW

Is this a real quote from Kant?

The usual touchstone, whether that which someone asserts is merely his persuasion — or at least his subjective conviction, that is, his firm belief — is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him.

Seriously though? I feel like we should've shouted this from the rooftops if it were so. This is an awesome quote. Where exactly is it from / did you find it?

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-07T16:06:35.839Z · score: 2 (2 votes) · EA · GW

I also have never read anything on Felicifia.org (but would like to)! If there's anything easy to link to, I'd be interested to have a read through any archived content that you thought was especially good / novel / mind-changing.

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-07T04:44:57.557Z · score: 2 (2 votes) · EA · GW

I never read Nick's thesis. I'm curious if there are particular sections you can point to that might give me a sense of why it was influential on you? I have a vague sense that it's primarily mathematical population ethics calculations or something, and I'm guessing I might be wrong.

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-06T20:25:48.537Z · score: 6 (3 votes) · EA · GW

The LessWrong Sequences really changed the way I think (after I first read posts like Epistemologies of Reckless Endangerment on Luke Muehlhauser's Common Sense Atheism). If I think back to the conversations I had as a teenager in school and the general frameworks I still use today, the posts that were most influential on me were (starting with the most important):

And then later reading HPMOR was a Big Deal, for really feeling what it would be like to act throughout my life in accordance with these models. Those, I think, were the biggest reading experiences for me (and some of the most influential things on my decisions and how I live my life). Everything in EA felt very natural to me after that.

Comment by ben-pace on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T19:46:12.527Z · score: 4 (6 votes) · EA · GW

I think that one of the constraints that is faced here is a lack of experienced grantmakers who have a good knowledge of x-risk and the EA community.

I'm not sure I agree that this constraint is real - I think I probably know a lot of good people who I'd trust to be competent EA / x-risk grantmakers - but I certainly haven't spent 10 hours thinking about what the key qualities for the role are, and it's plausible that I'd find there are far fewer people competent enough than I currently think.

But if there are more grant managers, I think I disagree with the costs you describe. Two or more grantmakers acting on their own, different, first-principles models seems great to me, and seems to increase the likelihood of good grantmaking occurring, rather than increasing tension or anything like that. Competition is really rare and valuable in domains like this.

Comment by ben-pace on EA Forum 2.0 Initial Announcement · 2018-07-23T12:50:21.172Z · score: 5 (5 votes) · EA · GW

Yup, we actually already built this for LessWrong 2.0 (check it out on the frontpage, where each post says how many minutes of reading it is), so you'll get it when the CEA team launches the new EA Forum 2.0.

Comment by ben-pace on EA Forum 2.0 Initial Announcement · 2018-07-22T19:18:00.121Z · score: 7 (7 votes) · EA · GW

I actually have made detailed notes on the first 65% of the book, and hope to write up some summaries of the chapters.

It's a great work. Doing the relevant literature reviews myself would likely have taken hundreds of hours, rather than the tens of hours it took to study the book. As with all social science, the conclusions from most of the individual studies are suspect, but I think it sets out some great and concrete models to start from and test against other data we have.

Added: I’m Ben Pace, from LessWrong.

Added2: I finished the book. Not sure when my priorities will allow me to turn it into blogposts, alas.

Comment by ben-pace on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-24T11:16:09.144Z · score: 6 (6 votes) · EA · GW

I’m such a big fan of “outreach is an offer, not persuasion”.

In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)

If your quest involves sales as a side-effect (e.g. you're running an EAGx) then that's okay, as long as the core of what you're doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just "get more EAs".

I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone's psyche. Nonetheless I think this is what's necessary.

Comment by ben-pace on How to improve EA Funds · 2018-04-05T19:22:51.014Z · score: 1 (1 votes) · EA · GW

Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to them, not the marketing used to advertise them - which does indeed feel disconnected. It seems there's a confusing attempt to make this seem reasonable to everyone, whilst in fact not offering the sort of evidence that would make it so.

The evidence for the funds is not the 'evidence-backed charities' that made GiveWell famous and trustworthy, but rather "here is a high-status person in a related field who has a strong connection to EA", which seems not that different from the way other communities ask their members for funding - it's based on trust in the leaders of the community, not on metrics that are objectively verifiable to outsiders. So you should ask yourself what causes you to trust CEA and then use that, as opposed to the objective metrics associated with EA Funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and also made good hiring decisions, that would make you trust the grant managers more.

Comment by ben-pace on How to improve EA Funds · 2018-04-05T00:52:10.328Z · score: 5 (5 votes) · EA · GW

Note: EA is totally a trust network - I don't think the funds are trying to be anything like GiveWell, who you're supposed to trust based on the publicly verifiable rigour of their research. EA Funds is much more toward the end of the spectrum of "have you personally seen CEA make good decisions in this area" or "do you specifically trust one of the re-granters". Which is fine - trust is how tightly-knit teams and communities often get made. But if you gave to it thinking "this will be like giving to Oxfam, and will have the same accountability structure", then you'll correctly be surprised to find out it works significantly via personal connections.

In the same way that you'd only fund a startup if you knew the founders and how they worked, you should probably only fund EA Funds for similar reasons - and if a startup tried to make its business plan such that anyone would have reason to fund it, the business plan probably wouldn't be very good. I think that EA should continue to be a trust-based network, and so on the margin I'd guess people should give less to EA Funds, rather than EA Funds making grants that are more defensible.

Comment by ben-pace on How to improve EA Funds · 2018-04-04T18:46:26.781Z · score: 12 (12 votes) · EA · GW

On trust networks: these are very powerful and effective. Y Combinator, for example, says it gets most of its best companies via personal recommendation, and the top VCs say that the best way to get funded by them is an introduction from someone they trust.

(Btw, I got an EA Grant last year, I expect in large part because CEA knew me from my having successfully run an EAGx conference. I think the above argument is strong on its own, but my guess is many folks around here would like me to mention this fact.)

On things you can do with your money that are better than EA funds: personally I don’t have that much money, but with my excess I tend to do things like buy flights and give money to people I’ve made friends with who seem like they could get a lot of value from it (e.g. buy a flight to a CFAR workshop, fund them living somewhere to work on a project for 3 months, etc). This is the sort of thing only a small donor with personal connections can do, at least currently.

On EA grants:

Part of the early-stage project grant support problem is that it generally means investing in people. Investing in people needs either trust or a lot of resources to evaluate the people (which is in some aspects more difficult than evaluating projects which are up and running)

Yes. If I were running EA Grants I would continually be in contact with the community, finding out people's project ideas, discussing them with people for 5 hours, getting to know them and how much I could trust them, and then handing out money as I saw fit. This is one of the biggest funding bottlenecks in the community. The people who seem to have addressed it best have actually been the winners of the donor lotteries, who seemed to take it seriously and use the personal information they had.

I haven't even heard about EA Grants this time around, which seems like a failure on all the obvious axes (including that of letting grantees know that the EA community is a reliable source of funding that you can make multi-year plans around - this makes me mostly update toward EA Grants being a one-off thing that I shouldn't rely on).

Comment by ben-pace on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:09:04.040Z · score: 1 (1 votes) · EA · GW

Thanks Holden!