Comment by ben-pace on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T19:30:48.527Z · score: 4 (4 votes) · EA · GW

+1, a friend of mine thought it was an official statement from CEA when he saw the headline, and was thoroughly surprised and confused.

Comment by ben-pace on EA grants available to individuals (crosspost from LessWrong) · 2019-02-07T22:00:34.924Z · score: 2 (2 votes) · EA · GW

(Your crossposting link goes to the edit page of your post, not the post itself.)

Comment by ben-pace on EA Forum Prize: Winners for December 2018 · 2019-01-30T23:44:27.250Z · score: 2 (2 votes) · EA · GW

Woop! Congrats to all the prize winners. Great posts!

Comment by ben-pace on Simultaneous Shortage and Oversupply · 2019-01-29T07:52:49.329Z · score: 4 (3 votes) · EA · GW

Conceptually related: SSC on Joint Over- and Underdiagnosis.

Comment by ben-pace on Disentangling arguments for the importance of AI safety · 2019-01-24T20:07:43.733Z · score: 2 (2 votes) · EA · GW

I think this is a good comment about how the brain works, but do remember that the human brain can both hunt in packs and do physics. Most systems you might build to hunt are not able to do physics, and vice versa. We're not perfectly competent, but we're still general.

Comment by ben-pace on The Global Priorities of the Copenhagen Consensus · 2019-01-08T08:00:26.045Z · score: 4 (4 votes) · EA · GW

+1 on being confused; I've heard good things about CC. Just now checking the Wikipedia page, their actual priorities list is surprisingly close to GiveWell's priority lists (macronutrients, malaria, deworming, and then further down cash transfers) - and I see Thomas Schelling was on the panel! In particular he seems to have criticised the use of discount rates in evaluating the impact of climate change (which sounds close to an x-risk perspective).

I would be interested in a write-up from anyone who looked into it and made a conscious choice to not associate with / to not try to coordinate with them, about why they made that choice.

Comment by ben-pace on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-31T19:25:42.174Z · score: 2 (2 votes) · EA · GW

+1, Distill is excellent and high-quality, and plausibly has important relationships to alignment. (FYI, some of the founders recently joined OpenAI, if you're figuring out which org to put it under, though Distill is probably its own thing.)

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-23T07:11:53.034Z · score: 2 (2 votes) · EA · GW

That all makes a lot of sense! Thanks.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:18:20.690Z · score: 2 (2 votes) · EA · GW

I think it does; it's just unlikely to change it by all that much.

Imagine there are two donor lotteries, each of which has already had $40k donated to it: one where the other participants seem very thoughtful about which projects to donate to, and one where they seem not at all thoughtful. You're considering which to add your $10k to. In either one the returns are good in expectation, purely because you get a 20% chance to 5x your donation (which is good if you think there are increasing marginal returns to money at this level), but in the other 80% of worlds you'd also prefer your money to be allocated by the more thoughtful people.
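A minimal sketch of that arithmetic (my own illustration, using the hypothetical figures from the paragraph above):

```python
# Donor lottery arithmetic: adding $10k to a pool that already holds $40k.
my_donation = 10_000
existing_pool = 40_000

total_pot = my_donation + existing_pool   # $50,000 pot
p_win = my_donation / total_pot           # 10/50 = 20% chance of winning

# Winning means directing 5x your donation; in expectation you direct exactly
# what you put in, and in the other 80% of worlds someone else directs the pot.
multiplier = total_pot / my_donation      # 5x
expected_directed = p_win * total_pot     # $10,000

print(f"Win probability: {p_win:.0%}")
print(f"Pot if you win: ${total_pot:,} ({multiplier:.0f}x your donation)")
print(f"Expected dollars you direct: ${expected_directed:,.0f}")
```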

This isn't the main consideration, though - unless you think the other participants will do something actively very harmful with the money. You'd have to think that, in expectation, they would do more harm with a marginal $10k than the good you'd do by giving away $10k yourself.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T21:12:26.878Z · score: 1 (1 votes) · EA · GW

I think there are busy people who have the connections to make a good grant but won't have the time to write a full report. In fact, I think there are many competent people who are very busy.

Comment by ben-pace on Should donor lottery winners write reports? · 2018-12-22T19:31:39.488Z · score: 12 (6 votes) · EA · GW

You're right that I had subtly become nervous about joining the donor lottery because "then I'd have to do all the work that Adam did". Thanks for reminding me I don't have to if it doesn't seem worth the opportunity cost, and that I can just donate to whatever seems like the best opportunity given my own models :)

Comment by ben-pace on Long-Term Future Fund AMA · 2018-12-19T22:53:07.557Z · score: 8 (5 votes) · EA · GW

I also think this sort of question might be useful to ask on a more individual basis - I expect each fund manager to have a different answer to this question that informs what projects they put forward to the group for funding, and which projects they'd encourage you to inform them about.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-28T14:42:57.851Z · score: 4 (4 votes) · EA · GW
Also, it may be the case that if someone the grant-makers would be excited about had applied, they would have given them support, but there weren't such applicants. (Note that Bay Area biosec did get the grant.)

When I spoke to ~3 people about it in the Bay, none of them knew the grant existed or that there was an option for them to work on community building in the bay full time.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T22:04:51.875Z · score: 10 (8 votes) · EA · GW

CEA doesn't run any regular events, community spaces, or fund people to do active community building in the Bay that I know of, which seemed odd given the density of EAs in the area and thus the marginal benefit of increased coordination there.

Comment by ben-pace on EA Community Building Grants Update · 2018-11-27T18:16:10.164Z · score: 3 (2 votes) · EA · GW

+1 this seems really quite weird

Comment by ben-pace on 2018 GiveWell Recommendations · 2018-11-26T19:07:05.572Z · score: 4 (3 votes) · EA · GW

(fyi, you don't need to add '[link]' to the title, the site does it automatically)

Comment by ben-pace on MIRI 2017 Fundraiser and Strategy Update · 2018-11-26T14:28:13.888Z · score: 2 (2 votes) · EA · GW

Yep, seems like a database error of sorts. Probably a site admin should set the post back to its original post date, which is December 1st, 2017.

Comment by ben-pace on Takeaways from EAF's Hiring Round · 2018-11-20T21:56:04.532Z · score: 10 (4 votes) · EA · GW

I think that references are a big deal, and putting them off as a 'safety check' after the offer is made seems weird. That said, I agree they can be a blocker for applicants at the early stage: you want to ask a senior person to be a reference if you're seriously being considered, but not if you're not, and you don't want to bet wrong.

Comment by ben-pace on Effective Altruism Making Waves · 2018-11-16T03:14:33.599Z · score: 2 (2 votes) · EA · GW

Do you have rough data on quantity of tweets over time?

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:23:09.618Z · score: 2 (2 votes) · EA · GW

I think it is easy to grow too early, and I think that many of the naive ways of putting effort into growth would be net negative compared to the counterfactual (somewhat analogous to a company that quickly makes $1 million when it might've made $1 billion).

Focusing on actually making more progress with the existing people, by building more tools for them to coordinate and collaborate, seems to me the current marginal best use of resources for the community.

(I agree that effort should be spent improving the community, I just think 'size' isn't the right dimension to improve.)

Added: I suppose I should link back to my own post on the costs of coordinating at scale.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-14T14:19:06.557Z · score: 2 (2 votes) · EA · GW

Bostrom has also cited him in his papers.

Comment by ben-pace on Rationality as an EA Cause Area · 2018-11-13T23:17:07.890Z · score: 9 (6 votes) · EA · GW
there isn't even an organisation dedicated to growing the movement

Things that are not movements:

  • Academic physics
  • Successful startups
  • The rationality community

They all need to grow to some extent, but they have a particular goal that is not generic 'growth'. Most 'movements' are primarily looking for something like political power, and I think that's a pretty bad goal to optimise for. It's the perennial offer to all communities that scale: "try to grab political power". I'm quite happy to continue being for something other than that.

Regarding the size of the rationality and EA communities right now, this doesn't really seem to me like a key metric? A more important variable is whether you have infrastructure that sustains quality at the scale the community is at.

  • The standard YC advice says the best companies stay small for a long time. An example of Paul Graham saying this is here; search for "I may be an extremist, but I think hiring people is the worst thing a company can do."
  • There are many startups that have $500 million and 100 more employees than your startup, but don't actually have product-market fit and are going to crash next year. Whereas you might work for 5-10 years and then have a product that can scale to several billion dollars of value. Again, scaling right now seems shiny and appealing, but it's often something you should fight against.
  • Regarding growth in the rationality community, I think a scientific field is a useful analogue. If I told you I'd started some new field and in the first 20 years I'd gotten a research group into every university, would that necessarily be good? Am I machine learning? Am I bioethics? I bet all the fields that hit the worst of the replication crisis experienced fast growth at some point in the past 50 years. Regardless of intentions, the infrastructure matters, and it's not hard to simply make the world worse.

Other thoughts: I agree that the rationality project has resulted in a number of top people working on AI x-risk, effective altruism, and related projects, and that its ideas produced a lot of the epistemic bedrock for the community to be successful at noticing important new ideas. I am also sad there hasn't been better internal infrastructure built in the past few years. As Oli Habryka said downthread (amongst some other important points), the org I work at, which built the new LessWrong (and the AI Alignment Forum and EA Forum, which is evidence for your 'rationalists work on AI and EA' claim ;) ), is primarily trying to build community infrastructure.

Meta thoughts: I really liked the OP; it concisely brought up a relevant proposal and placed it clearly in the EA frame (Pareto principle, heavy-tailed outcomes, etc.).

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-12T15:29:31.188Z · score: 1 (1 votes) · EA · GW

<unfinished>

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-10T00:42:32.917Z · score: 2 (2 votes) · EA · GW

*nods* I think what I wrote there wasn't very clear.

To restate my general point: I'm suggesting that your general frame contains a weird inversion. You're supposing that there is an objective morality, and then wondering how we can find out about it and whether our moral intuitions are right. I first notice that I have very strong feelings about my and others' behaviour, and then attempt to abstract that into a decision procedure, and then learn which of my conflicting intuitions to trust.

In the first one, you would be surprised to find out we've randomly been selected to have the right morality by evolution. In the second, it's almost definitional that evolution has produced us to have the right morality. There's still a lot of work to do to turn the messy desires of a human into a consistent utility function (or something like that), which is a thing I spend a lot of time thinking about.

Does the former seem like an accurate description of the way you're proposing to think about morality?

Comment by ben-pace on Even non-theists should act as if theism is true · 2018-11-09T20:46:55.054Z · score: 7 (7 votes) · EA · GW

It's been many years (about 6?) since I've read an argument like this, so, y'know, you win on nostalgia. I also notice that my 12-year-old self would've been really excited to be in a position to write a response to this, and given that I've never actually responded to this argument outside of my own head (and am otherwise unlikely to in the future), I'm going to do some acausal trade with my 12-year-old self here: below are my thoughts on the post.

Also, sorry it's so long, I didn't have the time to make it short.

I appreciate you making this post relatively concise for arguments in its reference class (which usually wax long). Here's what seems to me to be a key crux of the argument (I've bolded the key sentences):

It is difficult to see how unguided evolution would give humans like Tina epistemic access to normative reasons. This seems to particularly be the case when it comes to a specific variety of reasons: moral reasons. There are no obvious structural connections between knowing correct moral facts and evolutionary benefit. (Note that I am assuming that non-objectivist moral theories such as subjectivism are not plausible. See the relevant section of Lukas Gloor's post here for more on the objectivist/non-objectivist distinction.)
...[I]magine that moral reasons were all centred around maximising the number of paperclips in the universe. It’s not clear that there would be any evolutionary benefit to knowing that morality was shaped in this way. The picture for other potential types of reasons, such as prudential reasons is more complicated, see the appendix for more. The remainder of this analysis assumes that only moral reasons exist.
It therefore seems unlikely that an unguided evolutionary process would give humans access to moral facts. This suggests that most of the worlds Tina should pay attention to - worlds with normative realism and human access to moral facts - are worlds in which there is some sort of directing power over the emergence of human agents leading humans to have reliable moral beliefs.

Object-level response: this is confused about how values come into existence.

The things I care about aren't written into the fabric of the universe. There is no clause in the laws of physics to distinguish what's good and bad. I am a human being with desires and goals, and those are things I *actually care about*.

For any 'moral' law handed to me on high, I can always ask why I should care about it. But when I actually care, there's no question. When I am suffering, when those around me suffer, or when someone I love is happy, no part of me is asking "Yeah, but why should I care about this?" These sorts of things I'm happy to start with as primitive, and this question of abstractly where meaning comes from is secondary.

(As for the particular question of how evolution created us and the things we care about, how the bloody selection of evolution could select for love, for familial bonds, for humility, and for playful curiosity about how the world works, I recommend any of the standard introductions to evolutionary psychology, which I also found valuable as a teenager. Robert Wright's "The Moral Animal" was really great, and Joshua Greene's "Moral Tribes" is a slightly more abstract version that also contains some key insights about how morality actually works.)

My model of the person who believes the OP wants to say

"Yes, but just because you can tell a story about how evolution would give you these values, how do you know that they're actually good?"

To which I explain that I do not worry about that. I notice that I care about certain things, and I ask how I was built. Understanding that evolution created these cares and desires in me resolves the problem - I have no further confusion. I care about these things and it makes sense that I would. There is no part of me wondering whether there's something else I should care about instead, the world just makes sense now.

To point to an example of the process turning out the other way: there's been a variety of updates I've made where I no longer trust or endorse basic emotions and intuitions, since a variety of factors have all pointed in the same direction:

  • Learning about scope insensitivity and framing effects
  • Learning about how the rate of economic growth has changed so suddenly since the industrial revolution (i.e. very recently in evolutionary terms)
  • Learning about the various Dutch book theorems and axioms of rational behaviour that imply a rational agent is equivalent to an expected-utility maximiser.

These have radically changed which of my impulses I trust and endorse and listen to. After seeing these, I realise that subprocesses in my brain are trying to approximate how much I should care about groups at very different scales and failing at that goal, so I learn to ignore those and teach myself to do normative reasoning (e.g. taking orders of magnitude into account intuitively), because that's what I reflectively care about.

I can overcome basic drives when I discover large amounts of evidence from different sources that predicts my experience, ties together into a cohesive worldview for me, and explains how the drive isn't in accordance with my deepest values. Throwing out the basic things I care about because of an abstract argument, with none of the strong evidential backing described above, isn't how this works.

Meta-level response: I don't trust the intellectual tradition of this group of arguments. I think religions have attempted to have a serious conversation about meaning and value in the past, and I'm actually interested in that conversation (which is largely anthropological and psychological). But my impression of modern apologetics is primarily one of rationalization, not the source of religion's understanding of meaning, but a post-facto justification.

Having not personally read any of his books, I hear C.S. Lewis is the guy who most recently made serious attempts to engage with morality and values. But the most recent wave of this philosophy of religion stuff, since the dawn of the internet era, is represented by folks like the philosopher/theologian/public-debater William Lane Craig (who I watched a bunch as a young teenager), who sees argument and reason as secondary to his beliefs.

Here's some relevant quotes of Lane Craig, borrowed from this post by Luke Muehlhauser (sources are behind the link):

…the way we know Christianity to be true is by the self-authenticating witness of God’s Holy Spirit. Now what do I mean by that? I mean that the experience of the Holy Spirit is… unmistakable… for him who has it; …that arguments and evidence incompatible with that truth are overwhelmed by the experience of the Holy Spirit…

…it is the self-authenticating witness of the Holy Spirit that gives us the fundamental knowledge of Christianity’s truth. Therefore, the only role left for argument and evidence to play is a subsidiary role… The magisterial use of reason occurs when reason stands over and above the gospel… and judges it on the basis of argument and evidence. The ministerial use of reason occurs when reason submits to and serves the gospel. In light of the Spirit’s witness, only the ministerial use of reason is legitimate. Philosophy is rightly the handmaid of theology. Reason is a tool to help us better understand and defend our faith…

[The inner witness of the Spirit] trumps all other evidence.

My impression is that it's fair to characterise modern apologetics as searching for arguments to provide in defense of their beliefs, and not as the cause of them, nor as an accurate model of the world. Recall the principle of the bottom line:

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.  If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing.  But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for.  In this case, the real algorithm is "Never repair anything expensive."  If this is a good algorithm, fine; if this is a bad algorithm, oh well.  The arguments you write afterward, above the bottom line, will not change anything either way.

My high-confidence understanding of the whole space of apologetics is that the process generating them is, on a basic level, not systematically correlated with reality (and man, argument space is so big, just choosing which hypothesis to privilege is most of the work, so it's not even worth exploring the particular mistakes made once you've reached this conclusion).

This is very different from many other fields. If a person with expertise in chemistry challenged me and offered an argument as severely mistaken as I believe the one in the OP to be, I would still be interested in further discussion and in understanding their views, because their models have predicted lots of other really important stuff. With philosophy of religion, it is neither based in the interesting parts of religion (which are somewhat more anthropological and psychological), nor is it based in understanding some phenomenon of the world where it's actually made progress; it is instead some entirely different beast, not searching for truth whatsoever. The people seem nice and all, but I don't think it's worth spending time engaging with intellectually.

If you find yourself confused by a theologian's argument, I don't mean to say you should ignore that and pretend that you're not confused. That's a deeply anti-epistemic move. But I think that resolving these particular confusions will not be interesting, or useful, it will just end up being a silly error. I also don't expect the field of theology / philosophy of religion / apologetics to accept your result, I think there will be further confusions and I think this is fine and correct and you should move on with other more important problems.

---

To clarify, I wrote down my meta-level response out of a desire to be honest about my beliefs here; it doesn't mean I'll be any less likely than usual to respond to further comments in this thread :)

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T22:59:22.103Z · score: 1 (1 votes) · EA · GW

It has never occurred to me that pulling an all-nighter should imply eating more, though it seems like such a natural conclusion in retrospect (though I strongly avoid pulling all-nighters).

What's the actual reasoning? How does the body determine how much food it can take in, and where exactly does the energy expenditure come from? Movement? Cognitive work?

Comment by ben-pace on Burnout: What is it and how to Treat it. · 2018-11-07T17:42:27.489Z · score: 5 (5 votes) · EA · GW

In general I moved from a model where the limiting factor was absolute number of hours worked to quality of peak hours in the day, where (I believe) the latter is much higher variance and also significantly affected by not having sufficient sleep. I moved from taking modafinil (which never helped me) to taking melatonin (which helps a lot), and always letting myself sleep in as much as I need. I think this has helped a lot.

Comment by ben-pace on EA Concepts: Share Impressions Before Credences · 2018-10-19T18:06:30.106Z · score: 2 (2 votes) · EA · GW

Yeah. As I've said before, it's good to be fully aware of what you understand, what model your inside view is using, and what credence it outputs, before/separate to any social updating of the decision-relevant credence. Or at least, this is the right thing to do if you want to have accurate models in the long run, rather than accurate decision-relevant credences in the short run.

Comment by ben-pace on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-09T06:53:53.965Z · score: 4 (4 votes) · EA · GW

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields.

Quite. I think that my model of Eli was setting the highest standard possible - not merely a good researcher, but a great one, the sort of person who can bring whole new paradigms/subfields into existence (Kahneman & Tversky, Von Neumann, Shannon, Einstein, etc), and then noting that because the tails come apart (aka regressional goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers (I realise that probably wasn't true for Von Neumann, but I think it was true for all the others).

Comment by ben-pace on 500 Million, But Not A Single One More · 2018-09-14T01:39:59.921Z · score: 2 (2 votes) · EA · GW

It is remarkable what humans can do when we think carefully and coordinate.

This short essay inspires me to work harder for the things I care about. Thank you for writing it.

Comment by ben-pace on Additional plans for the new EA Forum · 2018-09-12T06:24:58.650Z · score: 1 (1 votes) · EA · GW

Yeah, this matches my personal experience a bunch. I'm planning to look into this literature sometime soon, but I'd be interested to know if anyone has strong opinions about what first-principles model best fits with the existing work in this area.

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-09-10T22:33:54.614Z · score: 3 (3 votes) · EA · GW

I don't have the time to join the debate, but I'm pretty sure Dunja's point isn't "I know that OpenPhil's strategy is bad" but "Why does everyone around here act as though it is knowable that their strategy is good, given their lack of transparency?" It seems like people act as though OpenPhil's strategy is good, and aren't massively confused / explicitly clear that they don't have the info required to assess the strategy.

Dunja, is that accurate?

(Small note: I'd been meaning to try to read the two papers you linked me to above a couple of months ago, about continental drift and whatnot, but I couldn't get non-paywalled versions. If you have them, or could send them to me at gmail.com preceded by 'benitopace', I'd appreciate that.)

Comment by ben-pace on Wrong by Induction · 2018-09-07T18:58:21.535Z · score: 5 (5 votes) · EA · GW

Is this a real quote from Kant?

The usual touchstone, whether that which someone asserts is merely his persuasion — or at least his subjective conviction, that is, his firm belief — is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him.

Seriously though? I feel like we should've shouted this from the rooftops if it were so. This is an awesome quote. Where exactly is it from / did you find it?

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-07T16:06:35.839Z · score: 2 (2 votes) · EA · GW

I also have never read anything on Felicifia.org (but would like to)! If there's anything easy to link to, I'd be interested to have a read through any archived content that you thought was especially good / novel / mind-changing.

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-07T04:44:57.557Z · score: 2 (2 votes) · EA · GW

I never read Nick's thesis. I'm curious if there are particular sections you can point to that might give me a sense of why it was influential on you? I have a vague sense that it's primarily mathematical population ethics calculations or something, and I'm guessing I might be wrong.

Comment by ben-pace on Which piece got you more involved in EA? · 2018-09-06T20:25:48.537Z · score: 6 (3 votes) · EA · GW

The LessWrong Sequences really changed the way I think (after first reading posts like Epistemologies of Reckless Endangerment on Luke Muehlhauser's Common Sense Atheism). If I think back to the conversations I had as a teenager in school and the general frameworks I still use today, the posts that were most influential on me were (starting with the most important):

And then later reading HPMOR was a Big Deal, for really feeling what it would be like to act throughout my life, in accordance with these models. Those things I think were the biggest reading experiences for me (and they were some of the most influential things on my decisions and how I live my life). Everything in EA felt very natural to me after that.

Comment by ben-pace on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-23T19:46:12.527Z · score: 4 (6 votes) · EA · GW

I think that one of the constraints that is faced here is a lack of experienced grantmakers who have a good knowledge of x-risk and the EA community.

I'm not sure I agree that this constraint is real; I think I probably know a lot of good people who I'd trust to be competent EA / x-risk grantmakers, but I certainly haven't spent 10 hours thinking about what the key qualities for the role are, and it's plausible that I'd find there are far fewer competent-enough people than I currently think.

But if there were more grant managers, I think I'd disagree with the costs you describe. Two or more grantmakers acting on their own, different, first-principles models seems great to me, and seems to increase the likelihood of good grantmaking occurring rather than increasing tension or anything. Competition is really rare and valuable in domains like this.

Comment by ben-pace on EA Forum 2.0 Initial Announcement · 2018-07-23T12:50:21.172Z · score: 5 (5 votes) · EA · GW

Yup, we actually already built this for LessWrong 2.0 (check it out on the frontpage, where each post says how many minutes reading it is), and so you'll get them when the CEA team launches the new EA Forum 2.0.

Comment by ben-pace on EA Forum 2.0 Initial Announcement · 2018-07-22T19:18:00.121Z · score: 7 (7 votes) · EA · GW

I actually have made detailed notes on the first 65% of the book, and hope to write up some summaries of the chapters.

It’s a great work. To do the relevant literature reviews would likely have taken me 100s of hours, rather than the 10s to study the book. As with all social science, the conclusions from most of the individual studies are suspect, but I think it sets out some great and concrete models to start from and test against other data we have.

Added: I’m Ben Pace, from LessWrong.

Added2: I finished the book. Not sure when my priorities will allow me to turn it into blogposts, alas.

Comment by ben-pace on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-24T11:16:09.144Z · score: 6 (6 votes) · EA · GW

I’m such a big fan of “outreach is an offer, not persuasion”.

In general, my personal attitude to outreach in student groups is not to ‘get’ the best people via attraction and sales, but to just do something awesome that seems to produce value (e.g. build a research group around a question, organise workshops around a thinking tool, write a talk on a topic you’re confused about and want to discuss), and then the best people will join you on your quest. (Think quests, not sales.)

If your quest involves sales as a side-effect (e.g. you’re running an EAGx) then that’s okay, as long as the core of what you’re doing is trying to solve a real problem and make progress on an open question you have. Run EAGxes around a goal of moving the needle forward on certain questions, on making projects happen, solving some coordination problem in the community, or some other concrete problem-based metric. Not just “get more EAs”.

I think the reason this post (and all other writing on the topic) has had difficulty suggesting particular quests is that they tend to be deeply tied up in someone's psyche. Nonetheless I think this is what's necessary.

Comment by ben-pace on How to improve EA Funds · 2018-04-05T19:22:51.014Z · score: 1 (1 votes) · EA · GW

Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to it, not the marketing used to advertise it - which does indeed feel disconnected. It seems that there's a confusing attempt to make this seem reasonable to everyone whilst in fact not offering the sort of evidence that should make it so.

The evidence for it is not the 'evidence-backed charities' kind that made GiveWell famous/trustworthy, but rather "here is a high-status person in a related field who has a strong connection to EA", which is not that different from the way other communities ask their members to give funding - it's based on trust in the community's leaders, not on metrics that outsiders can objectively verify. So you should ask yourself what causes you to trust CEA and then use that, as opposed to the objective metrics associated with the EA Funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and has also made good hiring decisions, that would make you trust the grant managers more.

Comment by ben-pace on How to improve EA Funds · 2018-04-05T00:52:10.328Z · score: 5 (5 votes) · EA · GW

Note: EA is totally a trust network - I don't think the funds are trying to be anything like GiveWell, who you're supposed to trust based on the publicly-verifiable rigour of their research. EA funds is much more toward the side of the spectrum of "have you personally seen CEA make good decisions in this area" or "do you specifically trust one of the re-granters". Which is fine, trust is how tightly-knit teams and communities often get made. But if you gave to it thinking "this will look like if I give to Oxfam, and will have the same accountability structure" then you'll correctly be surprised to find out it works significantly via personal connections.

The same way you'd only fund a startup if you knew them and how they worked, you should probably only fund EA funds for similar reasons - and if the startup tried to make its business plan such that anyone would have reason to fund it, the business plan probably wouldn't be very good. I think that EA should continue to be a trust-based network, and so on the margin I guess people should give less to EA funds rather than EA funds make grants that are more defensible.

Comment by ben-pace on How to improve EA Funds · 2018-04-04T18:46:26.781Z · score: 12 (12 votes) · EA · GW

On trust networks: These are very powerful and effective. YCombinator, for example, say they get most of their best companies via personal recommendation, and the top VCs say that the best way to get funded by them is an introduction by someone they trust.

(Btw I got an EA Grant last year I expect in large part because CEA knew me because I successfully ran an EAGx conference. I think the above argument is strong on its own but my guess is many folks around here would like me to mention this fact.)

On things you can do with your money that are better than EA funds: personally I don’t have that much money, but with my excess I tend to do things like buy flights and give money to people I’ve made friends with who seem like they could get a lot of value from it (e.g. buy a flight to a CFAR workshop, fund them living somewhere to work on a project for 3 months, etc). This is the sort of thing only a small donor with personal connections can do, at least currently.

On EA grants:

Part of the early-stage projects grant support problem is it generally means investing into people. Investing in people needs either trust or lot of resources to evaluate the people (which is in some aspects more difficult than evaluating projects which are up and running)

Yes. If I were running EA grants I would continually be in contact with the community, finding out peoples project ideas, discussing it with them for 5 hours and getting to know them and how much I could trust them, and then handing out money as I saw fit. This is one of the biggest funding bottlenecks in the community. The place that seems most to have addressed them has actually been the winners of the donor lotteries, who seemed to take it seriously and use the personal information they had.

I haven’t even heard about EA grants this time around, which seems like a failure on all the obvious axes (including the one of letting grantees know that the EA community is a reliable source of funding that you can make multi-year plans around - this makes me mostly update toward EA grants being a one-off thing that I shouldn’t rely on).

Comment by ben-pace on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:09:04.040Z · score: 1 (1 votes) · EA · GW

Thanks Holden!

Comment by ben-pace on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T18:24:01.052Z · score: 8 (8 votes) · EA · GW

I’m pretty confused about the work of the RA role - it seems to include everything from epidemiological literature reviews to philosophical work on population ethics to following up on individual organisations you’ve funded.

Could you give some concrete info about how you and the RA determine what the RA works on?

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-03-02T13:39:52.957Z · score: 3 (3 votes) · EA · GW

Gotcha. I’ll probably wrap up with this comment, here’s my few last thoughts (all on the topic of building a research field):

(I’m commenting on phone, sorry if paragraphs are unusually long, if they are I’ll try to add more breaks later.)

  • Your list of things that OpenPhil could do (e.g. specify the exact questions this new field is trying to solve, or describe what a successful project in this field should accomplish in five years) sounds really excellent. I do not think these things are at all easy in this case, however.
  • I think one of the things that makes Alignment a difficult problem (and is the sort of thing you might predict if something were correctly in the reference class of ‘biggest problem for humanity’) is that there is not agreement on what research in the field should look like, or even formal specification of the questions - it is in a pre-paradigmatic stage. It took Eliezer 3 years of writing to convey some of the core intuitions, and even then that only worked for a small set of people. I believe Paul Christiano has not written a broadly understandable description of his research plans for similar reasons.
  • However, I’m strongly in agreement that this would be awesome for the field. I recently realised how much effort MIRI themselves have put into trying to set up the basic questions of the field, even though it’s not been successful so far. I can imagine that doing so would be a significant success marker for any AI Alignment researcher group that OpenPhil funds, and it’s something I think about working on myself from time to time.
  • I have a different feeling from you regarding the funding/writing ratio. I feel that OpenPhil's reasons for funding MIRI are basically all in the first write-up, and the subsequent (short) write-up contains just the variables that are now different.
  • In particular, they do say this typically wouldn't be sufficient for funding a research org, but given the many other positive signs in the first write-up, it was sufficient to 2.5x the grant amount ($500k/year to $1.25M/year). I think this is similar to the grant amounts given to various other grantees in this area, and also much smaller than the total amount OpenPhil is interested in putting into this area (so it doesn't seem a surprising amount to me).
  • I see this as a similar problem for the other grants to more ‘mainstream’ AI Alignment researchers OpenPhil Funds; it’s not clear to me that they’re working on the correct technical problems either, because the technical problems have not been well specified, because they’re difficult to articulate.
  • My broad strokes thoughts again are that, when you choose to make grants that your models say have the chance of being massive hits, you just will look like you’re occasionally making silly mistakes, even once people take into account that this is what to expect you to look like. Given my personally having spent a bunch of time thinking about MIRI’s work, I have an idea of what models OpenPhil has built that are hard to convey, but it seems reasonable to me that in your epistemic position this looks like a blunder. I think that OpenPhil probably knew it would look like this to some, and decided to make the call anyway.

Final note: of your initial list of three things, the open call for research is the one I think is least useful for OpenPhil. When you're funding at this scale in any field, the thought is not "what current ideas do people have that I should fund?" but "what new incentives can I add to this field?" And when you're adding new incentives that are not those that already exist, it's useful to spend time initially talking a lot with the grantees to make sure they truly understand your models (and you theirs), so that the correct models and incentives are propagated.

For example, I think if OpenPhil had announced a $100 grant scheme for Alignment research, many existing teams would've explained why their research already is this, and started using these terms, and it would've impeded the ability to build the intended field. I think this is why, even in cause areas like criminal justice and farm animal welfare, OpenPhil has chosen to advertise less and instead open 1-1 lines of communication with orgs they think are promising.

Letting e.g. a criminal justice org truly understand what you care about, and what sorts of projects you are and aren’t willing to fund, helps them plan accordingly for the future (as opposed to going along as usual and then suddenly finding out you aren’t interested in funding them any more). I think the notion that they’d be able to succeed by announcing a call for grants to solve a problem X, is too simplistic a view of how models propagate; in general to cross significant inferential gaps you need (on the short end) several extensive 1-1 conversations, and (on the longer end) textbooks with exercises.

Added: More generally, how many people you can fund quickly to do work is a function of how inferentially far you are away from the work that the people you hope to fund are already doing.

(On the other hand, you want to fund them well to signal to the rest of the field that there is real funding here if they provide what you're looking for. I'm not sure exactly how to make that tradeoff.)

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-03-01T16:07:03.610Z · score: 3 (5 votes) · EA · GW

Ah, I see. Thanks for responding.

I notice until now I’ve been conflating whether the OpenPhil grant-makers themselves should be a committee, versus whether they should bring in a committee to assess the researchers they fund. I realise you’re talking about the latter, while I was talking about the former. Regarding the latter (in this situation) here is what my model of a senior staff member at OpenPhil thinks in this particular case of AI.

If they were attempting to make grants in a fairly mainstream area of research (e.g. transfer learning on racing games), then they would absolutely have wanted to use a panel when considering some piece of research. However, OpenPhil is attempting to build a novel research field that is not very similar to existing fields. One of the big things OpenPhil has changed its mind about in the past few years is going from believing there was expert consensus in AI that AGI would not be a big problem, to believing that there is no relevant expert class on the topic of forecasting AGI capabilities and timelines; the expert class most people think of (ML researchers) is much better at assessing the near-term practicality of ML research.

As such, there was not a relevant expert class in this case, and OpenPhil picked an unusual method of determining whether to give the grant (that heavily included variables such as the fact that MIRI has a strong track record of thinking carefully about long-term AGI related issues). I daresay MIRI and OpenPhil would not expect MIRI to pass the test you are proposing, because they are trying to do something qualitatively different than anything currently going on in the field.

Does that feel like it hits the core point you care about?


If that does resolve your confusion about OpenPhil’s decision, I will further add:

If your goal is to try to identify good funding opportunities, then we are in agreement: the fact that OpenPhil has funded an organisation (plus the associated write-up about why) is commonly not sufficient information to persuade me that it's sufficiently cost-effective that I should donate to it over, say, a GiveWell top charity.

If your goal however is to figure out whether OpenPhil as an organisation is in general epistemically sound, I would look to variables other than the specific grants where the reasoning is least transparent and looks the most wrong. The main reason I have an unusually high amount of trust in OpenPhil's decisions comes from seeing other positive epistemic signs from its leadership and key research staff, not from assessing a single grant datapoint. My model of OpenPhil's competence instead weights more heavily:

  • Their hiring process
  • Their cause selection process
  • The research I've seen from their key researchers (e.g. Moral Patienthood, Crime Stats Replication)
  • Significant epistemic signs from the leadership (e.g. Three Key Things I've Changed My Mind About, building GiveWell)
  • When assessing the grant-making in a particular cause, the particular program manager and what their output has been like.

Personally in the first four cases, I’ve seen remarkably strong positive evidence. Regarding the latter I actually haven’t got much evidence, the individual program managers do not tend to publish much. Overall I’m very impressed with OpenPhil as an org.

(I'm about to fly on a plane, can find more links to back up some claims later.)

Comment by ben-pace on How effective and efficient is the funding policy of Open Philanthropy concerning projects on AI risks? · 2018-02-28T01:56:39.554Z · score: 17 (17 votes) · EA · GW

I think this stems from a confusion about how OpenPhil works. In their essay Hits-Based Giving, written in early 2016, they list some of the ways they go about philanthropy in order to maximise their chance of a big hit (even while many of their grants may look unlikely to work). Here are two principles most relevant to your post above:

We don’t: expect to be able to fully justify ourselves in writing. Explaining our opinions in writing is fundamental to the Open Philanthropy Project’s DNA, but we need to be careful to stop this from distorting our decision-making. I fear that when considering a grant, our staff are likely to think ahead to how they’ll justify the grant in our public writeup and shy away if it seems like too tall an order — in particular, when the case seems too complex and reliant on diffuse, hard-to-summarize information. This is a bias we don’t want to have. If we focused on issues that were easy to explain to outsiders with little background knowledge, we’d be focusing on issues that likely have broad appeal, and we’d have more trouble focusing on neglected areas.

A good example is our work on macroeconomic stabilization policy: the issues here are very complex, and we’ve formed our views through years of discussion and engagement with relevant experts and the large body of public argumentation. The difficulty of understanding and summarizing the issue is related, in my view, to why it is such an attractive cause from our perspective: macroeconomic stabilization policy is enormously important but quite esoteric, which I believe explains why certain approaches to it (in particular, approaches that focus on the political environment as opposed to economic research) remain neglected.

[...]

A core value of ours is to be open about our work. But “open” is distinct from “documenting everything exhaustively” or “arguing everything convincingly.” More on this below.

And

We don’t: avoid the superficial appearance — accompanied by some real risk — of being overconfident and underinformed.

When I picture the ideal philanthropic “hit,” it takes the form of supporting some extremely important idea, where we see potential while most of the world does not. We would then provide support beyond what any other major funder could in order to pursue the idea and eventually find success and change minds.

In such situations, I’d expect the idea initially to be met with skepticism, perhaps even strong opposition, from most people who encounter it. I’d expect that it would not have strong, clear evidence behind it (or to the extent it did, this evidence would be extremely hard to explain and summarize), and betting on it therefore would be a low-probability play. Taking all of this into account, I’d expect outsiders looking at our work to often perceive us as making a poor decision, grounded primarily in speculation, thin evidence and self-reinforcing intellectual bubbles. I’d therefore expect us to appear to many as overconfident and underinformed. And in fact, by the nature of supporting an unpopular idea, we would be at risk of this being true, no matter how hard we tried (and we should try hard) to seek out and consider alternative perspectives.

In your post, you argue that OpenPhil should follow a grant algorithm that includes

  • Considerations not just of a project's importance, but also its tractability
  • A panel of experts to confirm tractability
  • Only grantees with a strong publication record
  • You also seem to claim that this methodology is the expert consensus of the field of philanthropic funding, a claim for which you do not give any link/citation (?).

Responding in order:

  • The framework in EA of 'scope, tractability and neglectedness' was in fact developed by Holden Karnofsky (the earliest place I know of it being written down is in this GiveWell blogpost) so it was very likely in the grant-maker's mind.
  • This is actually contrary to how OpenPhil works: they attempt to give single individuals a lot of grant-making judgement. This fits with my general expectation of how good decision-making works: don't have a panel, but have a single individual who is rewarded based on their output (unfortunately OpenPhil's work is sufficiently long-term that it's hard to set up local incentives, though an interesting financial setup for the program managers would be one where, should they get a win of sufficient magnitude in the next 10 years (e.g. averting a global catastrophic risk), they get a $10 million bonus). But yeah, I believe that in general a panel cannot create common knowledge of the deep models an individual has, and can in many cases be worse than an individual.
  • A strong publication record seems like a great thing. Given the above anti-principles, it's not inconsistent that they should fund someone without it, and so I assume the grant-maker felt they had sufficiently strong evidence in this situation.
  • I've seen OpenPhil put a lot of work into studying the history of philanthropy, and funding research about it. I don't think the expert consensus is as strong as you make it out to be, and would want to see more engagement with the arguments OpenPhil has made before I would believe such a conclusion.

OpenPhil does have improving the global conversation about philanthropy as one of its goals, which is one of the reasons the staff spend so much time writing down their models and reasons (example, meta-example). In general it seems to me that 'panels' are the sort of thing an organisation develops when it's trying to make defensible decisions, as in politics. I tend to see OpenPhil's primary goal here as communicating its core beliefs to those interested in (a) helping OpenPhil understand things better or (b) using the info to inform their own decisions, rather than broadcasting every possible detail in a defensible way (especially if that's costly in terms of time).

Comment by ben-pace on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2017-12-31T00:26:05.513Z · score: 0 (0 votes) · EA · GW

I copied this exchange to my blog, and there were an additional bunch of interesting comments there.

Comment by ben-pace on Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans · 2017-12-27T00:27:32.501Z · score: 2 (2 votes) · EA · GW

Examples are totally worth digging into! Yeah, I actually find myself surprised and slightly confused by the situation with Einstein, and do make the active prediction that he had some strong connections in physics (e.g. at some point he had a really great physics teacher who'd done some research). In general I think Ramanujan-like stories of geniuses appearing from nowhere are not the typical example of great thinkers / people who significantly change the world. If I'm right I should be able to tell such stories about the others, and in general I do think that great people tend to get networked together, and that the thinking patterns of the greatest people are noticed by other good people before they do their seminal work - cf. Bell Labs (Shannon/Feynman/Turing etc), the PayPal Mafia (Thiel/Musk/Hoffman/Nosek etc), SL4 (Hanson/Bostrom/Yudkowsky/Legg etc), and maybe the Republic of Letters during the Enlightenment. But I do want to spend more time digging into some of those.

To approach from the other end, what heuristics might I use to find people who in the future will create massive amounts of value that others miss? One example heuristic that Y Combinator uses to determine who in advance is likely to find novel, deep mines of value that others have missed is whether the individuals regularly build things to fix problems in their life (e.g. Zuckerberg built lots of simple online tools to help his fellow students study while at college).

Some heuristics I use to tell whether I think people are good at figuring out what's true, and make plans for it, include:

  • Does the person, in conversation, regularly take long silent pauses to organise their thoughts, find good analogies, analyse your argument, etc.? Many people I talk to treat silence as a significant social cost, due to awkwardness, and do not make the trade-off toward figuring out what's true. I always trust the people I talk to more when they make these small trade-offs toward truth over social cost.
  • Does the person have a history of executing long-term plans that weren't incentivised by their local environment? Did they decide a personal-project (not, like, getting a degree) was worth putting 2 years into, and then put 2 years into it?
  • When I ask about a non-standard belief they have, can they give me a straightforward model, with a few variables and simple relations, that they use to understand the topic we're discussing? In general, how transparent are their models to themselves, and are the models generally simple and backed by lots of little pieces of concrete evidence?
  • Are they good at finding genuine insights in the thinking of people who they believe are totally wrong?

My general thought is that there isn't actually a lot of optimisation process put into this, especially in areas that don't have institutions built around them exactly. For example academia will probably notice you if you're very skilled in one discipline and compete directly in it, but it's very hard to be noticed if you're interdisciplinary (e.g. Robin Hanson's book sitting between neuroscience and economics) or if you're not competing along even just one or two of the dimensions it optimises for (e.g. MIRI researchers don't optimise for publishing basically at all, so when they make big breakthroughs in decision theory and logical induction it doesn't get them much notice from standard academia). So even our best institutions at noticing great thinkers with genuine and valuable insights seem to fail at some of the examples that seem most important. I think there is lots of low hanging fruit I can pick up in terms of figuring out who thinks well and will be able to find and mine deep sources of value.


Edit: Removed Bostrom as an example at the end, because I can't figure out whether his success in academia, despite going through something of a non-standard path, is evidence for or against academia's ability to figure out whose cognitive processes are best at figuring out what's surprising+true+useful. I have the sense that he had to push against the standard incentive gradients a lot, but I might just be wrong and Bostrom is one of academia's success stories this generation. He doesn't look like he simply rose to the top of a well-defined field, though; it looks like he kept having to pick which topics were important and then find some route to publishing on them, as opposed to the other way round.